Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
123,626 | 17,772,276,751 | IssuesEvent | 2021-08-30 14:55:25 | kapseliboi/ac-web | https://api.github.com/repos/kapseliboi/ac-web | opened | CVE-2020-15084 (High) detected in express-jwt-5.3.0.tgz | security vulnerability | ## CVE-2020-15084 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>express-jwt-5.3.0.tgz</b></summary>
<p>JWT authentication middleware.</p>
<p>Library home page: <a href="https://registry.npmjs.org/express-jwt/-/express-jwt-5.3.0.tgz">https://registry.npmjs.org/express-jwt/-/express-jwt-5.3.0.tgz</a></p>
<p>Path to dependency file: ac-web/package.json</p>
<p>Path to vulnerable library: /node_modules/express-jwt/package.json</p>
<p>
Dependency Hierarchy:
- :x: **express-jwt-5.3.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/ac-web/commit/dfced36be0641d32ba1dbfcdd9969dd354b300c5">dfced36be0641d32ba1dbfcdd9969dd354b300c5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In express-jwt (NPM package) up to and including version 5.3.3, the algorithms entry to be specified in the configuration is not being enforced. When algorithms is not specified in the configuration, in combination with jwks-rsa, this may lead to an authorization bypass. You are affected by this vulnerability if all of the following conditions apply:
- You are using express-jwt.
- You do not have **algorithms** configured in your express-jwt configuration.
- You are using libraries such as jwks-rsa as the **secret**.

You can fix this by specifying **algorithms** in the express-jwt configuration. See the linked GHSA for an example. This is also fixed in version 6.0.0.
<p>Publish Date: 2020-06-30
<p>URL: <a href="https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15084">CVE-2020-15084</a></p>
</p>
</details>
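The bypass described above, where the verifier trusts the token's own `alg` header instead of a server-side allow-list, can be sketched in pure Python. This is a toy illustration of the algorithm-confusion pattern, not express-jwt's actual code; the simplified token format and every name below are assumptions:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def b64url_decode(data: bytes) -> bytes:
    return base64.urlsafe_b64decode(data + b"=" * (-len(data) % 4))

def sign_hs256(payload: dict, key: bytes) -> bytes:
    """Issue a JWT-like token signed with HMAC-SHA256."""
    header = b64url(json.dumps({"alg": "HS256"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(key, header + b"." + body, hashlib.sha256).digest()
    return header + b"." + body + b"." + b64url(sig)

def verify_naive(token: bytes, key: bytes) -> bool:
    """BAD: trusts the attacker-controlled "alg" field, like an
    express-jwt configuration with no `algorithms` option."""
    header_b64, body_b64, sig_b64 = token.split(b".")
    header = json.loads(b64url_decode(header_b64))
    if header["alg"] == "none":
        return True  # a forged, unsigned token sails through
    expected = hmac.new(key, header_b64 + b"." + body_b64, hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig_b64)

def verify_strict(token: bytes, key: bytes, algorithms: list) -> bool:
    """GOOD: only explicitly allowed algorithms are accepted, which is
    what pinning `algorithms` in the configuration achieves."""
    header_b64, body_b64, sig_b64 = token.split(b".")
    header = json.loads(b64url_decode(header_b64))
    if header["alg"] not in algorithms:
        return False
    expected = hmac.new(key, header_b64 + b"." + body_b64, hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig_b64)
```

Pinning `algorithms` in the express-jwt configuration, as the suggested fix recommends, plays the role of `verify_strict` here: the verifier, not the token, decides which algorithms are acceptable.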
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/auth0/express-jwt/security/advisories/GHSA-6g6m-m6h5-w9gf">https://github.com/auth0/express-jwt/security/advisories/GHSA-6g6m-m6h5-w9gf</a></p>
<p>Release Date: 2020-06-30</p>
<p>Fix Resolution: 6.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-15084 (High) detected in express-jwt-5.3.0.tgz - ## CVE-2020-15084 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>express-jwt-5.3.0.tgz</b></summary>
<p>JWT authentication middleware.</p>
<p>Library home page: <a href="https://registry.npmjs.org/express-jwt/-/express-jwt-5.3.0.tgz">https://registry.npmjs.org/express-jwt/-/express-jwt-5.3.0.tgz</a></p>
<p>Path to dependency file: ac-web/package.json</p>
<p>Path to vulnerable library: /node_modules/express-jwt/package.json</p>
<p>
Dependency Hierarchy:
- :x: **express-jwt-5.3.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/ac-web/commit/dfced36be0641d32ba1dbfcdd9969dd354b300c5">dfced36be0641d32ba1dbfcdd9969dd354b300c5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In express-jwt (NPM package) up to and including version 5.3.3, the algorithms entry to be specified in the configuration is not being enforced. When algorithms is not specified in the configuration, in combination with jwks-rsa, this may lead to an authorization bypass. You are affected by this vulnerability if all of the following conditions apply:
- You are using express-jwt.
- You do not have **algorithms** configured in your express-jwt configuration.
- You are using libraries such as jwks-rsa as the **secret**.

You can fix this by specifying **algorithms** in the express-jwt configuration. See the linked GHSA for an example. This is also fixed in version 6.0.0.
<p>Publish Date: 2020-06-30
<p>URL: <a href="https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15084">CVE-2020-15084</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/auth0/express-jwt/security/advisories/GHSA-6g6m-m6h5-w9gf">https://github.com/auth0/express-jwt/security/advisories/GHSA-6g6m-m6h5-w9gf</a></p>
<p>Release Date: 2020-06-30</p>
<p>Fix Resolution: 6.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in express jwt tgz cve high severity vulnerability vulnerable library express jwt tgz jwt authentication middleware library home page a href path to dependency file ac web package json path to vulnerable library node modules express jwt package json dependency hierarchy x express jwt tgz vulnerable library found in head commit a href found in base branch master vulnerability details in express jwt npm package up and including version the algorithms entry to be specified in the configuration is not being enforced when algorithms is not specified in the configuration with the combination of jwks rsa it may lead to authorization bypass you are affected by this vulnerability if all of the following conditions apply you are using express jwt you do not have algorithms configured in your express jwt configuration you are using libraries such as jwks rsa as the secret you can fix this by specifying algorithms in the express jwt configuration see linked ghsa for example this is also fixed in version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
30,880 | 6,336,485,305 | IssuesEvent | 2017-07-26 21:11:56 | extnet/Ext.NET | https://api.github.com/repos/extnet/Ext.NET | closed | Ext.net.DropDownField: invalid KeyNav initialization | 4.x defect sencha | Due to a breaking change introduced in ExtJS 6.5.0, the Ext.util.KeyNav class no longer accepts the target as its constructor's first parameter. As Ext.NET uses KeyNav with the Ext.net.DropDownField client-side component implementation, this is now broken on Ext.NET, and the resulting error prevents pages from loading.
While the Ext.util.KeyNav is a breaking change, fixing Ext.net.DropDownField client-side code wouldn't introduce any breaking change. | 1.0 | Ext.net.DropDownField: invalid KeyNav initialization - Due to a breaking change introduced in ExtJS 6.5.0, the Ext.util.KeyNav class no longer accepts the target as its constructor's first parameter. As Ext.NET uses KeyNav with the Ext.net.DropDownField client-side component implementation, this is now broken on Ext.NET, and the resulting error prevents pages from loading.
While the Ext.util.KeyNav is a breaking change, fixing Ext.net.DropDownField client-side code wouldn't introduce any breaking change. | defect | ext net dropdownfield invalid keynav initialization due to a breaking change introduced in extjs the ext util keynav class no longer accepts the target as its contructor s first parameter as ext net uses keynav with ext net dropdownfield client side component implementation this is now broken on ext net the error thus prevents pages from loading while the ext util keynav is a breaking change fixing ext net dropdownfield client side code wouldn t introduce any breaking change | 1 |
267,306 | 23,291,023,223 | IssuesEvent | 2022-08-05 22:59:44 | danbudris/vulnerabilityProcessor | https://api.github.com/repos/danbudris/vulnerabilityProcessor | opened | MEDIUM vulnerability CVE-2019-12900 - bzip2-libs affecting 1 resources | hey there test severity/MEDIUM | Issue auto cut by Vulnerability Processor
Processor Version: `v0.0.0-dev`
Message Source: `EventBridge`
Finding Source: `inspectorV2`
MEDIUM vulnerability CVE-2019-12900 detected in 1 resources
- arn:aws:ecr:us-west-2:338155784195:repository/test-inspector/sha256:7585bd31388fb7584260436e613c871868fd1509a728bf0c60bfe3f792e43aff
Affected Packages:
- bzip2-libs
Associated Pull Requests:
- https://github.com/danbudris/vulnerabilityProcessor/pull/1290
| 1.0 | MEDIUM vulnerability CVE-2019-12900 - bzip2-libs affecting 1 resources - Issue auto cut by Vulnerability Processor
Processor Version: `v0.0.0-dev`
Message Source: `EventBridge`
Finding Source: `inspectorV2`
MEDIUM vulnerability CVE-2019-12900 detected in 1 resources
- arn:aws:ecr:us-west-2:338155784195:repository/test-inspector/sha256:7585bd31388fb7584260436e613c871868fd1509a728bf0c60bfe3f792e43aff
Affected Packages:
- bzip2-libs
Associated Pull Requests:
- https://github.com/danbudris/vulnerabilityProcessor/pull/1290
| non_defect | medium vulnerability cve libs affecting resources issue auto cut by vulnerability processor processor version dev message source eventbridge finding source medium vulnerability cve detected in resources arn aws ecr us west repository test inspector affected packages libs associated pull requests | 0 |
20,226 | 6,008,336,088 | IssuesEvent | 2017-06-06 07:31:56 | bretsky/Spark | https://api.github.com/repos/bretsky/Spark | closed | Optimise for higher framerates | code | Framerates are currently around 50-55, or 55-60 without printing. The framerates should be able to sustain 60 on JARVIS-Prime, and sustain 30 FPS on an average computer. Rewrite game using OpenGL to optimise speed and utilise GPU.
| 1.0 | Optimise for higher framerates - Framerates are currently around 50-55, or 55-60 without printing. The framerates should be able to sustain 60 on JARVIS-Prime, and sustain 30 FPS on an average computer. Rewrite game using OpenGL to optimise speed and utilise GPU.
| non_defect | optimise for higher framerates framerates are currently around or without printing the framerates should be able to sustain on jarvis prime and sustain fps on an average computer rewrite game using opengl to optimise speed and utilise gpu | 0 |
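Sustaining a target framerate is as much a frame-pacing problem as a raw-speed one. Below is a minimal frame-pacing sketch in Python (illustrative only, not from the Spark codebase; the function names and the injectable clock are assumptions):

```python
import time

def frame_budget(target_fps: float) -> float:
    """Seconds available per frame at the target framerate."""
    return 1.0 / target_fps

def pace_frame(frame_start: float, target_fps: float,
               now=time.perf_counter, sleep=time.sleep) -> float:
    """Sleep off whatever remains of this frame's time budget and return
    the next frame's start timestamp.  `now` and `sleep` are injectable
    so the pacing logic can be tested with a fake clock."""
    remaining = frame_start + frame_budget(target_fps) - now()
    if remaining > 0:
        sleep(remaining)
    return now()
```

A render loop would call `frame_start = pace_frame(frame_start, 60.0)` at the end of every iteration; when rendering overruns the roughly 16.7 ms budget, `remaining` is negative and the loop simply runs unthrottled.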
410,079 | 11,982,754,317 | IssuesEvent | 2020-04-07 13:27:19 | wazuh/wazuh-kibana-app | https://api.github.com/repos/wazuh/wazuh-kibana-app | closed | Apply the right sorting depending on the list | UI/UX enhancement priority/medium | | Wazuh | Elastic | Rev |
|-----------|----------|--------|
| 3.x | 6.x/7.x | -- |
**Description**
Some tables and lists have a default value for the initial sorting that does not always make sense.
**Steps to reproduce**
Open a section with a table, use the agents search bar, there are a lot of examples in the app.
**Screenshots**
Agents search bar:

Checks list from an SCA policy:

| 1.0 | Apply the right sorting depending on the list - | Wazuh | Elastic | Rev |
|-----------|----------|--------|
| 3.x | 6.x/7.x | -- |
**Description**
Some tables and lists have a default value for the initial sorting that does not always make sense.
**Steps to reproduce**
Open a section with a table, use the agents search bar, there are a lot of examples in the app.
**Screenshots**
Agents search bar:

Checks list from an SCA policy:

| non_defect | apply the right sorting depending on the list wazuh elastic rev x x x description some tables and lists have a default value for the initial sorting that not always makes sense steps to reproduce open a section with a table use the agents search bar there are a lot of examples in the app screenshots agents search bar checks list from an sca policy | 0 |
243,656 | 7,860,353,888 | IssuesEvent | 2018-06-21 19:40:31 | broadinstitute/gatk | https://api.github.com/repos/broadinstitute/gatk | opened | When Funcotator is run in the mutect2.wdl, it should output MAF, not VCF (by default) | Funcotator FuncotatorBetaBlocker Mutect PRIORITY_HIGH wdl |
## Feature request
### Tool(s) or class(es) involved
Funcotator (and M2)
### Description
The mutect2 WDL files run Funcotator with MAF output. This happens when Funcotator is turned on.
| 1.0 | When Funcotator is run in the mutect2.wdl, it should output MAF, not VCF (by default) -
## Feature request
### Tool(s) or class(es) involved
Funcotator (and M2)
### Description
The mutect2 WDL files run Funcotator with MAF output. This happens when Funcotator is turned on.
| non_defect | when funcotator is run in the wdl it should output maf not vcf by default feature request tool s or class es involved funcotator and description the wdl files run funcotator with maf output this happens when funcotator is turned on | 0 |
36,981 | 8,198,671,511 | IssuesEvent | 2018-08-31 17:15:05 | google/googletest | https://api.github.com/repos/google/googletest | closed | Include BUILD for building with bazel | Priority-Medium Type-Defect auto-migrated | ```
googletest ought to provide a BUILD file to allow it to be included without
modification in a project's bazel workspace.
```
Original issue reported on code.google.com by `mrdomino` on 4 Jul 2015 at 3:57
| 1.0 | Include BUILD for building with bazel - ```
googletest ought to provide a BUILD file to allow it to be included without
modification in a project's bazel workspace.
```
Original issue reported on code.google.com by `mrdomino` on 4 Jul 2015 at 3:57
| defect | include build for building with bazel googletest ought to provide a build file to allow it to be included without modification in a project s bazel workspace original issue reported on code google com by mrdomino on jul at | 1 |
19,570 | 3,226,833,175 | IssuesEvent | 2015-10-10 16:56:19 | scipy/scipy | https://api.github.com/repos/scipy/scipy | closed | Dopri5 ODE integrator with step-size control | defect scipy.integrate | I am trying to solve a simple example with the `dopri5` integrator in `scipy.integrate.ode`. As the documentation states
> This is an explicit runge-kutta method of order (4)5 due to Dormand & Prince (with stepsize control and dense output).
this should work. So here is my example:
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
def MassSpring_with_force(t, state, f):
    """ Simple 1DOF dynamics model: m ddx(t) + k x(t) = f(t)"""
    # unpack the state vector
    x = state[0]
    xd = state[1]
    # these are our constants
    k = 2.5 # Newtons per metre
    m = 1.5 # Kilograms
    # compute acceleration xdd
    xdd = ( ( -k*x + f) / m )
    # return the two state derivatives
    return [xd, xdd]
def force(t):
    """ Excitation force """
    f0 = 1 # force amplitude [N]
    freq = 20 # frequency[Hz]
    omega = 2 * np.pi *freq # angular frequency [rad/s]
    return f0 * np.sin(omega*t)
# Time range
t_start = 0
t_final = 1
# Main program
state_ode_f = ode(MassSpring_with_force)
state_ode_f.set_integrator('dopri5', rtol=1e-4, nsteps=500,
                           first_step=1e-6, max_step=1e-1, verbosity=True)
state2 = [0.0, 0.0] # initial conditions
state_ode_f.set_initial_value(state2, 0)
state_ode_f.set_f_params(force(0))
sol = np.array([[t_start, state2[0], state2[1]]], dtype=float)
print("Time\t\t Timestep\t dx\t\t ddx\t\t state_ode_f.successful()")
while state_ode_f.successful() and state_ode_f.t < (t_final):
    state_ode_f.set_f_params(force(state_ode_f.t))
    state_ode_f.integrate(t_final, step=True)
    sol = np.append(sol, [[state_ode_f.t, state_ode_f.y[0], state_ode_f.y[1]]], axis=0)
    print("{0:0.8f}\t {1:0.4e} \t{2:10.3e}\t {3:0.3e}\t {4}".format(
        state_ode_f.t, sol[-1, 0]- sol[-2, 0], state_ode_f.y[0], state_ode_f.y[1], state_ode_f.successful()))
The result I get is:
Time Timestep dx ddx state_ode_f.successful()
1.00000000 1.0000e+00 0.000e+00 0.000e+00 True
Hence, only one time-step is computed which is obviously incorrect.
This works with `vode` and `zvode` integrators | 1.0 | Dopri5 ODE integrator with step-size control - I am trying to solve a simple example with the `dopri5` integrator in `scipy.integrate.ode`. As the documentation states
> This is an explicit runge-kutta method of order (4)5 due to Dormand & Prince (with stepsize control and dense output).
this should work. So here is my example:
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
def MassSpring_with_force(t, state, f):
    """ Simple 1DOF dynamics model: m ddx(t) + k x(t) = f(t)"""
    # unpack the state vector
    x = state[0]
    xd = state[1]
    # these are our constants
    k = 2.5 # Newtons per metre
    m = 1.5 # Kilograms
    # compute acceleration xdd
    xdd = ( ( -k*x + f) / m )
    # return the two state derivatives
    return [xd, xdd]
def force(t):
    """ Excitation force """
    f0 = 1 # force amplitude [N]
    freq = 20 # frequency[Hz]
    omega = 2 * np.pi *freq # angular frequency [rad/s]
    return f0 * np.sin(omega*t)
# Time range
t_start = 0
t_final = 1
# Main program
state_ode_f = ode(MassSpring_with_force)
state_ode_f.set_integrator('dopri5', rtol=1e-4, nsteps=500,
                           first_step=1e-6, max_step=1e-1, verbosity=True)
state2 = [0.0, 0.0] # initial conditions
state_ode_f.set_initial_value(state2, 0)
state_ode_f.set_f_params(force(0))
sol = np.array([[t_start, state2[0], state2[1]]], dtype=float)
print("Time\t\t Timestep\t dx\t\t ddx\t\t state_ode_f.successful()")
while state_ode_f.successful() and state_ode_f.t < (t_final):
    state_ode_f.set_f_params(force(state_ode_f.t))
    state_ode_f.integrate(t_final, step=True)
    sol = np.append(sol, [[state_ode_f.t, state_ode_f.y[0], state_ode_f.y[1]]], axis=0)
    print("{0:0.8f}\t {1:0.4e} \t{2:10.3e}\t {3:0.3e}\t {4}".format(
        state_ode_f.t, sol[-1, 0]- sol[-2, 0], state_ode_f.y[0], state_ode_f.y[1], state_ode_f.successful()))
The result I get is:
Time Timestep dx ddx state_ode_f.successful()
1.00000000 1.0000e+00 0.000e+00 0.000e+00 True
Hence, only one time-step is computed which is obviously incorrect.
This works with `vode` and `zvode` integrators | defect | ode integrator with step size control i am trying to solve a simple example with the integrator in scipy integrate ode as the documentation states this is an explicit runge kutta method of order due to dormand prince with stepsize control and dense output this should work so here is my example import numpy as np from scipy integrate import ode import matplotlib pyplot as plt def massspring with force t state f simple dynamics model m ddx t k x t f t unpack the state vector x state xd state these are our constants k newtons per metre m kilograms compute acceleration xdd xdd k x f m return the two state derivatives return def force t excitation force force amplitude freq frequency omega np pi freq angular frequency return np sin omega t time range t start t final main program state ode f ode massspring with force state ode f set integrator rtol nsteps first step max step verbosity true initial conditions state ode f set initial value state ode f set f params force sol np array dtype float print time t t timestep t dx t t ddx t t state ode f successful while state ode f successful and state ode f t t final state ode f set f params force state ode f t state ode f integrate t final step true sol np append sol state ode f y axis print t t t t format state ode f t sol sol state ode f y state ode f y state ode f successful the result i get is time timestep dx ddx state ode f successful true hence only one time step is computed which is obviously incorrect this works with vode and zvode integrators | 1 |
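The "stepsize control" in the quoted docstring can be illustrated with a much simpler embedded pair in pure Python. This is a toy analogue of what dopri5 does internally with its 4th/5th-order Dormand-Prince pair; the 1st/2nd-order Euler/Heun pair and the controller constants below are assumptions for illustration, not scipy's implementation. (As for the single-row output above: `step=True` is only honored by integrators that support stepping, such as `vode`; with `dopri5` the call appears to integrate straight to `t_final`, and `set_solout` is the usual way to observe dopri5's internal steps.)

```python
def integrate_adaptive(f, t0, y0, t_end, rtol=1e-6, h0=1e-3):
    """Adaptive integration of dy/dt = f(t, y) with an embedded
    Euler (1st order) / Heun (2nd order) pair.  The gap between the
    two estimates is the local error estimate; the controller grows
    or shrinks the step h to keep that gap near the tolerance."""
    t, y, h = t0, y0, h0
    accepted = [t0]
    while t < t_end:
        h = min(h, t_end - t)          # don't overshoot the end point
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_euler = y + h * k1
        y_heun = y + 0.5 * h * (k1 + k2)
        err = abs(y_heun - y_euler)
        tol = rtol * max(1.0, abs(y_heun))
        if err <= tol:                 # accept the step
            t, y = t + h, y_heun
            accepted.append(t)
        if err == 0.0:
            h *= 2.0
        else:                          # 0.9 is the usual safety factor
            h *= min(5.0, max(0.2, 0.9 * (tol / err) ** 0.5))
    return accepted, y
```

For `f = lambda t, y: y` on [0, 1] this takes several hundred accepted steps (not one) and lands well within 1e-3 of `e`, which is the behavior the report expected to observe per-step from dopri5.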
5,148 | 2,610,182,042 | IssuesEvent | 2015-02-26 18:58:02 | chrsmith/quchuseban | https://api.github.com/repos/chrsmith/quchuseban | opened | Decoding: how to remove pigmentation spots from the face | auto-migrated Priority-Medium Type-Defect | ```
《Abstract》
Many women assume that spots only appear around the cheeks and nowhere else, but chloasma is in fact very common: it grows not only around the cheeks but also around the mouth, nose, forehead and chin, while spots at the corners of the eyes are rare. So what can be done about spots at the eye corners? How to remove facial pigmentation is a question that troubles a great many women seeking spot removal.
《Customer Case》
I am 26 this year and my baby is over two years old. My pregnancy spots appeared while I was pregnant. The doctor said this kind of pregnancy mask is pigment deposition caused by excess pregnancy hormones, and that with good rest and extra nutrition the spots would clear up on their own. But after I gave birth, the pregnancy spots on my face only got heavier, and since I was breastfeeding I did not dare use any medication. By the time my baby was two, the spots had multiplied instead of fading. I could not stand it any longer and asked around everywhere for spot-removal methods. Later, while searching online for spot-removal products, I came across "黛芙薇尔 essence", said to be a pure-essence spot-removal product that many people praised. Curious, I went to their mall lady010, read their product introduction carefully, and could not help consulting their customer service. After I described my situation, the customer-service expert said my kind of spots could be removed completely if I paid attention to rest, fruit and vegetables, so I tried ordering two treatment cycles. They arrived within three days. Following the expert's advice I was careful about sleep, and after finishing the course the spots on my face were really gone and my skin was much fairer. The product really is quite good.
Having read how to remove facial spots, now consider why the face is prone to them:
《How Pigmentation Forms》
Internal factors
1. Stress. Under stress the body secretes adrenaline to prepare to cope. Under long-term stress the balance of the body's metabolism is disrupted, the supply of nutrients the skin needs slows down, and the pigment mother cells become very active.
2. Hormonal imbalance. The estrogen contained in contraceptive pills stimulates melanin cells to secrete unevenly and form spots; spots caused by the pill stop developing once the medication is discontinued, but they still linger on the skin for a long time. During pregnancy, as estrogen rises, spots appear easily from the fourth or fifth month; most of the spots that appear at this time disappear after delivery. However, abnormal metabolism, skin exposed to strong ultraviolet light, and mental stress can all deepen the spots, and sometimes newly formed spots do not disappear after delivery either, so extra care is needed.
3. Slow metabolism. Spots also appear when the liver's metabolic function is abnormal or ovarian function declines, because sluggish metabolism or endocrine disorder puts the body in a sensitive state, which aggravates pigmentation. The often-cited link between constipation and spots is really endocrine disorder producing an allergy-prone constitution. In addition, when the body is in poor condition, ultraviolet exposure also speeds up spot formation.
4. Using the wrong cosmetics. Cosmetics unsuited to one's own skin cause skin allergies. If the skin is then over-exposed to ultraviolet light during treatment, it gathers melanin at the inflamed sites to defend itself, which produces pigmentation.
External factors
1. Ultraviolet light. Under ultraviolet exposure the body, to protect the skin, produces large amounts of melanin in the basal layer, so pigment concentrates at sensitive sites. Frequent exposure to strong sunlight not only accelerates skin aging but also causes dark spots, freckles and other pigmentation disorders.
2. Poor cleansing habits. Over-aggressive cleansing makes the skin sensitive and irritates it. When the skin is sensitive, the body's melanocytes secrete large amounts of melanin to protect it, and when pigment is in excess, spots, blemishes and other pigmentation problems appear.
3. Genetics. If a parent has spots, the probability of developing them oneself is high, which to some degree indicates a genetic effect. So people whose family elders have spots should take care to avoid one of the main triggers, ultraviolet exposure, which is essential for prevention.
《Answering Your Questions》
1. Does 黛芙薇尔 essence really work? Can it really remove the chloasma on my face?
Answer: The DNA essence in 黛芙薇尔 can effectively repair hard-to-reach pigmentation; its unique natto ingredient supplies the nutrients essential for fair, radiant skin and can effectively remove chloasma, butterfly mask, sun spots, pregnancy spots and more. It breaks completely with traditional skin care, like injecting the skin with a cocktail that activates, regenerates and nourishes all at once, while supplying the face with plenty of organic vitamin essence; the change in the face is plain to see. Since the product launched, old customers have kept referring new ones; 71% of new customers come through referrals. That is where the word of mouth comes from!
2. Will taking 黛芙薇尔 for whitening harm the body? Are there side effects?
Answer: 黛芙薇尔 essence uses a pure compound formula and leading spot-classification technology, and applies the "DNA skin system" therapy, thoroughly removing chloasma, butterfly mask, pregnancy spots, sun spots and age spots and fading them to near skin tone. Developed through the collaboration of experts in France, the United States and Taiwan over more than 10 years of research, its new DNA skin-repair technology challenges traditional chemical skin-care thinking. Formulated specifically for Asian women's skin, it has over the years relieved millions of women of chloasma and earned their trust!
3. After the chloasma is removed, will it rebound?
Answer: Many people who once had chloasma solved the problem once and for all by choosing 黛芙薇尔. This spot-removal product was carefully developed by dozens of authoritative spot-removal experts according to the causes of spot formation; the facts speak and the consumers score it. An authoritative brand! Many of our new customers are referred by old customers; would customers refer others if the results were poor?
4. Your price is a bit high; can it be cheaper?
Answer: Western medicine would cost you at least 2,000 yuan, decocted herbal medicine at least 3,000 yuan, and surgery at least 5,000 yuan, and none of these, without question, will help remove your spots completely! You get what you pay for. What we are building is word of mouth and a brand, and the price is not high. If this money removes your chloasma completely, would you still think it expensive? Would you rather keep wasting money, with the spots not removed and your skin getting worse and worse?
5. Am I suited to 黛芙薇尔 essence?
Answer: 黛芙薇尔 is suitable for:
1. People with chloasma caused by physiological disorder
2. People with pregnancy spots from childbirth
3. People with age spots from advancing years
4. People with cosmetic pigment deposits or radiation spots
5. People with sun spots from long-term sun exposure
6. People with dull skin in urgent need of whitening
《A Small Spot-Removal Tip》
How to remove the spots on your face, with a small spot-removal method to share:
2. Mix 30 grams of almonds with a suitable amount of egg white, apply to the pigmented areas of the face before bed each night, and wash off with white liquor the next morning.
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 2:41 | 1.0 | Decoding: how to remove pigmentation spots from the face - ```
《Abstract》
Many women assume that spots only appear around the cheeks and nowhere else, but chloasma is in fact very common: it grows not only around the cheeks but also around the mouth, nose, forehead and chin, while spots at the corners of the eyes are rare. So what can be done about spots at the eye corners? How to remove facial pigmentation is a question that troubles a great many women seeking spot removal.
《Customer Case》
I am 26 this year and my baby is over two years old. My pregnancy spots appeared while I was pregnant. The doctor said this kind of pregnancy mask is pigment deposition caused by excess pregnancy hormones, and that with good rest and extra nutrition the spots would clear up on their own. But after I gave birth, the pregnancy spots on my face only got heavier, and since I was breastfeeding I did not dare use any medication. By the time my baby was two, the spots had multiplied instead of fading. I could not stand it any longer and asked around everywhere for spot-removal methods. Later, while searching online for spot-removal products, I came across "黛芙薇尔 essence", said to be a pure-essence spot-removal product that many people praised. Curious, I went to their mall lady010, read their product introduction carefully, and could not help consulting their customer service. After I described my situation, the customer-service expert said my kind of spots could be removed completely if I paid attention to rest, fruit and vegetables, so I tried ordering two treatment cycles. They arrived within three days. Following the expert's advice I was careful about sleep, and after finishing the course the spots on my face were really gone and my skin was much fairer. The product really is quite good.
Having read how to remove facial spots, now consider why the face is prone to them:
《How Pigmentation Forms》
Internal factors
1. Stress. Under stress the body secretes adrenaline to prepare to cope. Under long-term stress the balance of the body's metabolism is disrupted, the supply of nutrients the skin needs slows down, and the pigment mother cells become very active.
2. Hormonal imbalance. The estrogen contained in contraceptive pills stimulates melanin cells to secrete unevenly and form spots; spots caused by the pill stop developing once the medication is discontinued, but they still linger on the skin for a long time. During pregnancy, as estrogen rises, spots appear easily from the fourth or fifth month; most of the spots that appear at this time disappear after delivery. However, abnormal metabolism, skin exposed to strong ultraviolet light, and mental stress can all deepen the spots, and sometimes newly formed spots do not disappear after delivery either, so extra care is needed.
3. Slow metabolism. Spots also appear when the liver's metabolic function is abnormal or ovarian function declines, because sluggish metabolism or endocrine disorder puts the body in a sensitive state, which aggravates pigmentation. The often-cited link between constipation and spots is really endocrine disorder producing an allergy-prone constitution. In addition, when the body is in poor condition, ultraviolet exposure also speeds up spot formation.
4. Using the wrong cosmetics. Cosmetics unsuited to one's own skin cause skin allergies. If the skin is then over-exposed to ultraviolet light during treatment, it gathers melanin at the inflamed sites to defend itself, which produces pigmentation.
External factors
1. Ultraviolet light. Under ultraviolet exposure the body, to protect the skin, produces large amounts of melanin in the basal layer, so pigment concentrates at sensitive sites. Frequent exposure to strong sunlight not only accelerates skin aging but also causes dark spots, freckles and other pigmentation disorders.
2. Poor cleansing habits. Over-aggressive cleansing makes the skin sensitive and irritates it. When the skin is sensitive, the body's melanocytes secrete large amounts of melanin to protect it, and when pigment is in excess, spots, blemishes and other pigmentation problems appear.
3. Genetics. If a parent has spots, the probability of developing them oneself is high, which to some degree indicates a genetic effect. So people whose family elders have spots should take care to avoid one of the main triggers, ultraviolet exposure, which is essential for prevention.
《Answering Your Questions》
1. Does 黛芙薇尔 essence really work? Can it really remove the chloasma on my face?
Answer: The DNA essence in 黛芙薇尔 can effectively repair hard-to-reach pigmentation; its unique natto ingredient supplies the nutrients essential for fair, radiant skin and can effectively remove chloasma, butterfly mask, sun spots, pregnancy spots and more. It breaks completely with traditional skin care, like injecting the skin with a cocktail that activates, regenerates and nourishes all at once, while supplying the face with plenty of organic vitamin essence; the change in the face is plain to see. Since the product launched, old customers have kept referring new ones; 71% of new customers come through referrals. That is where the word of mouth comes from!
2. Will taking 黛芙薇尔 for whitening harm the body? Are there side effects?
Answer: 黛芙薇尔 essence uses a pure compound formula and leading spot-classification technology, and applies the "DNA skin system" therapy, thoroughly removing chloasma, butterfly mask, pregnancy spots, sun spots and age spots and fading them to near skin tone. Developed through the collaboration of experts in France, the United States and Taiwan over more than 10 years of research, its new DNA skin-repair technology challenges traditional chemical skin-care thinking. Formulated specifically for Asian women's skin, it has over the years relieved millions of women of chloasma and earned their trust!
3. After the chloasma is removed, will it rebound?
Answer: Many people who once had chloasma solved the problem once and for all by choosing 黛芙薇尔. This spot-removal product was carefully developed by dozens of authoritative spot-removal experts according to the causes of spot formation; the facts speak and the consumers score it. An authoritative brand! Many of our new customers are referred by old customers; would customers refer others if the results were poor?
4. Your price is a bit high; can it be cheaper?
Answer: Western medicine would cost you at least 2,000 yuan, decocted herbal medicine at least 3,000 yuan, and surgery at least 5,000 yuan, and none of these, without question, will help remove your spots completely! You get what you pay for. What we are building is word of mouth and a brand, and the price is not high. If this money removes your chloasma completely, would you still think it expensive? Would you rather keep wasting money, with the spots not removed and your skin getting worse and worse?
5. Am I suited to 黛芙薇尔 essence?
Answer: 黛芙薇尔 is suitable for:
1. People with chloasma caused by physiological disorder
2. People with pregnancy spots from childbirth
3. People with age spots from advancing years
4. People with cosmetic pigment deposits or radiation spots
5. People with sun spots from long-term sun exposure
6. People with dull skin in urgent need of whitening
《A Small Spot-Removal Tip》
How to remove the spots on your face, with a small spot-removal method to share:
2. Mix 30 grams of almonds with a suitable amount of egg white, apply to the pigmented areas of the face before bed each night, and wash off with white liquor the next morning.
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 2:41 | defect | 解码怎么去脸上的色斑 《摘要》 很多的女性都以为只有脸颊周围才会长斑,其他地方都不会�� �斑,其实黄褐斑的增长是很普遍的,他不仅会长在脸颊周围� ��并且口,鼻,额头,下巴等地方都会长斑,而眼角长斑却是 很少的,那么眼角长斑怎么办呢,怎样去掉脸上的色斑这个�� �困扰这众多的祛斑女性。怎么去脸上的色斑, 《客户案例》 ,宝宝也两岁多了,我的妊娠斑是在怀孕的 时候长,医生说是这种妊娠斑因为孕激素分泌太多引起的色�� �沉淀现象,说是只要休息好,多增加营养,斑就会自己消除� ��可是等我生完宝宝,脸上的妊娠斑却越来越重了,那个时候 要喂奶也没敢用什么药品,就这样宝宝都两岁了,我脸上的�� �娠斑不但没有减少反而越来越多了,我是实在看不下去了,� ��处打听祛斑的方法,后来我在网上查祛斑产品的时候看到「 黛芙薇尔精华液」,说是纯精华的祛斑产品,看很多人都说�� �个产品效果不错,忍不住好奇, �� �细看了他们的产品介绍,我忍不住咨询了他们的客服,给他� ��说了我的情况,客服专家说,我这种斑是可以彻底去掉的, 平时多注意休息和水果蔬菜,我就试着订购了两个周期,没�� �到三天就收到了,按照专家的建议,我平时很注意睡眠,坚� ��用完后,我脸上的斑点真的没有了,皮肤也白了很多,这个 产品确实挺不错的。 阅读了怎么去脸上的色斑,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� 
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 怎么去脸上的色斑,同时为您分享祛斑小方法 、 ,每晚睡前搽于面部有色�� �处,次日早晨用白酒洗掉。 original issue reported on code google com by additive gmail com on jul at | 1 |
165,395 | 6,275,843,715 | IssuesEvent | 2017-07-18 08:06:41 | Certaincy/Intranet | https://api.github.com/repos/Certaincy/Intranet | closed | Basic News View | Priority: High Status: Available Type: Enhancement | <!--
Please add a description and acceptance criteria
-->
### Description
A view which displays/renders a news item, fetched from the API.
### Acceptance Criteria
* [x] Fetch data from the API
* [x] Renders news item with news layout
| 1.0 | Basic News View - <!--
Please add a description and acceptance criteria
-->
### Description
A view which displays/renders a news item, fetched from the API.
### Acceptance Criteria
* [x] Fetch data from the API
* [x] Renders news item with news layout
| non_defect | basic news view please add a description and acceptance criterias description a view which displays renders a news item fetched from the api acceptance criterias fetch data from the api renders news item with news layout | 0 |
90,814 | 8,272,848,757 | IssuesEvent | 2018-09-17 00:58:05 | etcd-io/etcd | https://api.github.com/repos/etcd-io/etcd | closed | Improve snapshot backup/restore test coverage | Testing | Snapshot fetches the point-in-time state of etcd backend database. API itself is simple; stream key-value pairs by iterating each bucket. However, there have been many [unexpected behaviors](https://github.com/coreos/etcd/issues/8009) with frequent snapshots and restore.
Increase test coverage around snapshot API and its restore operations. Some of the missing test cases are:
- [X] restore multi-node cluster from same snapshot file
- https://github.com/coreos/etcd/pull/9118 https://github.com/coreos/etcd/pull/9198
- [ ] do frequent snapshots and make sure `db` file size remains static
- [ ] make embedded etcd return error on initial integrity check rather than `os.Exit`
- https://github.com/coreos/etcd/pull/8554 had to test this in e2e because of exit code
More...?
| 1.0 | Improve snapshot backup/restore test coverage - Snapshot fetches the point-in-time state of etcd backend database. API itself is simple; stream key-value pairs by iterating each bucket. However, there have been many [unexpected behaviors](https://github.com/coreos/etcd/issues/8009) with frequent snapshots and restore.
Increase test coverage around snapshot API and its restore operations. Some of the missing test cases are:
- [X] restore multi-node cluster from same snapshot file
- https://github.com/coreos/etcd/pull/9118 https://github.com/coreos/etcd/pull/9198
- [ ] do frequent snapshots and make sure `db` file size remains static
- [ ] make embedded etcd return error on initial integrity check rather than `os.Exit`
- https://github.com/coreos/etcd/pull/8554 had to test this in e2e because of exit code
More...?
| non_defect | improve snapshot backup restore test coverage snapshot fetches the point in time state of etcd backend database api itself is simple stream key value pairs by iterating each bucket however there have been many with frequent snapshots and restore increase test coverage around snapshot api and its restore operations some of the missing test cases are restore multi node cluster from same snapshot file do frequent snapshots and make sure db file size remains static make embedded etcd return error on initial integrity check rather than os exit had to test this in because of exit code more | 0 |
70,918 | 30,750,272,226 | IssuesEvent | 2023-07-28 18:33:00 | pulumi/pulumi-google-native | https://api.github.com/repos/pulumi/pulumi-google-native | closed | Error message displays property names in wrong casing | kind/bug resolution/wont-fix upstream/service | Example: try provisioning a GKE cluster (`container/v1:Cluster`) without specifying a value for `initialNodeCount`. The service returns the following error:
> Error 400: Cluster.initial_node_count must be greater than zero.: "https://container.googleapis.com/v1/projects/pulumi-development/locations/us-central1/clusters"
Note that the message says `initial_node_count` which is incorrect capitalization of the `initialNodeCount` as (correctly) defined by discovery documents. | 1.0 | Error message displays property names in wrong casing - Example: try provisioning a GKE cluster (`container/v1:Cluster`) without specifying a value for `initialNodeCount`. The service returns the following error:
> Error 400: Cluster.initial_node_count must be greater than zero.: "https://container.googleapis.com/v1/projects/pulumi-development/locations/us-central1/clusters"
Note that the message says `initial_node_count` which is incorrect capitalization of the `initialNodeCount` as (correctly) defined by discovery documents. | non_defect | error message displays property names in wrong casing example try provisioning a gke cluster container cluster without specifying a value for initialnodecount the service returns the following error error cluster initial node count must be greater than zero note that the message says initial node count which is incorrect capitalization of the initialnodecount as correctly defined by discovery documents | 0 |
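The record above boils down to a snake_case-vs-camelCase mismatch: the service's error message names `initial_node_count`, while the discovery documents define the property as `initialNodeCount`. A generic converter (an illustration only, not Pulumi or Google code) shows the mapping the error message fails to apply:

```python
def snake_to_camel(name: str) -> str:
    """Convert a snake_case property name to the camelCase form used by
    discovery documents. Illustrative helper, not actual Pulumi code."""
    first, *rest = name.split("_")
    return first + "".join(word.capitalize() for word in rest)

# The property named in the error message maps to the documented casing:
print(snake_to_camel("initial_node_count"))  # initialNodeCount
```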
65,646 | 19,621,223,708 | IssuesEvent | 2022-01-07 06:55:43 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | opened | Jakarta EE9.1: Support primefaces | defect | Greetings, I am working with Jakarta EE 9.1 but when implementing the library I get that I am missing components such as jakarta.faces
| 1.0 | Jakarta EE9.1: Support primefaces - Greetings, I am working with Jakarta EE 9.1 but when implementing the library I get that I am missing components such as jakarta.faces
| defect | jakarta support primefaces greetings i am working with jakarta ee but when implementing the library i get that i am missing components such as jakarta faces | 1 |
90,571 | 3,823,379,326 | IssuesEvent | 2016-03-30 07:49:48 | readium/readium-shared-js | https://api.github.com/repos/readium/readium-shared-js | closed | Error in mlong division MathML test | bug MathJax priority medium | The MathML test in EPUB TestSuite 100 (MathML-027) featuring long division isn't quite right. The horizontal lines are missing. Also, the vertical spacing seems over-large.

| 1.0 | Error in mlong division MathML test - The MathML test in EPUB TestSuite 100 (MathML-027) featuring long division isn't quite right. The horizontal lines are missing. Also, the vertical spacing seems over-large.

| non_defect | error in mlong division mathml test the mathml test in epub testsuite mathml featuring long division isn t quite right the horizontal lines are missing also the vertical spacing seems over large | 0 |
47,427 | 13,056,180,784 | IssuesEvent | 2020-07-30 03:54:29 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | PYTHON_LOGGING enabled causes builds to fail. (Trac #539) | Migrated from Trac cmake defect | When PYTHON_LOGGING in CMakeCache.txt is set to True, dataio fails to compile:
/disk02/home/blaufuss/icework/offline-software/trunk/src/dataio/public/dataio/I3File.h:48:
error: ISO C++ forbids declaration of ‘map’ with no type
/disk02/home/blaufuss/icework/offline-software/trunk/src/dataio/public/dataio/I3File.h:48:
error: typedef name may not be a nested-name-specifier
Seems some include files are getting munged.
Migrated from https://code.icecube.wisc.edu/ticket/539
```json
{
"status": "closed",
"changetime": "2009-03-01T00:40:18",
"description": "When PYTHON_LOGGING in CMakeCache.txt is set to True, dataio fails to compile:\n\n/disk02/home/blaufuss/icework/offline-software/trunk/src/dataio/public/dataio/I3File.h:48:\n error: ISO C++ forbids declaration of \u2018map\u2019 with no type\n/disk02/home/blaufuss/icework/offline-software/trunk/src/dataio/public/dataio/I3File.h:48:\n error: typedef name may not be a nested-name-specifier\n\nSeems some include files are getting munged.\n\n",
"reporter": "anonymous",
"cc": "",
"resolution": "fixed",
"_ts": "1235868018000000",
"component": "cmake",
"summary": "PYTHON_LOGGING enabled causes builds to fail.",
"priority": "normal",
"keywords": "",
"time": "2009-02-28T19:41:04",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
| 1.0 | PYTHON_LOGGING enabled causes builds to fail. (Trac #539) - When PYTHON_LOGGING in CMakeCache.txt is set to True, dataio fails to compile:
/disk02/home/blaufuss/icework/offline-software/trunk/src/dataio/public/dataio/I3File.h:48:
error: ISO C++ forbids declaration of ‘map’ with no type
/disk02/home/blaufuss/icework/offline-software/trunk/src/dataio/public/dataio/I3File.h:48:
error: typedef name may not be a nested-name-specifier
Seems some include files are getting munged.
Migrated from https://code.icecube.wisc.edu/ticket/539
```json
{
"status": "closed",
"changetime": "2009-03-01T00:40:18",
"description": "When PYTHON_LOGGING in CMakeCache.txt is set to True, dataio fails to compile:\n\n/disk02/home/blaufuss/icework/offline-software/trunk/src/dataio/public/dataio/I3File.h:48:\n error: ISO C++ forbids declaration of \u2018map\u2019 with no type\n/disk02/home/blaufuss/icework/offline-software/trunk/src/dataio/public/dataio/I3File.h:48:\n error: typedef name may not be a nested-name-specifier\n\nSeems some include files are getting munged.\n\n",
"reporter": "anonymous",
"cc": "",
"resolution": "fixed",
"_ts": "1235868018000000",
"component": "cmake",
"summary": "PYTHON_LOGGING enabled causes builds to fail.",
"priority": "normal",
"keywords": "",
"time": "2009-02-28T19:41:04",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
| defect | python logging eanabled causes builds to fail trac when python logging in cmakecache txt is set to true dataio fails to compile home blaufuss icework offline software trunk src dataio public dataio h error iso c forbids declaration of ‘map’ with no type home blaufuss icework offline software trunk src dataio public dataio h error typedef name may not be a nested name specifier seems some include files are getting munged migrated from json status closed changetime description when python logging in cmakecache txt is set to true dataio fails to compile n n home blaufuss icework offline software trunk src dataio public dataio h n error iso c forbids declaration of with no type n home blaufuss icework offline software trunk src dataio public dataio h n error typedef name may not be a nested name specifier n nseems some include files are getting munged n n reporter anonymous cc resolution fixed ts component cmake summary python logging eanabled causes builds to fail priority normal keywords time milestone owner troy type defect | 1 |
17,256 | 2,993,217,628 | IssuesEvent | 2015-07-22 01:08:20 | googlei18n/noto-fonts | https://api.github.com/repos/googlei18n/noto-fonts | opened | Svarita preceding visarga not rendered correctly | Script-Devanagari Type-Defect | Moved from googlei18n/noto-alpha/issues/284
Imported from Google Code issue #284 created by vvasuki@google.com on 2015-06-02T18:14:16.000Z:
Some lines in the Rigveda text http://www.detlef108.de/RV-D-UTF8.html are not rendered correctly. In particular, consider: [स न॑ः पि॒तेव॑]
This is what it looks like with Noto Sans: http://i.imgur.com/jocRriY.png
This is what it looks like with Chandas : http://i.imgur.com/PqyVQ1i.png
-------------------------------------------------------------------
Comment #1 originally posted by cibu@google.com on 2015-06-03T16:06:51.000Z:
I don't think Noto Devanagari is really designed for vedic. However, there are some low-hanging fruits like this that could be corrected. | 1.0 | Svarita preceding visarga not rendered correctly - Moved from googlei18n/noto-alpha/issues/284
Imported from Google Code issue #284 created by vvasuki@google.com on 2015-06-02T18:14:16.000Z:
Some lines in the Rigveda text http://www.detlef108.de/RV-D-UTF8.html are not rendered correctly. In particular, consider: [स न॑ः पि॒तेव॑]
This is what it looks like with Noto Sans: http://i.imgur.com/jocRriY.png
This is what it looks like with Chandas : http://i.imgur.com/PqyVQ1i.png
-------------------------------------------------------------------
Comment #1 originally posted by cibu@google.com on 2015-06-03T16:06:51.000Z:
I don't think Noto Devanagari is really designed for vedic. However, there are some low hanging fruits like this could be corrected. | defect | svarita preceeding visarga not rendered correctly moved from noto alpha issues imported from google code issue created by vvasuki google com on some lines in the rigveda text are not rendered correctly in particular consider this is what it looks like with noto sans this is what it looks like with chandas comment originally posted by cibu google com on i don t think noto devanagari is really designed for vedic however there are some low hanging fruits like this could be corrected | 1 |
11,746 | 3,519,721,648 | IssuesEvent | 2016-01-12 17:54:08 | AppendixRSolutions/wendi | https://api.github.com/repos/AppendixRSolutions/wendi | opened | Complete ASME/ANS Matrix Setup | Documentation | Complete transferring the tables from the ASME/ANS RA-Sb 2013 Fire PRA supporting requirements into the wiki. | 1.0 | Complete ASME/ANS Matrix Setup - Complete transferring the tables from the ASME/ANS RA-Sb 2013 Fire PRA supporting requirements into the wiki. | non_defect | complete asme ans matrix setup complete transferring the tables from the asme ans ra sb fire pra supporting requirements into the wiki | 0 |
136,583 | 18,750,321,313 | IssuesEvent | 2021-11-05 00:26:17 | AlexRogalskiy/github-action-user-contribution | https://api.github.com/repos/AlexRogalskiy/github-action-user-contribution | closed | CVE-2021-32804 (High) detected in tar-2.2.2.tgz, tar-6.1.0.tgz - autoclosed | security vulnerability | ## CVE-2021-32804 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tar-2.2.2.tgz</b>, <b>tar-6.1.0.tgz</b></p></summary>
<p>
<details><summary><b>tar-2.2.2.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.2.tgz">https://registry.npmjs.org/tar/-/tar-2.2.2.tgz</a></p>
<p>Path to dependency file: github-action-user-contribution/package.json</p>
<p>Path to vulnerable library: github-action-user-contribution/node_modules/dtslint/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- dtslint-4.1.6.tgz (Root Library)
- utils-0.0.91.tgz
- :x: **tar-2.2.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-6.1.0.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p>
<p>Path to dependency file: github-action-user-contribution/package.json</p>
<p>Path to vulnerable library: github-action-user-contribution/node_modules/tar/package.json,github-action-user-contribution/node_modules/npm/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- editorconfig-checker-4.0.2.tgz (Root Library)
- :x: **tar-6.1.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-user-contribution/commit/ff36aa255408dae983ba3ce55f6a432a63ee8665">ff36aa255408dae983ba3ce55f6a432a63ee8665</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 6.1.1, 5.0.6, 4.4.14, and 3.3.2 has a arbitrary File Creation/Overwrite vulnerability due to insufficient absolute path sanitization. node-tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the `preservePaths` flag is not set to `true`. This is achieved by stripping the absolute path root from any absolute file paths contained in a tar file. For example `/home/user/.bashrc` would turn into `home/user/.bashrc`. This logic was insufficient when file paths contained repeated path roots such as `////home/user/.bashrc`. `node-tar` would only strip a single path root from such paths. When given an absolute file path with repeating path roots, the resulting path (e.g. `///home/user/.bashrc`) would still resolve to an absolute path, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.2, 4.4.14, 5.0.6 and 6.1.1. Users may work around this vulnerability without upgrading by creating a custom `onentry` method which sanitizes the `entry.path` or a `filter` method which removes entries with absolute paths. See referenced GitHub Advisory for details. Be aware of CVE-2021-32803 which fixes a similar bug in later versions of tar.
<p>Publish Date: 2021-08-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32804>CVE-2021-32804</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9">https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9</a></p>
<p>Release Date: 2021-08-03</p>
<p>Fix Resolution: tar - 3.2.2, 4.4.14, 5.0.6, 6.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-32804 (High) detected in tar-2.2.2.tgz, tar-6.1.0.tgz - autoclosed - ## CVE-2021-32804 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tar-2.2.2.tgz</b>, <b>tar-6.1.0.tgz</b></p></summary>
<p>
<details><summary><b>tar-2.2.2.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.2.tgz">https://registry.npmjs.org/tar/-/tar-2.2.2.tgz</a></p>
<p>Path to dependency file: github-action-user-contribution/package.json</p>
<p>Path to vulnerable library: github-action-user-contribution/node_modules/dtslint/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- dtslint-4.1.6.tgz (Root Library)
- utils-0.0.91.tgz
- :x: **tar-2.2.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-6.1.0.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p>
<p>Path to dependency file: github-action-user-contribution/package.json</p>
<p>Path to vulnerable library: github-action-user-contribution/node_modules/tar/package.json,github-action-user-contribution/node_modules/npm/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- editorconfig-checker-4.0.2.tgz (Root Library)
- :x: **tar-6.1.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-user-contribution/commit/ff36aa255408dae983ba3ce55f6a432a63ee8665">ff36aa255408dae983ba3ce55f6a432a63ee8665</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 6.1.1, 5.0.6, 4.4.14, and 3.3.2 has a arbitrary File Creation/Overwrite vulnerability due to insufficient absolute path sanitization. node-tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the `preservePaths` flag is not set to `true`. This is achieved by stripping the absolute path root from any absolute file paths contained in a tar file. For example `/home/user/.bashrc` would turn into `home/user/.bashrc`. This logic was insufficient when file paths contained repeated path roots such as `////home/user/.bashrc`. `node-tar` would only strip a single path root from such paths. When given an absolute file path with repeating path roots, the resulting path (e.g. `///home/user/.bashrc`) would still resolve to an absolute path, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.2, 4.4.14, 5.0.6 and 6.1.1. Users may work around this vulnerability without upgrading by creating a custom `onentry` method which sanitizes the `entry.path` or a `filter` method which removes entries with absolute paths. See referenced GitHub Advisory for details. Be aware of CVE-2021-32803 which fixes a similar bug in later versions of tar.
<p>Publish Date: 2021-08-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32804>CVE-2021-32804</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9">https://github.com/npm/node-tar/security/advisories/GHSA-3jfq-g458-7qm9</a></p>
<p>Release Date: 2021-08-03</p>
<p>Fix Resolution: tar - 3.2.2, 4.4.14, 5.0.6, 6.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in tar tgz tar tgz autoclosed cve high severity vulnerability vulnerable libraries tar tgz tar tgz tar tgz tar for node library home page a href path to dependency file github action user contribution package json path to vulnerable library github action user contribution node modules dtslint node modules tar package json dependency hierarchy dtslint tgz root library utils tgz x tar tgz vulnerable library tar tgz tar for node library home page a href path to dependency file github action user contribution package json path to vulnerable library github action user contribution node modules tar package json github action user contribution node modules npm node modules tar package json dependency hierarchy editorconfig checker tgz root library x tar tgz vulnerable library found in head commit a href vulnerability details the npm package tar aka node tar before versions and has a arbitrary file creation overwrite vulnerability due to insufficient absolute path sanitization node tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the preservepaths flag is not set to true this is achieved by stripping the absolute path root from any absolute file paths contained in a tar file for example home user bashrc would turn into home user bashrc this logic was insufficient when file paths contained repeated path roots such as home user bashrc node tar would only strip a single path root from such paths when given an absolute file path with repeating path roots the resulting path e g home user bashrc would still resolve to an absolute path thus allowing arbitrary file creation and overwrite this issue was addressed in releases and users may work around this vulnerability without upgrading by creating a custom onentry method which sanitizes the entry path or a filter method 
which removes entries with absolute paths see referenced github advisory for details be aware of cve which fixes a similar bug in later versions of tar publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar step up your open source security game with whitesource | 0 |
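The sanitization flaw quoted in the record above is easy to reproduce in a few lines. The sketch below is illustrative Python, not node-tar's actual code; it only mirrors the described behavior of stripping a single path root versus stripping all of them:

```python
import posixpath

def strip_one_root(path: str) -> str:
    # Mirrors the flawed behavior described in the CVE text: only a single
    # leading path root is removed.
    return path[1:] if path.startswith("/") else path

def strip_all_roots(path: str) -> str:
    # The fixed behavior: remove every leading root so the result is relative.
    return path.lstrip("/")

print(strip_one_root("/home/user/.bashrc"))                       # home/user/.bashrc
print(posixpath.isabs(strip_one_root("////home/user/.bashrc")))   # True -> still absolute
print(posixpath.isabs(strip_all_roots("////home/user/.bashrc")))  # False
```

As the quoted advisory notes, users who cannot upgrade can apply an equivalent check themselves in a custom `onentry` or `filter` callback before extraction.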
246,834 | 7,895,683,407 | IssuesEvent | 2018-06-29 05:03:21 | kubeflow/website | https://api.github.com/repos/kubeflow/website | closed | Broken link to user guide | area/api priority/p1 sprint/2018-06-25-to-07-06 | Link to user guide at the end of the getting started guide to troubleshooting section is broken
https://www.kubeflow.org/docs/started/getting-started/ | 1.0 | Broken link to user guide - Link to user guide at the end of the getting started guide to troubleshooting section is broken
https://www.kubeflow.org/docs/started/getting-started/ | non_defect | broken link to user guide link to user guide at the end of the getting started guide to troubleshooting section is broken | 0 |
43,041 | 11,452,034,626 | IssuesEvent | 2020-02-06 12:54:44 | pymc-devs/pymc3 | https://api.github.com/repos/pymc-devs/pymc3 | closed | Metropolis chain tuning differs between single- and multiprocessing | defects metropolis request review/discussion | (split from #3731)
### Observations
When `Metropolis` is used with `cores > 1 and chains > 1`, all chains are independently tuned.
With `cores=1` however, `Metropolis` initializes the 2nd chain with the `scaling` from the first. It is still tuned, but in the end there is a difference between sequential single-process and parallelized multiprocess sampling.
### Cause
For `Metropolis`, the stepper is re-used and re-tuned, but no reset happens.
https://github.com/pymc-devs/pymc3/blob/dc9fd7251b34e9851308e91d622513ebe648f49e/pymc3/sampling.py#L711-L716
### Possible Solutions
+ re-setting the tuning parameter before continuing with the next chain
+ ...other ideas? | 1.0 | Metropolis chain tuning differs between single- and multiprocessing - (split from #3731)
### Observations
When `Metropolis` is used with `cores > 1 and chains > 1`, all chains are independently tuned.
With `cores=1` however, `Metropolis` initializes the 2nd chain with the `scaling` from the first. It is still tuned, but in the end there is a difference between sequential single-process and parallelized multiprocess sampling.
### Cause
For `Metropolis`, the stepper is re-used and re-tuned, but no reset happens.
https://github.com/pymc-devs/pymc3/blob/dc9fd7251b34e9851308e91d622513ebe648f49e/pymc3/sampling.py#L711-L716
### Possible Solutions
+ re-setting the tuning parameter before continuing with the next chain
+ ...other ideas? | defect | metropolis chain tuning is differs between single and multiprocessing split from observations when metropolis is used with cores and chains all chains are independently tuned with cores however metropolis initializes the chain with the scaling from the first it is still tuned but in the end it s a different between sequential single process and parallelized multiprocess sampling cause for metropolis the stepper is re used and re tuned but no reset happens possible solutions re setting the tuning parameter before continuing with the next chain other ideas | 1 |
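The first proposed solution in the record above — resetting the tuned parameter before continuing with the next chain — can be sketched with a toy stand-in. The class and method names here are hypothetical, not PyMC3's actual API:

```python
class MetropolisSketch:
    """Toy stand-in for a Metropolis step method with a tuned `scaling`."""

    def __init__(self, scaling: float = 1.0):
        self._initial_scaling = scaling
        self.scaling = scaling

    def tune(self, accept_rate: float) -> None:
        # Crude tuning rule: widen proposals when acceptance is high,
        # shrink them otherwise.
        self.scaling *= 1.1 if accept_rate > 0.5 else 0.9

    def reset_tuning(self) -> None:
        # The proposed fix: restore the initial state so that, with cores=1,
        # chain 2 does not inherit chain 1's tuned scaling.
        self.scaling = self._initial_scaling

stepper = MetropolisSketch()
stepper.tune(0.9)            # chain 1 adapts the proposal scale
assert stepper.scaling != 1.0
stepper.reset_tuning()       # called before sampling the next chain
assert stepper.scaling == 1.0
```

With such a reset in place, sequential sampling with `cores=1` would start each chain from the same initial state, matching the independent tuning observed with `cores > 1`.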
39,373 | 9,418,846,601 | IssuesEvent | 2019-04-10 20:18:26 | bridgedotnet/Bridge | https://api.github.com/repos/bridgedotnet/Bridge | opened | UnknownIdentifierResolveResult when referencing a static defined in another project. | defect | Project A has a static class B that defines a constant string C
```csharp
namespace A
{
public static class B
{
public const string C = "C";
}
}
```
Project D has a reference to project A and defines a class E that accesses the string defined in A.B.C
```csharp
namespace D
{
public class E
{
public string F = A.B.C;
}
}
```
This solution builds but the output javascript has an UnknownIdentifierResolveResult in it:
D.js
```js
/**
* @version 1.0.0.0
* @copyright Copyright © 2019
* @compiler Bridge.NET 17.7.0
*/
Bridge.assembly("D", function ($asm, globals) {
"use strict";
Bridge.define("D.E", {
fields: {
F: null
},
ctors: {
init: function () {
this.F = [UnknownIdentifierResolveResult A].b.c;
}
}
});
});
```
### See Also
https://forums.bridge.net/forum/bridge-net-pro/bugs/6125-unknownidentifierresolveresult-when-referencing-a-static-defined-in-another-project | 1.0 | UnknownIdentifierResolveResult when referencing a static defined in another project. - Project A has a static class B that defines a constant string C
```csharp
namespace A
{
public static class B
{
public const string C = "C";
}
}
```
Project D has a reference to project A and defines a class E that accesses the string defined in A.B.C
```csharp
namespace D
{
public class E
{
public string F = A.B.C;
}
}
```
This solution builds but the output javascript has an UnknownIdentifierResolveResult in it:
D.js
```js
/**
* @version 1.0.0.0
* @copyright Copyright © 2019
* @compiler Bridge.NET 17.7.0
*/
Bridge.assembly("D", function ($asm, globals) {
"use strict";
Bridge.define("D.E", {
fields: {
F: null
},
ctors: {
init: function () {
this.F = [UnknownIdentifierResolveResult A].b.c;
}
}
});
});
```
### See Also
https://forums.bridge.net/forum/bridge-net-pro/bugs/6125-unknownidentifierresolveresult-when-referencing-a-static-defined-in-another-project | defect | unknownidentifierresolveresult when referencing a static defined in another project project a has a static class b that defines a constant string c csharp namespace a public static class b public const string c c project d has a reference to project a and defines a class e that accesses the string defined in a b c csharp namespace d public class e public string f a b c this solution builds but the output javascript has an unknownidentifierresolveresult in it d js js version copyright copyright © compiler bridge net bridge assembly d function asm globals use strict bridge define d e fields f null ctors init function this f b c see also | 1 |
85,060 | 7,960,691,830 | IssuesEvent | 2018-07-13 08:13:15 | researchstudio-sat/webofneeds | https://api.github.com/repos/researchstudio-sat/webofneeds | closed | connectionMessageReceived (messages-actions.js) Error | testing | This method is called whenever we receive a message over the websocket, we will check the messageEffects of that message as well, which is ok.
however within our agreement methods (in post-messages.js) and (combined-message-content.js) we will dispatch the same event for ownMessages and remoteMessages and thus resulting in an http 500 error when we try to get the messageEffects of a message that belongs to you already | 1.0 | connectionMessageReceived (messages-actions.js) Error - This method is called whenever we receive a message over the websocket, we will check the messageEffects of that message as well, which is ok.
however within our agreement methods (in post-messages.js) and (combined-message-content.js) we will dispatch the same event for ownMessages and remoteMessages and thus resulting in an http 500 error when we try to get the messageEffects of a message that belongs to you already | non_defect | connectionmessagereceived messages actions js error this method is called whenever we receive a message over the websocket we will check the messageeffects of that message as well which is ok however within our agreement methods in post messages js and combined message content js we will dispatch the same event for ownmessages and remotemessages and thus resulting in an http error when we try to get the messageeffects of a message that belongs to you already | 0 |
67,612 | 12,978,320,044 | IssuesEvent | 2020-07-21 22:34:38 | microsoft/vscode-cpptools | https://api.github.com/repos/microsoft/vscode-cpptools | closed | Inline assembly-highlighting and strange highlighting of '__asm__' keyword | Feature: Colorization Visual Studio Code | When you use inline assembly like this:
`__asm__
(
"movl $1, %eax"
"movl $2, %ecx"
)`
The assembly-code is the same colour as strings.
I understand that there is no standard assembly highlighting but it could be useful if you could select one(Extension) in the settings.
Also: The `__asm__` is only highlighted in blue if the '(' is in the same line. Else it's white.
This is how it looks with '(' in same line:
<img width="216" alt="Снимок экрана 2020-07-19 в 16 43 10" src="https://user-images.githubusercontent.com/51021475/87877518-1aa2fa80-c9df-11ea-8eae-758bf6888d5f.png">
This is how it looks with '(' in next line:
<img width="227" alt="Снимок экрана 2020-07-19 в 16 43 29" src="https://user-images.githubusercontent.com/51021475/87877515-14ad1980-c9df-11ea-94a6-8c097b5b308e.png">
| 1.0 | Inline assembly-highlighting and strange highlighting of '__asm__' keyword - When you use inline assembly like this:
`__asm__
(
"movl $1, %eax"
"movl $2, %ecx"
)`
The assembly-code is the same colour as strings.
I understand that there is no standard assembly highlighting but it could be useful if you could select one(Extension) in the settings.
Also: The `__asm__` is only highlighted in blue if the '(' is in the same line. Else it's white.
This is how it looks with '(' in same line:
<img width="216" alt="Снимок экрана 2020-07-19 в 16 43 10" src="https://user-images.githubusercontent.com/51021475/87877518-1aa2fa80-c9df-11ea-8eae-758bf6888d5f.png">
This is how it looks with '(' in next line:
<img width="227" alt="Снимок экрана 2020-07-19 в 16 43 29" src="https://user-images.githubusercontent.com/51021475/87877515-14ad1980-c9df-11ea-94a6-8c097b5b308e.png">
| non_defect | inline assembly highlighting and strange highlighting of asm keyword when you use inline assembly like this asm movl eax movl ecx the assembly code is the same colour as strings i understand that there is no standard assembly highlighting but it could be useful if you could select one extension in the settings also the asm is only highlighted in blue if the is in the same line else it s white this is how it looks with in same line img width alt снимок экрана в src this is how it looks with in next line img width alt снимок экрана в src | 0 |
77,205 | 26,840,147,040 | IssuesEvent | 2023-02-02 23:22:15 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | closed | zpool export on a mounted pool leads to segmentation fault | Type: Defect | <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Arch Linux
Distribution Version |
Kernel Version | 6.0.12-arch1-1
Architecture | x86_64
OpenZFS Version | zfs-2.1.99-1635_gfb11b1570a, zfs-kmod-2.1.99-1
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
Segmentation fault when exporting a mounted pool:

### Describe how to reproduce the problem
Create a mirrored pool with a single dataset, mount it.
Run `zpool export $poolname`.
If I unmount the dataset beforehand, the export works as expected.
### Include any warning/errors/backtraces from the system logs
I only took a screenshot sadly.

Coredump and related unstripped binaries and libraries:
[zfs-repro.zip](https://github.com/openzfs/zfs/files/10568368/zfs-repro.zip)
| 1.0 | zpool export on a mounted pool leads to segmentation fault - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Arch Linux
Distribution Version |
Kernel Version | 6.0.12-arch1-1
Architecture | x86_64
OpenZFS Version | zfs-2.1.99-1635_gfb11b1570a, zfs-kmod-2.1.99-1
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
Segmentation fault when exporting a mounted pool:

### Describe how to reproduce the problem
Create a mirrored pool with a single dataset, mount it.
Run `zpool export $poolname`.
If I unmount the dataset beforehand, the export works as expected.
### Include any warning/errors/backtraces from the system logs
I only took a screenshot sadly.

Coredump and related unstripped binaries and libraries:
[zfs-repro.zip](https://github.com/openzfs/zfs/files/10568368/zfs-repro.zip)
| defect | zpool export on a mounted pool leads to segmentation fault thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name arch linux distribution version kernel version architecture openzfs version zfs zfs kmod command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing segmentation fault when exporting a mounted pool describe how to reproduce the problem create a mirrored pool with a single dataset mount it run zfs export poolname if i unmount the dataset beforehand the export works as expected include any warning errors backtraces from the system logs i only took a screenshot sadly coredump and related unstripped binaries and libraries | 1 |
136,938 | 18,751,513,790 | IssuesEvent | 2021-11-05 03:01:00 | Dima2022/Resiliency-Studio | https://api.github.com/repos/Dima2022/Resiliency-Studio | closed | CVE-2020-9546 (High) detected in jackson-databind-2.8.6.jar - autoclosed | security vulnerability | ## CVE-2020-9546 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Resiliency-Studio/resiliency-studio-agent/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar</p>
<p>
Dependency Hierarchy:
- sdk-java-rest-6.2.0.4-oss.jar (Root Library)
- :x: **jackson-databind-2.8.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/Resiliency-Studio/commit/9809d9b7bfdc114eafb0a14d86667f3a76a014e8">9809d9b7bfdc114eafb0a14d86667f3a76a014e8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.hadoop.shaded.com.zaxxer.hikari.HikariConfig (aka shaded hikari-config).
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9546>CVE-2020-9546</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
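The 9.8 above is not arbitrary; it follows from the listed metrics through the published CVSS 3.0 base-score equations (scope unchanged). A worked check of that arithmetic:

```python
import math

# CVSS 3.0 weights for the vector listed above:
# AV:N=0.85, AC:L=0.77, PR:N=0.85, UI:N=0.85, C/I/A High=0.56
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85
c = i = a = 0.56

isc_base = 1 - (1 - c) * (1 - i) * (1 - a)
impact = 6.42 * isc_base                     # scope unchanged
exploitability = 8.22 * av * ac * pr * ui

def roundup(x: float) -> float:
    # CVSS "round up to one decimal place"
    return math.ceil(x * 10) / 10

base_score = roundup(min(impact + exploitability, 10))
print(base_score)  # → 9.8
```

Impact plus exploitability comes to about 9.76 before rounding, which CVSS rounds up to 9.8, matching the score reported above.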
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.3</p>
</p>
</details>
<p></p>
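The suggested fix raises jackson-databind from 2.8.6 to the minimum fixed version 2.10.3. When automating such a check, note that dotted versions must be compared numerically; plain string comparison wrongly ranks 2.8.6 above 2.10.3. A minimal sketch, assuming simple numeric dotted versions with no qualifiers:

```python
def version_tuple(v: str) -> tuple:
    # Split a dotted version like "2.10.3" into comparable integers.
    return tuple(int(part) for part in v.split("."))

installed, fixed = "2.8.6", "2.10.3"

# Lexicographic string comparison gets this wrong ('8' > '1'):
print(installed < fixed)                                # → False
# Numeric tuple comparison is correct:
print(version_tuple(installed) < version_tuple(fixed))  # → True
```

Real Maven versions can carry qualifiers (e.g. `-oss` as in the root library above), which this sketch deliberately ignores.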
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.6","packageFilePaths":["/resiliency-studio-agent/pom.xml","/resiliency-studio-security/pom.xml","/resiliency-studio-service/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.att.ajsc:sdk-java-rest:6.2.0.4-oss;com.fasterxml.jackson.core:jackson-databind:2.8.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.10.3"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2020-9546","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.hadoop.shaded.com.zaxxer.hikari.HikariConfig (aka shaded hikari-config).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9546","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-9546 (High) detected in jackson-databind-2.8.6.jar - autoclosed - ## CVE-2020-9546 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Resiliency-Studio/resiliency-studio-agent/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar</p>
<p>
Dependency Hierarchy:
- sdk-java-rest-6.2.0.4-oss.jar (Root Library)
- :x: **jackson-databind-2.8.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/Resiliency-Studio/commit/9809d9b7bfdc114eafb0a14d86667f3a76a014e8">9809d9b7bfdc114eafb0a14d86667f3a76a014e8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.hadoop.shaded.com.zaxxer.hikari.HikariConfig (aka shaded hikari-config).
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9546>CVE-2020-9546</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.6","packageFilePaths":["/resiliency-studio-agent/pom.xml","/resiliency-studio-security/pom.xml","/resiliency-studio-service/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.att.ajsc:sdk-java-rest:6.2.0.4-oss;com.fasterxml.jackson.core:jackson-databind:2.8.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.10.3"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2020-9546","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.hadoop.shaded.com.zaxxer.hikari.HikariConfig (aka shaded hikari-config).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9546","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_defect | cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file resiliency studio resiliency studio agent pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy sdk java rest oss jar root library x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the 
interaction between serialization gadgets and typing related to org apache hadoop shaded com zaxxer hikari hikariconfig aka shaded hikari config publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com att ajsc sdk java rest oss com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache hadoop shaded com zaxxer hikari hikariconfig aka shaded hikari config vulnerabilityurl | 0 |
53,553 | 13,261,906,304 | IssuesEvent | 2020-08-20 20:45:03 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | wavedeform tests depend on the order in which they are run (Trac #1673) | Migrated from Trac combo reconstruction defect | This is okay on our bots because they run all tests sequentially but will make them fail if you run the tests in parallel. (ctest -j10 or similar)
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1673">https://code.icecube.wisc.edu/projects/icecube/ticket/1673</a>, reported by claudio.kopper and owned by jbraun</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"_ts": "1550067158057333",
"description": "This is okay on our bots because they run all tests sequentially but will make them fail if you run the tests in parallel. (ctest -j10 or similar)\n",
"reporter": "claudio.kopper",
"cc": "",
"resolution": "fixed",
"time": "2016-04-28T23:20:04",
"component": "combo reconstruction",
"summary": "wavedeform tests depend on the order in which they are run",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "jbraun",
"type": "defect"
}
```
</p>
</details>
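The failure mode described here, tests that pass when run sequentially but fail once `ctest -j10` runs them concurrently or in a different order, is the classic symptom of tests coupled through shared state. A deterministic toy illustration (test names invented, not taken from wavedeform):

```python
state = {"calibrated": False}

def test_calibrate():
    # First test mutates shared state that a later test silently relies on.
    state["calibrated"] = True

def test_uses_calibration():
    assert state["calibrated"], "depends on test_calibrate having run first"

def run(order):
    state["calibrated"] = False  # fresh run
    try:
        for test in order:
            test()
        return "pass"
    except AssertionError:
        return "fail"

print(run([test_calibrate, test_uses_calibration]))  # → pass
print(run([test_uses_calibration, test_calibrate]))  # → fail
```

The usual fix is to make each test set up (and tear down) the state it needs instead of inheriting it from a sibling test.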
| 1.0 | wavedeform tests depend on the order in which they are run (Trac #1673) - This is okay on our bots because they run all tests sequentially but will make them fail if you run the tests in parallel. (ctest -j10 or similar)
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1673">https://code.icecube.wisc.edu/projects/icecube/ticket/1673</a>, reported by claudio.kopper and owned by jbraun</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"_ts": "1550067158057333",
"description": "This is okay on our bots because they run all tests sequentially but will make them fail if you run the tests in parallel. (ctest -j10 or similar)\n",
"reporter": "claudio.kopper",
"cc": "",
"resolution": "fixed",
"time": "2016-04-28T23:20:04",
"component": "combo reconstruction",
"summary": "wavedeform tests depend on the order in which they are run",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "jbraun",
"type": "defect"
}
```
</p>
</details>
| defect | wavedeform tests depend on the order in which they are run trac this is okay on our bots because they run all tests sequentially but will make them fail if you run the tests in parallel ctest or similar migrated from json status closed changetime ts description this is okay on our bots because they run all tests sequentially but will make them fail if you run the tests in parallel ctest or similar n reporter claudio kopper cc resolution fixed time component combo reconstruction summary wavedeform tests depend on the order in which they are run priority normal keywords milestone owner jbraun type defect | 1 |
119,459 | 4,770,536,347 | IssuesEvent | 2016-10-26 15:30:56 | gtagency/buzzmobile | https://api.github.com/repos/gtagency/buzzmobile | opened | Automatically figure out when we should be dropping a fix in bearing node | low priority | Currently, we are just using a distance threshold. Ideally, we'd want to automatically drop a "new_fix" iff we believe it's not improving our current calculation for bearing.
https://github.com/gtagency/buzzmobile/pull/77/files#diff-3d31e05db1c95ca0eece5af0f4f7064eR60
There are many ways this can be done, including setting a decay rate for previous fixes. Or we could do something fancier like a particle filtering algo. | 1.0 | Automatically figure out when we should be dropping a fix in bearing node - Currently, we are just using a distance threshold. Ideally, we'd want to automatically drop a "new_fix" iff we believe it's not improving our current calculation for bearing.
https://github.com/gtagency/buzzmobile/pull/77/files#diff-3d31e05db1c95ca0eece5af0f4f7064eR60
There are many ways this can be done, including setting a decay rate for previous fixes. Or we could do something fancier like a particle filtering algo. | non_defect | automatically figure out when we should be dropping a fix in bearing node currently we are just using a distance threshhold ideally we d want to automatically drop a new fix iff we believe it s not improving our current calculation for bearing there are many ways this can be done including setting a decay rate for previous fixes or we could do something fancier like a particle filtering algo | 0 |
531,875 | 15,526,640,995 | IssuesEvent | 2021-03-13 02:04:00 | CreeperMagnet/the-creepers-code | https://api.github.com/repos/CreeperMagnet/the-creepers-code | closed | The installed datapack advancement doesn't have 3/7 of the devs. | priority: low | I just forgot to add the new 3 devs. Low priority, will be fixed in V0.5.
(Imported from old repository) | 1.0 | The installed datapack advancement doesn't have 3/7 of the devs. - I just forgot to add the new 3 devs. Low priority, will be fixed in V0.5.
(Imported from old repository) | non_defect | the installed datapack advancement doesn t have of the devs i just forgot to add the new devs low priority will be fixed in imported from old repository | 0 |
292,293 | 8,955,992,908 | IssuesEvent | 2019-01-26 13:42:01 | BarackOLlama/Festispec-App | https://api.github.com/repos/BarackOLlama/Festispec-App | closed | Evenement aanmaken | Priority TODO | PUC nr: 29
Trigger: New event
Precondition:
Actor: Employee
Step-by-step description (scenario):
- [x] Screen
- [x] VM
- [ ] Database
- [ ] Tested
- 29.1 Employee receives information about the event
- 29.2 Employee clicks the create event button
- 29.3 Employee enters the information into the system
- 29.4 Employee clicks save
- 29.5 The system saves the data
Result:
A new event has been created | 1.0 | Evenement aanmaken - PUC nr: 29
Trigger: New event
Precondition:
Actor: Employee
Step-by-step description (scenario):
- [x] Screen
- [x] VM
- [ ] Database
- [ ] Tested
- 29.1 Employee receives information about the event
- 29.2 Employee clicks the create event button
- 29.3 Employee enters the information into the system
- 29.4 Employee clicks save
- 29.5 The system saves the data
Result:
A new event has been created | non_defect | evenement aanmaken puc nr trigger nieuw evenement preconditie actor medewerker stapsgewijze beschrijving scenario scherm vm database getest medewerker krijgt informatie over evenement medewerker klikt op de knop evenement aanmaken medewerker vult informatie in in het systeem medewerker klikt op opslaan het systeem slaat de gegevens op resultaat er is een nieuw evenement aangemaakt | 0
441,362 | 30,779,838,815 | IssuesEvent | 2023-07-31 09:15:27 | Avaiga/taipy-core | https://api.github.com/repos/Avaiga/taipy-core | closed | move global config attributes to Core section | Core: ⚙️ Configuration 📄 Documentation 🟨 Priority: Medium 🔒 Staff only ⚙️Configuration 📈 Improvement | - [x] Remove enable-clean-all-entities config attribute
- [x] Move attributes from Global section to Core
- [x] Migrate tests
- [x] Update documentation Ref manual and User manual
- [x] Update release notes and migration page | 1.0 | move global config attributes to Core section - - [x] Remove enable-clean-all-entities config attribute
- [x] Move attributes from Global section to Core
- [x] Migrate tests
- [x] Update documentation Ref manual and User manual
- [x] Update release notes and migration page | non_defect | move global config attributes to core section remove enable clean all entities config attribute move attributes from global section to core migrate tests update documentation ref manual and user manual update release notes and migration page | 0 |
113,725 | 24,479,929,119 | IssuesEvent | 2022-10-08 17:39:55 | kubevirt/kubevirt | https://api.github.com/repos/kubevirt/kubevirt | closed | tests: functests should not refer to xml | kind/bug lifecycle/rotten sig/code-quality | /kind bug
For KubeVirt users, xml is an implementation detail. IMHO we should have a very good reason to mention it in a functional test, and I feel that too many functests use `util.GetRunningVMIDomainSpec`.
```
$ git grep GetRunningVMIDomainSpec -- tests | wc -l
29
```
This issue is filed per [suggestion of @xpivarc](https://github.com/kubevirt/kubevirt/pull/7183#discussion_r803605651) to track review/elimination of these usages.
| 1.0 | tests: functests should not refer to xml - /kind bug
For KubeVirt users, xml is an implementation detail. IMHO we should have a very good reason to mention it in a functional test, and I feel that too many functests use `util.GetRunningVMIDomainSpec`.
```
$ git grep GetRunningVMIDomainSpec -- tests | wc -l
29
```
This issue is filed per [suggestion of @xpivarc](https://github.com/kubevirt/kubevirt/pull/7183#discussion_r803605651) to track review/elimination of these usages.
| non_defect | tests functests should not refer to xml kind bug for kubevirt users xml is an implementation detail imho we should have a very good reason to mention it in a functional test and i feel that too many functests use util getrunningvmidomainspec git grep getrunningvmidomainspec tests wc l this issue is filed per to track review elimination of these usages | 0 |
81,618 | 31,152,007,636 | IssuesEvent | 2023-08-16 10:35:27 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | Datatable Column Filter: Initial value set in view is overwritten | :lady_beetle: defect | ### Describe the bug
We switched from PF10 to PF12 and found a bug in the datatable column filter using selectOneButton component for filtering.
We set the default value of the filter in the view, but when the selectOneButton component is rendered, the value is overwritten with 'null'. As soon as the filter has been clicked on, it works as intended.
I also tested it in PF13 and there it seems to work again as expected.
### Reproducer
Open start page of primefaces-test repo forked here https://github.com/raphisuter/primefaces-test-datatable-filter using mvn clean jetty:run -Pmyfaces22
### Expected behavior
Default value set in view is applied to component and filter is active on page visit
### PrimeFaces edition
Community
### PrimeFaces version
12.0.0
### Theme
_No response_
### JSF implementation
MyFaces
### JSF version
2.2
### Java version
1.8
### Browser(s)
chrome, firefox, edge | 1.0 | Datatable Column Filter: Initial value set in view is overwritten - ### Describe the bug
We switched from PF10 to PF12 and found a bug in the datatable column filter using selectOneButton component for filtering.
We set the default value of the filter in the view, but when the selectOneButton component is rendered, the value is overwritten with 'null'. As soon as the filter has been clicked on, it works as intended.
I also tested it in PF13 and there it seems to work again as expected.
### Reproducer
Open start page of primefaces-test repo forked here https://github.com/raphisuter/primefaces-test-datatable-filter using mvn clean jetty:run -Pmyfaces22
### Expected behavior
Default value set in view is applied to component and filter is active on page visit
### PrimeFaces edition
Community
### PrimeFaces version
12.0.0
### Theme
_No response_
### JSF implementation
MyFaces
### JSF version
2.2
### Java version
1.8
### Browser(s)
chrome, firefox, edge | defect | datatable column filter initial value set in view is overwritten describe the bug we switched from to and found a bug in the datatable column filter using selectonebutton component for filtering we set the default value of the filter in the view but when the selectonebutton component is renderd the value is overwritten with null as soon as the filter has been clicked on it works as intended i also tested it in and there it seems to work again as expected reproducer open start page of primefaces test repo forked here using mvn clean jetty run expected behavior default value set in view is applied to component and filter is active on page visit primefaces edition community primefaces version theme no response jsf implementation myfaces jsf version java version browser s chrome firefox edge | 1 |
30,370 | 2,723,600,757 | IssuesEvent | 2015-04-14 13:36:54 | CruxFramework/crux-widgets | https://api.github.com/repos/CruxFramework/crux-widgets | closed | ClassPathResolver section in UserManual is out of date | bug imported Milestone-3.0.0 Priority-Medium Wiki | _From [brunodep...@gmail.com](https://code.google.com/u/108972312674998482139/) on May 21, 2010 16:20:51_
What steps will reproduce the problem? 1. Go to Wiki/UserManual 2. Check instructions for creating a WeblogicClassPathResolver
3. Check method public URL findWebBaseDir()
The document says to override method public URL findWebBaseDir(). However
the class ClassPathResolverImpl doesn't have this method. It has a similar
method:
public URL[] findWebBaseDirs().
Seems like this section of the UserManual is out of date. Could you guys
update it?
Cheers
B
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=115_ | 1.0 | ClassPathResolver section in UserManual is out of date - _From [brunodep...@gmail.com](https://code.google.com/u/108972312674998482139/) on May 21, 2010 16:20:51_
What steps will reproduce the problem? 1. Go to Wiki/UserManual 2. Check instructions for creating a WeblogicClassPathResolver
3. Check method public URL findWebBaseDir()
The document says to override method public URL findWebBaseDir(). However
the class ClassPathResolverImpl doesn't have this method. It has a similar
method:
public URL[] findWebBaseDirs().
Seems like this section of the UserManual is out of date. Could you guys
update it?
Cheers
B
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=115_ | non_defect | classpathresolver section in usermanual is out of date from on may what steps will reproduce the problem go to wiki usermanual check instructions for creating a weblogicclasspathresolver check method public url findwebbasedir the document says to override method public url findwebbasedir however the class classpathresolverimpl doesn t have this method it has a similar method public url findwebbasedirs seems like this section of the usermanual is out of date could you guys update it cheers b original issue | 0 |
8,053 | 2,611,450,275 | IssuesEvent | 2015-02-27 04:58:43 | chrsmith/hedgewars | https://api.github.com/repos/chrsmith/hedgewars | closed | Can't use cake when standing on explosive! | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Load a map with explosives.
2. Stand on one of them.
3. Use cake.
What is the expected output? What do you see instead?
The cake doesn't walk as usual, but it "sinks" into the explosive. Also noticed the
cake activation timer runs much faster than usual.
What version of the product are you using? On what operating system?
0.9.13 on Windows XP SP2
Please provide any additional information below.
```
Original issue reported on code.google.com by `adibiaz...@gmail.com` on 31 Aug 2010 at 5:47 | 1.0 | Can't use cake when standing on explosive! - ```
What steps will reproduce the problem?
1. Load a map with explosives.
2. Stand on one of them.
3. Use cake.
What is the expected output? What do you see instead?
The cake doesn't walk as usual, but it "sinks" into the explosive. Also noticed the
cake activation timer runs much faster than usual.
What version of the product are you using? On what operating system?
0.9.13 on Windows XP SP2
Please provide any additional information below.
```
Original issue reported on code.google.com by `adibiaz...@gmail.com` on 31 Aug 2010 at 5:47 | defect | can t use cake when standing on explosive what steps will reproduce the problem load a map with explosives stand on one of them use cake what is the expected output what do you see instead the cake doesn t walk as usual but it sinks into explosive also noticed the cake activation timer runs much faster than usual what version of the product are you using on what operating system on windows xp please provide any additional information below original issue reported on code google com by adibiaz gmail com on aug at | 1 |
1,994 | 2,603,974,621 | IssuesEvent | 2015-02-24 19:01:11 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳沈阳生殖疱疹的症状 | auto-migrated Priority-Medium Type-Defect | ```
Shenyang Shenyang symptoms of genital herpes 〓 Shenyang Military Region Political Department Hospital, STD department 〓 TEL: 024-31023308 〓 Founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted diseases; located at No. 32, Erwei Road, Shenhe District, Shenyang. A hospital with a long history, established alongside New China, with fine equipment, authoritative techniques, and a gathering of experts; a comprehensive hospital integrating prevention, health care, medical treatment, scientific research, and rehabilitation. One of the country's first public grade-A military hospitals, among the first nationally designated units for standardized medical care, and a teaching hospital for well-known institutions of higher education such as the Fourth Military Medical University and Southeast University. It was rated an advanced unit for health work by the Health Department of the PLA Air Force Logistics Department and has twice been awarded a collective second-class merit citation.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:12 | 1.0 | 沈阳沈阳生殖疱疹的症状 - ```
Shenyang Shenyang symptoms of genital herpes 〓 Shenyang Military Region Political Department Hospital, STD department 〓 TEL: 024-31023308 〓 Founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted diseases; located at No. 32, Erwei Road, Shenhe District, Shenyang. A hospital with a long history, established alongside New China, with fine equipment, authoritative techniques, and a gathering of experts; a comprehensive hospital integrating prevention, health care, medical treatment, scientific research, and rehabilitation. One of the country's first public grade-A military hospitals, among the first nationally designated units for standardized medical care, and a teaching hospital for well-known institutions of higher education such as the Fourth Military Medical University and Southeast University. It was rated an advanced unit for health work by the Health Department of the PLA Air Force Logistics Department and has twice been awarded a collective second-class merit citation.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:12 | defect | 沈阳沈阳生殖疱疹的症状 沈阳沈阳生殖疱疹的症状〓沈陽軍區政治部醫院性病〓tel: 〓 , � �� 。是一所與新中國同建立共輝� ��的歷史悠久、設備精良、技術權威、專家云集,是預防、保 健、醫療、科研康復為一體的綜合性醫院。是國家首批公立�� �等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學� ��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍 空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集�� �二等功。 original issue reported on code google com by gmail com on jun at | 1 |
62,968 | 17,274,223,953 | IssuesEvent | 2021-07-23 02:24:24 | milvus-io/milvus-insight | https://api.github.com/repos/milvus-io/milvus-insight | opened | Auto id set true is not working when create collection | defect | **Describe the bug:**
Auto id set to true is not working when creating a collection
**Steps to reproduce:**
1. create collection
2. set autoid true
3. it always returns false
**Milvus-insight version:**
latest
**Milvus version:**
| 1.0 | Auto id set true is not working when create collection - **Describe the bug:**
Auto id set true is not working when create collection
**Steps to reproduce:**
1. create collection
2. set autoid true
3. it always returns false
**Milvus-insight version:**
latest
**Milvus version:**
| defect | auto id set true is not working when create collection describe the bug auto id set true is not working when create collection steps to reproduce create collection set autoid true it s always return false milvus insight version latest milvus version | 1 |
53,375 | 13,261,477,729 | IssuesEvent | 2020-08-20 19:58:24 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | [clsim] run the python tests (Trac #1252) | Migrated from Trac combo simulation defect | clsim has python tests under `resources/tests`, but cmake doesn't know about them. Perhaps add `i3_test_scripts(resources/tests/*.py)` in the right place?
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1252">https://code.icecube.wisc.edu/projects/icecube/ticket/1252</a>, reported by david.schultz and owned by claudio.kopper</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:45",
"_ts": "1550067105393059",
"description": "clsim has python tests under `resources/tests`, but cmake doesn't know about them. Perhaps add `i3_test_scripts(resources/tests/*.py)` in the right place?",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"time": "2015-08-20T18:32:32",
"component": "combo simulation",
"summary": "[clsim] run the python tests",
"priority": "blocker",
"keywords": "",
"milestone": "",
"owner": "claudio.kopper",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [clsim] run the python tests (Trac #1252) - clsim has python tests under `resources/tests`, but cmake doesn't know about them. Perhaps add `i3_test_scripts(resources/tests/*.py)` in the right place?
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1252">https://code.icecube.wisc.edu/projects/icecube/ticket/1252</a>, reported by david.schultz and owned by claudio.kopper</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:45",
"_ts": "1550067105393059",
"description": "clsim has python tests under `resources/tests`, but cmake doesn't know about them. Perhaps add `i3_test_scripts(resources/tests/*.py)` in the right place?",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"time": "2015-08-20T18:32:32",
"component": "combo simulation",
"summary": "[clsim] run the python tests",
"priority": "blocker",
"keywords": "",
"milestone": "",
"owner": "claudio.kopper",
"type": "defect"
}
```
</p>
</details>
| defect | run the python tests trac clsim has python tests under resources tests but cmake doesn t know about them perhaps add test scripts resources tests py in the right place migrated from json status closed changetime ts description clsim has python tests under resources tests but cmake doesn t know about them perhaps add test scripts resources tests py in the right place reporter david schultz cc resolution fixed time component combo simulation summary run the python tests priority blocker keywords milestone owner claudio kopper type defect | 1 |
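The fix proposed in the clsim row above is a one-line CMake change. As a sketch only (the surrounding macro calls are assumptions about clsim's CMakeLists.txt layout, not taken from the ticket), it might look like:

```cmake
# Hypothetical excerpt of clsim's CMakeLists.txt; only the i3_test_scripts
# call comes from the ticket, the surrounding lines are illustrative.
i3_project(clsim
  DOCS_DIR resources/docs)

# ... i3_add_library(...) / i3_add_pybindings(...) targets ...

# Register the python tests under resources/tests so cmake/ctest runs them:
i3_test_scripts(resources/tests/*.py)
```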
26,093 | 4,581,659,262 | IssuesEvent | 2016-09-19 07:00:02 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | TieredMenu Overlay disappears on Mouse Down when item's text is clicked | 5.3.18 6.0.5 defect | Reported By PRO User;
```
http://www.primefaces.org/showcase/ui/menu/tieredMenu.xhtml
Steps:
1. Click the "Show" button to bring up the dynamic overlay menu.
2. Navigate to "Ajax MenuItems->Save" and just hover over Save with your mouse.
3. Press your mouse down but do NOT release the mouse button.
Result: The menu disappears.
``` | 1.0 | TieredMenu Overlay disappears on Mouse Down when item's text is clicked - Reported By PRO User;
```
http://www.primefaces.org/showcase/ui/menu/tieredMenu.xhtml
Steps:
1. Click the "Show" button to bring up the dynamic overlay menu.
2. Navigate to "Ajax MenuItems->Save" and just hover over Save with your mouse.
3. Press your mouse down but do NOT release the mouse button.
Result: The menu disappears.
``` | defect | tieredmenu overlay disappears on mouse down when item s text is clicked reported by pro user steps click the show button to bring up the dynamic overlay menu navigate to ajax menuitems save and just hover over save with your mouse press your mouse down but do not release the mouse button result the menu disappears | 1 |
252,690 | 8,039,168,629 | IssuesEvent | 2018-07-30 17:32:01 | thirtybees/thirtybees | https://api.github.com/repos/thirtybees/thirtybees | opened | Some URLs are marked as being translatable | Priority: low | Looking into translations, one can find strings offered for translation like this:
`https://forum.thirtybees.com/`
`https://docs.thirtybees.com/`
As URLs shouldn't be subject to translation, the translation function should get removed. | 1.0 | Some URLs are marked as being translatable - Looking into translations, one can find strings offered for translation like this:
`https://forum.thirtybees.com/`
`https://docs.thirtybees.com/`
As URLs shouldn't be subject to translation, the translation function should get removed. | non_defect | some urls are marked as being translatable looking into translations one can find strings offered for translation like this as urls shouldn t be subject to translation the translation function should get removed | 0 |
79,853 | 29,480,389,335 | IssuesEvent | 2023-06-02 04:51:01 | idaholab/moose | https://api.github.com/repos/idaholab/moose | closed | PorousFlow multi app material sizing bug | T: defect P: normal | ## Bug Description
@Joseph-0123 noted in #23650 that porous flow was seg faulting in a multi-app scenario while sizing old properties in `PorousFlowMaterial`, which is the base class for all of our PorousFlow materials.
## Steps to Reproduce
I could reproduce this bug in this particular scenario where porous flow is the master app and a second moose app is the sub.
## Impact
This stops PorousFlow from being useful in this scenario, even though we do test its use in a multi-app situation.
| 1.0 | PorousFlow multi app material sizing bug - ## Bug Description
@Joseph-0123 noted in #23650 that porous flow was seg faulting in a multi-app scenario while sizing old properties in `PorousFlowMaterial`, which is the base class for all of our PorousFlow materials.
## Steps to Reproduce
I could reproduce this bug in this particular scenario where porous flow is the master app and a second moose app is the sub.
## Impact
This stops PorousFlow from being useful in this scenario, even though we do test its use in a multi-app situation.
| defect | porousflow multi app material sizing bug bug description joseph noted in that porous flow was seg faulting while sizing old properties in porousflowmaterial in a multi app scenario which is the base class for all of our porousflow materials steps to reproduce i could reproduce this bug in this particular scenario where porous flow is the master app and a second moose app is the sub impact stopping porousflow being useful in this scenario even though we do test its use in a multi app situation | 1 |
58,143 | 16,370,237,458 | IssuesEvent | 2021-05-15 01:05:22 | idaholab/moose | https://api.github.com/repos/idaholab/moose | closed | CIVET result documetation do not appear from submodules | P: normal PR: Auto Merge T: defect | ## Bug Description
<!--A clear and concise description of the problem (Note: A missing feature is not a bug).-->
The links to CIVET results for moose or other submodules do not appear in the RTM for applications.
## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
Build documentation for an application (e.g., blackbear) and follow the links to the MOOSE RTM; the CIVET badges do not appear.
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
This is an annoyance, but it is possible to get to those links by going to the website for the animal itself.
| 1.0 | CIVET result documetation do not appear from submodules - ## Bug Description
<!--A clear and concise description of the problem (Note: A missing feature is not a bug).-->
The links to CIVET results for moose or other submodules do not appear in the RTM for applications.
## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
Build documentation for an application (e.g., blackbear) and follow the links to the MOOSE RTM; the CIVET badges do not appear.
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
This is an annoyance, but it is possible to get to those links by going to the website for the animal itself.
| defect | civet result documetation do not appear from submodules bug description the links to civet results for moose or other submodules do not appear for in the rtm for applications steps to reproduce build documentation for an application e g blackbear and follow the links to the moose rtm the civet badges do not appear impact this is an annoyance but it is possible to get to those links by going to the website for the animal itself | 1 |
426,825 | 29,661,316,801 | IssuesEvent | 2023-06-10 07:23:39 | hasanmiadev/magical-ride | https://api.github.com/repos/hasanmiadev/magical-ride | closed | Software Requirement Specification | documentation | The purpose of this document is to define the requirements for the development of a REST API for a ride sharing system. The API will enable users to create profiles, request rides, and allow drivers to view ride requests and respond if available. The system aims to facilitate efficient and convenient transportation services for users. | 1.0 | Software Requirement Specification - The purpose of this document is to define the requirements for the development of a REST API for a ride sharing system. The API will enable users to create profiles, request rides, and allow drivers to view ride requests and respond if available. The system aims to facilitate efficient and convenient transportation services for users. | non_defect | software requirement specification the purpose of this document is to define the requirements for the development of a rest api for a ride sharing system the api will enable users to create profiles request rides and allow drivers to view ride requests and respond if available the system aims to facilitate efficient and convenient transportation services for users | 0 |
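The ride-sharing requirements in the row above (user profiles, ride requests, drivers who view requests and respond if available) can be sketched as a minimal domain model. This is purely illustrative; every class, field, and function name here is an assumption for the sketch, not part of the actual magical-ride API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rider:
    name: str

@dataclass
class Driver:
    name: str
    available: bool = True  # a driver only responds while available

@dataclass
class RideRequest:
    rider: Rider
    pickup: str
    dropoff: str
    driver: Optional[Driver] = None  # filled in once a driver accepts

def respond(request: RideRequest, driver: Driver) -> bool:
    """A driver views a ride request and responds if available."""
    if driver.available and request.driver is None:
        request.driver = driver
        driver.available = False
        return True
    return False

alice = Rider("alice")
bob = Driver("bob")
req = RideRequest(alice, "airport", "downtown")
print(respond(req, bob))  # -> True: a free driver accepts the request
print(respond(req, bob))  # -> False: request already assigned, driver busy
```

In a REST layer these operations would map naturally onto endpoints such as `POST /rides` (create a request) and `POST /rides/{id}/respond` (driver accepts), but the spec above does not pin those routes down.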
473,354 | 13,641,072,838 | IssuesEvent | 2020-09-25 13:40:49 | xournalpp/xournalpp | https://api.github.com/repos/xournalpp/xournalpp | closed | Default location for image import (?) | bug confirmed papercut priority: medium | When importing an image, I now get "Folder contents cannot be displayed" and I have to walk down the tree from my home directory to find the desired image. This occurs even if I launch xournal++ from the desired working directory.
Previous behavior was (if I recall correctly) to open the working directory.
It would also be nice if one could specify a default image working directory. | 1.0 | Default location for image import (?) - When importing an image, I now get "Folder contents cannot be displayed" and I have to walk down the tree from my home directory to find the desired image. This occurs even if I launch xournal++ from the desired working directory.
Previous behavior was (if I recall correctly) to open the working directory.
It would also be nice if one could specify a default image working directory. | non_defect | default location for image import when importing an image i now get folder contents cannot be displayed and i have to walk down the tree from my home directory to find the desired image this occurs even if i launch xournal from the desired working directory previous behavior was if i recall correctly to open the working directory it would also be nice if one could specify a default image working directory | 0 |
71,267 | 23,513,375,922 | IssuesEvent | 2022-08-18 18:50:48 | idaholab/moose | https://api.github.com/repos/idaholab/moose | closed | Conda checks not working properly | T: defect P: normal | Conda checks in the makefile are having issues and displaying when they shouldn't, or more accurately, it appears that conda is not working correctly on people's installs and causing problems. Regardless, the checks should be removed in the interim to avoid annoyances while the problem is fixed. | 1.0 | Conda checks not working properly - Conda checks in the makefile are having issues and displaying when they shouldn't, or more accurately, it appears that conda is not working correctly on people's installs and causing problems. Regardless, the checks should be removed in the interim to avoid annoyances while the problem is fixed. | defect | conda checks not working properly conda checks in the makefile are having issues and displaying when they shouldn t or more accurately it appears that conda is not working correctly on people s installs and causing problems regardless the checks should be removed in the interim to avoid annoyances while the problem is fixed | 1 |
50,635 | 13,187,648,719 | IssuesEvent | 2020-08-13 04:06:23 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | errors when running L2 processing on noise generation files (Trac #1105) | Migrated from Trac cmake defect | I tried to do some L2 processing on L1 vuvuzela noise files. By running level2_Master.py from the project std-processing, I ran into the following error from cobol machines 38-39, 40-42, and 89-90 at UMD. Other machines can successfully run my jobs.
python: /data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/src/dataclasses/private/dataclasses/payload/I3SuperDST.cxx:1181: void I3SuperDST::load_v1(Archive&) [with Archive = boost::archive::portable_binary_iarchive]: Assertion `width_it != width_end' failed.
/data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/build/env-shell.sh: line 139: 10105 Aborted $NEW_SHELL $ARGV
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1105">https://code.icecube.wisc.edu/ticket/1105</a>, reported by elims and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-08-11T19:24:51",
"description": "I tried to do some L2 processing on L1 vuvuzela noise files. By running level2_Master.py from the project std-processing, I ran into the following error from cobol machines 38-39, 40-42, and 89-90 at UMD. Other machines can successfully run my jobs.\n\npython: /data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/src/dataclasses/private/dataclasses/payload/I3SuperDST.cxx:1181: void I3SuperDST::load_v1(Archive&) [with Archive = boost::archive::portable_binary_iarchive]: Assertion `width_it != width_end' failed.\n/data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/build/env-shell.sh: line 139: 10105 Aborted $NEW_SHELL $ARGV\n",
"reporter": "elims",
"cc": "",
"resolution": "duplicate",
"_ts": "1439321091027396",
"component": "cmake",
"summary": "errors when running L2 processing on noise generation files",
"priority": "normal",
"keywords": "",
"time": "2015-08-11T16:35:51",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | errors when running L2 processing on noise generation files (Trac #1105) - I tried to do some L2 processing on L1 vuvuzela noise files. By running level2_Master.py from the project std-processing, I ran into the following error from cobol machines 38-39, 40-42, and 89-90 at UMD. Other machines can successfully run my jobs.
python: /data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/src/dataclasses/private/dataclasses/payload/I3SuperDST.cxx:1181: void I3SuperDST::load_v1(Archive&) [with Archive = boost::archive::portable_binary_iarchive]: Assertion `width_it != width_end' failed.
/data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/build/env-shell.sh: line 139: 10105 Aborted $NEW_SHELL $ARGV
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1105">https://code.icecube.wisc.edu/ticket/1105</a>, reported by elims and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-08-11T19:24:51",
"description": "I tried to do some L2 processing on L1 vuvuzela noise files. By running level2_Master.py from the project std-processing, I ran into the following error from cobol machines 38-39, 40-42, and 89-90 at UMD. Other machines can successfully run my jobs.\n\npython: /data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/src/dataclasses/private/dataclasses/payload/I3SuperDST.cxx:1181: void I3SuperDST::load_v1(Archive&) [with Archive = boost::archive::portable_binary_iarchive]: Assertion `width_it != width_end' failed.\n/data/condor_builds/users/elims/software/IC2011-L2_V12-08-00_IceSim4compat_V4/build/env-shell.sh: line 139: 10105 Aborted $NEW_SHELL $ARGV\n",
"reporter": "elims",
"cc": "",
"resolution": "duplicate",
"_ts": "1439321091027396",
"component": "cmake",
"summary": "errors when running L2 processing on noise generation files",
"priority": "normal",
"keywords": "",
"time": "2015-08-11T16:35:51",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| defect | errors when running processing on noise generation files trac i tried to do some processing on vuvuzela noise files by running master py from the project std processing i ran into the following error from cobol machines and at umd other machines can successfully run my jobs python data condor builds users elims software src dataclasses private dataclasses payload cxx void load archive assertion width it width end failed data condor builds users elims software build env shell sh line aborted new shell argv migrated from json status closed changetime description i tried to do some processing on vuvuzela noise files by running master py from the project std processing i ran into the following error from cobol machines and at umd other machines can successfully run my jobs n npython data condor builds users elims software src dataclasses private dataclasses payload cxx void load archive assertion width it width end failed n data condor builds users elims software build env shell sh line aborted new shell argv n reporter elims cc resolution duplicate ts component cmake summary errors when running processing on noise generation files priority normal keywords time milestone owner nega type defect | 1 |
48,373 | 13,068,470,739 | IssuesEvent | 2020-07-31 03:40:56 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | [steamshovel] doesn't work on macos with python3 (Trac #2250) | Migrated from Trac combo core defect | Steamshovel works fine with python2.7 but fails with python3.7.2. On macOS Mojave Version 10.14.3. Qt version 5.12.1.
The exception that is thrown is `boost::python::error_already_set`, which is boost::python catching an error without telling you which one; see #2213.
Note that the interactive console also fails to run (this is not what causes the crash), even though it works fine in python2.
The exception involving 'DOMLabel' also appears to be unrelated and not present in python2.
```text
$ steamshovel
No module named 'IPython'
WARN (steamshovel): Cannot embed IPython Qt widget, falling back to tty-based console (embed.cpp:95 in scripting::PyConsole::PyConsole(const scripting::PyInterpreter &, scripting::PyConsole::Type))
WARN (steamshovel): Could not use IPython, falling back to vanilla python console (embed.cpp:103 in scripting::PyConsole::PyConsole(const scripting::PyInterpreter &, scripting::PyConsole::Type))
Python 3.7.2 (default, Feb 12 2019, 08:15:36)
[Clang 10.0.0 (clang-1000.11.45.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> ERROR (ShovelMainWindow): Problem executing scenario code:
Traceback (most recent call last):
File "<string>", line 25, in _dumpScenario
RuntimeError: No such factory name: 'DOMLabel'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 113, in <module>
File "<string>", line 30, in _dumpScenario
NameError: name 'StandardError' is not defined
String was:
def _dumpScenario():
from icecube.shovelart import ActivePixmapOverlay, Arrow, ArtistHandle, ArtistHandleList, ArtistKeylist, BaseLineObject, ChoiceSetting, ColorMap, ColoredObject, ConstantColorMap, ConstantFloat, ConstantQColor, ConstantTime, ConstantVec3d, Cylinder, DynamicLines, FileSetting, I3TimeColorMap, KeySetting, LinterpFunctionFloat, LinterpFunctionQColor, LinterpFunctionVec3d, OMKeySet, OverlayLine, OverlaySizeHint, OverlaySizeHints, ParticlePath, ParticlePoint, Phantom, PixmapOverlay, PyArtist, PyQColor, PyQFont, PyVariantFloat, PyVariantQColor, PyVariantTime, PyVariantVec3d, RangeSetting, Scenario, SceneGroup, SceneObject, SceneOverlay, SolidObject, Sphere, StaticLines, StepFunctionFloat, StepFunctionQColor, StepFunctionTime, StepFunctionVec3d, Text, TextOverlay, TimePoint, TimeWindow, TimeWindowColor, VariantFloat, VariantQColor, VariantTime, VariantVec3d, VariantVec3dList, Vec3dList, ZPlane, vec3d
from icecube.icetray import OMKey
from icecube.icetray import logging
scenario = window.gl.scenario
scenario.clear()
try:
artist = scenario.add( 'Detector', ['I3Geometry', ] )
scenario.setIsActive( artist, False )
scenario.changeSetting( artist, 'DOM color', PyQColor(255,255,255,255) )
scenario.changeSetting( artist, 'DOM radius', 1 )
scenario.changeSetting( artist, 'outline width', 0 )
scenario.changeSetting( artist, 'high quality DOMs', True )
scenario.changeSetting( artist, 'string cross', False )
scenario.changeSetting( artist, 'string color', PyQColor(255,255,255,76) )
scenario.changeSetting( artist, 'string width', 1 )
scenario.changeSetting( artist, 'hide', 0 )
scenario.changeSetting( artist, 'DOM labels', False )
scenario.setIsActive( artist, True )
except StandardError as e:
logging.log_error( e.__class__.__name__ + " occured while loading saved state of Detector: " + str(e) )
except:
logging.log_error( "Unknown error occured while loading saved state of Detector: " + str(e) )
try:
artist = scenario.add( 'DOMLabel', ['I3Geometry', ] )
scenario.setIsActive( artist, False )
scenario.changeSetting( artist, 'font', PyQFont.fromString('Arial,13,-1,5,50,0,0,0,0,0') )
scenario.changeSetting( artist, 'selection', [] )
scenario.changeSetting( artist, 'fontsize', 13 )
except StandardError as e:
logging.log_error( e.__class__.__name__ + " occured while loading saved state of DOMLabel: " + str(e) )
except:
logging.log_error( "Unknown error occured while loading saved state of DOMLabel: " + str(e) )
try:
artist = scenario.add( 'DOMOrientation', ['I3Geometry', ] )
scenario.setIsActive( artist, False )
scenario.changeSetting( artist, 'opacity', 1 )
scenario.changeSetting( artist, 'Label Axes', False )
scenario.changeSetting( artist, 'Draw Orthogonal Direction', True )
scenario.changeSetting( artist, 'ortho length', '10m' )
scenario.changeSetting( artist, 'Draw DOM Direction', True )
scenario.changeSetting( artist, 'Draw Flasher Direction', True )
scenario.changeSetting( artist, 'dom length', '10m' )
scenario.changeSetting( artist, 'flasher length', '10m' )
scenario.changeSetting( artist, 'line width', 10 )
scenario.changeSetting( artist, 'head angle', 20 )
scenario.changeSetting( artist, 'Draw Default Orientation', False )
scenario.changeSetting( artist, 'head length', 1 )
except StandardError as e:
logging.log_error( e.__class__.__name__ + " occured while loading saved state of DOMOrientation: " + str(e) )
except:
logging.log_error( "Unknown error occured while loading saved state of DOMOrientation: " + str(e) )
try:
artist = scenario.add( 'Particles', ['I3MCTree', ] )
scenario.setIsActive( artist, False )
scenario.changeSetting( artist, 'min. energy [track]', '' )
scenario.changeSetting( artist, 'scale', 10 )
scenario.changeSetting( artist, 'show light fronts', False )
scenario.changeSetting( artist, 'colormap', I3TimeColorMap() )
scenario.changeSetting( artist, 'power', 0.15 )
scenario.changeSetting( artist, 'color', PyQColor(243,51,51,153) )
scenario.changeSetting( artist, 'vertex size', 0 )
scenario.changeSetting( artist, 'labels', True )
scenario.changeSetting( artist, 'Cherenkov cone size', 0 )
scenario.changeSetting( artist, 'blue light fronts', True )
scenario.changeSetting( artist, 'incoming/outgoing', True )
scenario.changeSetting( artist, 'color by type', True )
scenario.changeSetting( artist, 'arrow head size', 120 )
scenario.changeSetting( artist, 'linewidth', 3 )
scenario.changeSetting( artist, 'min. energy [cascade]', '' )
scenario.setIsActive( artist, True )
except StandardError as e:
logging.log_error( e.__class__.__name__ + " occured while loading saved state of Particles: " + str(e) )
except:
logging.log_error( "Unknown error occured while loading saved state of Particles: " + str(e) )
try:
artist = scenario.add( 'Bubbles', ['I3Geometry', 'IceTopRawData', ] )
scenario.setIsActive( artist, False )
scenario.changeSetting( artist, 'scale', 10 )
scenario.changeSetting( artist, 'colormap', I3TimeColorMap() )
scenario.changeSetting( artist, 'power', 0.15 )
scenario.changeSetting( artist, 'custom color window', '' )
scenario.changeSetting( artist, 'log10(delay/ns)', 5 )
scenario.changeSetting( artist, 'static', PyQColor(255,0,255,255) )
scenario.setIsActive( artist, True )
except StandardError as e:
logging.log_error( e.__class__.__name__ + " occured while loading saved state of Bubbles: " + str(e) )
except:
logging.log_error( "Unknown error occured while loading saved state of Bubbles: " + str(e) )
try:
artist = scenario.add( 'Bubbles', ['I3Geometry', 'InIceRawData', ] )
scenario.setIsActive( artist, False )
scenario.changeSetting( artist, 'scale', 10 )
scenario.changeSetting( artist, 'colormap', I3TimeColorMap() )
scenario.changeSetting( artist, 'power', 0.15 )
scenario.changeSetting( artist, 'custom color window', '' )
scenario.changeSetting( artist, 'log10(delay/ns)', 5 )
scenario.changeSetting( artist, 'static', PyQColor(255,0,255,255) )
scenario.setIsActive( artist, True )
except StandardError as e:
logging.log_error( e.__class__.__name__ + " occured while loading saved state of Bubbles: " + str(e) )
except:
logging.log_error( "Unknown error occured while loading saved state of Bubbles: " + str(e) )
window.gl.setCameraPivot(0.0, 0.0, 0.0)
window.gl.setCameraLoc(-1864.75549316, -362.478149414, 590.385009766)
window.gl.setCameraOrientation(0.190808907151, -0.981627702713, 3.51667404175e-06)
window.gl.cameraLock = False
window.gl.perspectiveView = True
window.gl.backgroundColor = PyQColor(38,38,38,255)
window.timeline.rangeFinder = "Default"
window.frame_filter.code = ""
window.activeView = 0
_dumpScenario()
del _dumpScenario (ShovelMainWindow.cpp:333 in bool ShovelMainWindow::loadScenarioCode(const QString &))
objc[43690]: Class FIFinderSyncExtensionHost is implemented in both /System/Library/PrivateFrameworks/FinderKit.framework/Versions/A/FinderKit (0x7fff99c38210) and /System/Library/PrivateFrameworks/FileProvider.framework/OverrideBundles/FinderSyncCollaborationFileProviderOverride.bundle/Contents/MacOS/FinderSyncCollaborationFileProviderOverride (0x12c0eadc8). One of the two will be used. Which one is undefined.
libc++abi.dylib: terminating with uncaught exception of type boost::python::error_already_set
Abort trap: 6
```
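The `NameError: name 'StandardError' is not defined` in the log above is a plain Python 2-to-3 issue: `StandardError` was removed in Python 3, so the saved-scenario code's `except StandardError` clauses fail under python3. A minimal sketch of the idea (illustrative only; the real fix in steamshovel would be to port those clauses to `except Exception`):

```python
# Python 2 had StandardError as a base class for most built-in exceptions;
# Python 3 removed it, so "except StandardError as e:" raises NameError.
# A minimal forward-compatible shim:
try:
    StandardError          # exists on Python 2 only
except NameError:          # Python 3
    StandardError = Exception

# Now an except clause written for Python 2 keeps working:
try:
    raise ValueError("saved scenario failed to load")
except StandardError as e:
    recovered = str(e)

print(recovered)  # -> saved scenario failed to load
```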
Migrated from https://code.icecube.wisc.edu/ticket/2250
```json
{
"status": "closed",
"changetime": "2020-06-24T12:31:42",
"description": "Steamshovel works fine with python2.7 but fails with python3.7.2. On macOS Mojave Version 10.14.3. Qt version 5.12.1.\n\nThe exception that is thrown is `boost::python::error_already_set` which is boost::python catching an error and not telling you which one, see #2221.\n\nNote that the interactive console does not run as well (which is not causing the crash), even though that works fine in python2 as well.\n\nThe exception involving 'DOMLabel' also appears to be unrelated and not present in python2. \n\n{{{\n$ steamshovel\nNo module named 'IPython'\nWARN (steamshovel): Cannot embed IPython Qt widget, falling back to tty-based console (embed.cpp:95 in scripting::PyConsole::PyConsole(const scripting::PyInterpreter &, scripting::PyConsole::Type))\nWARN (steamshovel): Could not use IPython, falling back to vanilla python console (embed.cpp:103 in scripting::PyConsole::PyConsole(const scripting::PyInterpreter &, scripting::PyConsole::Type))\nPython 3.7.2 (default, Feb 12 2019, 08:15:36) \n[Clang 10.0.0 (clang-1000.11.45.5)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n(InteractiveConsole)\n>>> ERROR (ShovelMainWindow): Problem executing scenario code:\nTraceback (most recent call last):\n File \"<string>\", line 25, in _dumpScenario\nRuntimeError: No such factory name: 'DOMLabel'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"<string>\", line 113, in <module>\n File \"<string>\", line 30, in _dumpScenario\nNameError: name 'StandardError' is not defined\nString was:\ndef _dumpScenario():\n from icecube.shovelart import ActivePixmapOverlay, Arrow, ArtistHandle, ArtistHandleList, ArtistKeylist, BaseLineObject, ChoiceSetting, ColorMap, ColoredObject, ConstantColorMap, ConstantFloat, ConstantQColor, ConstantTime, ConstantVec3d, Cylinder, DynamicLines, FileSetting, I3TimeColorMap, KeySetting, LinterpFunctionFloat, LinterpFunctionQColor, 
LinterpFunctionVec3d, OMKeySet, OverlayLine, OverlaySizeHint, OverlaySizeHints, ParticlePath, ParticlePoint, Phantom, PixmapOverlay, PyArtist, PyQColor, PyQFont, PyVariantFloat, PyVariantQColor, PyVariantTime, PyVariantVec3d, RangeSetting, Scenario, SceneGroup, SceneObject, SceneOverlay, SolidObject, Sphere, StaticLines, StepFunctionFloat, StepFunctionQColor, StepFunctionTime, StepFunctionVec3d, Text, TextOverlay, TimePoint, TimeWindow, TimeWindowColor, VariantFloat, VariantQColor, VariantTime, VariantVec3d, VariantVec3dList, Vec3dList, ZPlane, vec3d\n from icecube.icetray import OMKey\n from icecube.icetray import logging\n scenario = window.gl.scenario\n scenario.clear()\n try:\n artist = scenario.add( 'Detector', ['I3Geometry', ] )\n scenario.setIsActive( artist, False )\n scenario.changeSetting( artist, 'DOM color', PyQColor(255,255,255,255) )\n scenario.changeSetting( artist, 'DOM radius', 1 )\n scenario.changeSetting( artist, 'outline width', 0 )\n scenario.changeSetting( artist, 'high quality DOMs', True )\n scenario.changeSetting( artist, 'string cross', False )\n scenario.changeSetting( artist, 'string color', PyQColor(255,255,255,76) )\n scenario.changeSetting( artist, 'string width', 1 )\n scenario.changeSetting( artist, 'hide', 0 )\n scenario.changeSetting( artist, 'DOM labels', False )\n scenario.setIsActive( artist, True )\n except StandardError as e:\n logging.log_error( e.__class__.__name__ + \" occured while loading saved state of Detector: \" + str(e) )\n except:\n logging.log_error( \"Unknown error occured while loading saved state of Detector: \" + str(e) )\n try:\n artist = scenario.add( 'DOMLabel', ['I3Geometry', ] )\n scenario.setIsActive( artist, False )\n scenario.changeSetting( artist, 'font', PyQFont.fromString('Arial,13,-1,5,50,0,0,0,0,0') )\n scenario.changeSetting( artist, 'selection', [] )\n scenario.changeSetting( artist, 'fontsize', 13 )\n except StandardError as e:\n logging.log_error( e.__class__.__name__ + \" occured while 
loading saved state of DOMLabel: \" + str(e) )\n except:\n logging.log_error( \"Unknown error occured while loading saved state of DOMLabel: \" + str(e) )\n try:\n artist = scenario.add( 'DOMOrientation', ['I3Geometry', ] )\n scenario.setIsActive( artist, False )\n scenario.changeSetting( artist, 'opacity', 1 )\n scenario.changeSetting( artist, 'Label Axes', False )\n scenario.changeSetting( artist, 'Draw Orthogonal Direction', True )\n scenario.changeSetting( artist, 'ortho length', '10m' )\n scenario.changeSetting( artist, 'Draw DOM Direction', True )\n scenario.changeSetting( artist, 'Draw Flasher Direction', True )\n scenario.changeSetting( artist, 'dom length', '10m' )\n scenario.changeSetting( artist, 'flasher length', '10m' )\n scenario.changeSetting( artist, 'line width', 10 )\n scenario.changeSetting( artist, 'head angle', 20 )\n scenario.changeSetting( artist, 'Draw Default Orientation', False )\n scenario.changeSetting( artist, 'head length', 1 )\n except StandardError as e:\n logging.log_error( e.__class__.__name__ + \" occured while loading saved state of DOMOrientation: \" + str(e) )\n except:\n logging.log_error( \"Unknown error occured while loading saved state of DOMOrientation: \" + str(e) )\n try:\n artist = scenario.add( 'Particles', ['I3MCTree', ] )\n scenario.setIsActive( artist, False )\n scenario.changeSetting( artist, 'min. 
energy [track]', '' )\n scenario.changeSetting( artist, 'scale', 10 )\n scenario.changeSetting( artist, 'show light fronts', False )\n scenario.changeSetting( artist, 'colormap', I3TimeColorMap() )\n scenario.changeSetting( artist, 'power', 0.15 )\n scenario.changeSetting( artist, 'color', PyQColor(243,51,51,153) )\n scenario.changeSetting( artist, 'vertex size', 0 )\n scenario.changeSetting( artist, 'labels', True )\n scenario.changeSetting( artist, 'Cherenkov cone size', 0 )\n scenario.changeSetting( artist, 'blue light fronts', True )\n scenario.changeSetting( artist, 'incoming/outgoing', True )\n scenario.changeSetting( artist, 'color by type', True )\n scenario.changeSetting( artist, 'arrow head size', 120 )\n scenario.changeSetting( artist, 'linewidth', 3 )\n scenario.changeSetting( artist, 'min. energy [cascade]', '' )\n scenario.setIsActive( artist, True )\n except StandardError as e:\n logging.log_error( e.__class__.__name__ + \" occured while loading saved state of Particles: \" + str(e) )\n except:\n logging.log_error( \"Unknown error occured while loading saved state of Particles: \" + str(e) )\n try:\n artist = scenario.add( 'Bubbles', ['I3Geometry', 'IceTopRawData', ] )\n scenario.setIsActive( artist, False )\n scenario.changeSetting( artist, 'scale', 10 )\n scenario.changeSetting( artist, 'colormap', I3TimeColorMap() )\n scenario.changeSetting( artist, 'power', 0.15 )\n scenario.changeSetting( artist, 'custom color window', '' )\n scenario.changeSetting( artist, 'log10(delay/ns)', 5 )\n scenario.changeSetting( artist, 'static', PyQColor(255,0,255,255) )\n scenario.setIsActive( artist, True )\n except StandardError as e:\n logging.log_error( e.__class__.__name__ + \" occured while loading saved state of Bubbles: \" + str(e) )\n except:\n logging.log_error( \"Unknown error occured while loading saved state of Bubbles: \" + str(e) )\n try:\n artist = scenario.add( 'Bubbles', ['I3Geometry', 'InIceRawData', ] )\n scenario.setIsActive( artist, False )\n 
scenario.changeSetting( artist, 'scale', 10 )\n scenario.changeSetting( artist, 'colormap', I3TimeColorMap() )\n scenario.changeSetting( artist, 'power', 0.15 )\n scenario.changeSetting( artist, 'custom color window', '' )\n scenario.changeSetting( artist, 'log10(delay/ns)', 5 )\n scenario.changeSetting( artist, 'static', PyQColor(255,0,255,255) )\n scenario.setIsActive( artist, True )\n except StandardError as e:\n logging.log_error( e.__class__.__name__ + \" occured while loading saved state of Bubbles: \" + str(e) )\n except:\n logging.log_error( \"Unknown error occured while loading saved state of Bubbles: \" + str(e) )\n window.gl.setCameraPivot(0.0, 0.0, 0.0)\n window.gl.setCameraLoc(-1864.75549316, -362.478149414, 590.385009766)\n window.gl.setCameraOrientation(0.190808907151, -0.981627702713, 3.51667404175e-06)\n window.gl.cameraLock = False\n window.gl.perspectiveView = True\n window.gl.backgroundColor = PyQColor(38,38,38,255)\n window.timeline.rangeFinder = \"Default\"\n window.frame_filter.code = \"\"\n window.activeView = 0\n_dumpScenario()\ndel _dumpScenario (ShovelMainWindow.cpp:333 in bool ShovelMainWindow::loadScenarioCode(const QString &))\nobjc[43690]: Class FIFinderSyncExtensionHost is implemented in both /System/Library/PrivateFrameworks/FinderKit.framework/Versions/A/FinderKit (0x7fff99c38210) and /System/Library/PrivateFrameworks/FileProvider.framework/OverrideBundles/FinderSyncCollaborationFileProviderOverride.bundle/Contents/MacOS/FinderSyncCollaborationFileProviderOverride (0x12c0eadc8). One of the two will be used. Which one is undefined.\nlibc++abi.dylib: terminating with uncaught exception of type boost::python::error_already_set\nAbort trap: 6\n\n}}}\n",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1593001902142004",
"component": "combo core",
"summary": "[steamshovel] doesn't work on macos with python3",
"priority": "normal",
"keywords": "",
"time": "2019-03-14T16:46:49",
"milestone": "Autumnal Equinox 2020",
"owner": "",
"type": "defect"
}
```
| 1.0 | [steamshovel] doesn't work on macos with python3 (Trac #2250) - Steamshovel works fine with python2.7 but fails with python3.7.2, on macOS Mojave 10.14.3 with Qt 5.12.1.
The exception that is thrown is `boost::python::error_already_set`, which is boost::python catching a Python error without reporting which one; see #2213.
Note that the interactive console does not run either (this is not what causes the crash), even though it works fine in python2.
The exception involving 'DOMLabel' also appears to be unrelated and is not present in python2.
```text
$ steamshovel
No module named 'IPython'
WARN (steamshovel): Cannot embed IPython Qt widget, falling back to tty-based console (embed.cpp:95 in scripting::PyConsole::PyConsole(const scripting::PyInterpreter &, scripting::PyConsole::Type))
WARN (steamshovel): Could not use IPython, falling back to vanilla python console (embed.cpp:103 in scripting::PyConsole::PyConsole(const scripting::PyInterpreter &, scripting::PyConsole::Type))
Python 3.7.2 (default, Feb 12 2019, 08:15:36)
[Clang 10.0.0 (clang-1000.11.45.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> ERROR (ShovelMainWindow): Problem executing scenario code:
Traceback (most recent call last):
File "<string>", line 25, in _dumpScenario
RuntimeError: No such factory name: 'DOMLabel'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 113, in <module>
File "<string>", line 30, in _dumpScenario
NameError: name 'StandardError' is not defined
String was:
def _dumpScenario():
from icecube.shovelart import ActivePixmapOverlay, Arrow, ArtistHandle, ArtistHandleList, ArtistKeylist, BaseLineObject, ChoiceSetting, ColorMap, ColoredObject, ConstantColorMap, ConstantFloat, ConstantQColor, ConstantTime, ConstantVec3d, Cylinder, DynamicLines, FileSetting, I3TimeColorMap, KeySetting, LinterpFunctionFloat, LinterpFunctionQColor, LinterpFunctionVec3d, OMKeySet, OverlayLine, OverlaySizeHint, OverlaySizeHints, ParticlePath, ParticlePoint, Phantom, PixmapOverlay, PyArtist, PyQColor, PyQFont, PyVariantFloat, PyVariantQColor, PyVariantTime, PyVariantVec3d, RangeSetting, Scenario, SceneGroup, SceneObject, SceneOverlay, SolidObject, Sphere, StaticLines, StepFunctionFloat, StepFunctionQColor, StepFunctionTime, StepFunctionVec3d, Text, TextOverlay, TimePoint, TimeWindow, TimeWindowColor, VariantFloat, VariantQColor, VariantTime, VariantVec3d, VariantVec3dList, Vec3dList, ZPlane, vec3d
from icecube.icetray import OMKey
from icecube.icetray import logging
scenario = window.gl.scenario
scenario.clear()
try:
artist = scenario.add( 'Detector', ['I3Geometry', ] )
scenario.setIsActive( artist, False )
scenario.changeSetting( artist, 'DOM color', PyQColor(255,255,255,255) )
scenario.changeSetting( artist, 'DOM radius', 1 )
scenario.changeSetting( artist, 'outline width', 0 )
scenario.changeSetting( artist, 'high quality DOMs', True )
scenario.changeSetting( artist, 'string cross', False )
scenario.changeSetting( artist, 'string color', PyQColor(255,255,255,76) )
scenario.changeSetting( artist, 'string width', 1 )
scenario.changeSetting( artist, 'hide', 0 )
scenario.changeSetting( artist, 'DOM labels', False )
scenario.setIsActive( artist, True )
except StandardError as e:
logging.log_error( e.__class__.__name__ + " occured while loading saved state of Detector: " + str(e) )
except:
logging.log_error( "Unknown error occured while loading saved state of Detector: " + str(e) )
try:
artist = scenario.add( 'DOMLabel', ['I3Geometry', ] )
scenario.setIsActive( artist, False )
scenario.changeSetting( artist, 'font', PyQFont.fromString('Arial,13,-1,5,50,0,0,0,0,0') )
scenario.changeSetting( artist, 'selection', [] )
scenario.changeSetting( artist, 'fontsize', 13 )
except StandardError as e:
logging.log_error( e.__class__.__name__ + " occured while loading saved state of DOMLabel: " + str(e) )
except:
logging.log_error( "Unknown error occured while loading saved state of DOMLabel: " + str(e) )
try:
artist = scenario.add( 'DOMOrientation', ['I3Geometry', ] )
scenario.setIsActive( artist, False )
scenario.changeSetting( artist, 'opacity', 1 )
scenario.changeSetting( artist, 'Label Axes', False )
scenario.changeSetting( artist, 'Draw Orthogonal Direction', True )
scenario.changeSetting( artist, 'ortho length', '10m' )
scenario.changeSetting( artist, 'Draw DOM Direction', True )
scenario.changeSetting( artist, 'Draw Flasher Direction', True )
scenario.changeSetting( artist, 'dom length', '10m' )
scenario.changeSetting( artist, 'flasher length', '10m' )
scenario.changeSetting( artist, 'line width', 10 )
scenario.changeSetting( artist, 'head angle', 20 )
scenario.changeSetting( artist, 'Draw Default Orientation', False )
scenario.changeSetting( artist, 'head length', 1 )
except StandardError as e:
logging.log_error( e.__class__.__name__ + " occured while loading saved state of DOMOrientation: " + str(e) )
except:
logging.log_error( "Unknown error occured while loading saved state of DOMOrientation: " + str(e) )
try:
artist = scenario.add( 'Particles', ['I3MCTree', ] )
scenario.setIsActive( artist, False )
scenario.changeSetting( artist, 'min. energy [track]', '' )
scenario.changeSetting( artist, 'scale', 10 )
scenario.changeSetting( artist, 'show light fronts', False )
scenario.changeSetting( artist, 'colormap', I3TimeColorMap() )
scenario.changeSetting( artist, 'power', 0.15 )
scenario.changeSetting( artist, 'color', PyQColor(243,51,51,153) )
scenario.changeSetting( artist, 'vertex size', 0 )
scenario.changeSetting( artist, 'labels', True )
scenario.changeSetting( artist, 'Cherenkov cone size', 0 )
scenario.changeSetting( artist, 'blue light fronts', True )
scenario.changeSetting( artist, 'incoming/outgoing', True )
scenario.changeSetting( artist, 'color by type', True )
scenario.changeSetting( artist, 'arrow head size', 120 )
scenario.changeSetting( artist, 'linewidth', 3 )
scenario.changeSetting( artist, 'min. energy [cascade]', '' )
scenario.setIsActive( artist, True )
except StandardError as e:
logging.log_error( e.__class__.__name__ + " occured while loading saved state of Particles: " + str(e) )
except:
logging.log_error( "Unknown error occured while loading saved state of Particles: " + str(e) )
try:
artist = scenario.add( 'Bubbles', ['I3Geometry', 'IceTopRawData', ] )
scenario.setIsActive( artist, False )
scenario.changeSetting( artist, 'scale', 10 )
scenario.changeSetting( artist, 'colormap', I3TimeColorMap() )
scenario.changeSetting( artist, 'power', 0.15 )
scenario.changeSetting( artist, 'custom color window', '' )
scenario.changeSetting( artist, 'log10(delay/ns)', 5 )
scenario.changeSetting( artist, 'static', PyQColor(255,0,255,255) )
scenario.setIsActive( artist, True )
except StandardError as e:
logging.log_error( e.__class__.__name__ + " occured while loading saved state of Bubbles: " + str(e) )
except:
logging.log_error( "Unknown error occured while loading saved state of Bubbles: " + str(e) )
try:
artist = scenario.add( 'Bubbles', ['I3Geometry', 'InIceRawData', ] )
scenario.setIsActive( artist, False )
scenario.changeSetting( artist, 'scale', 10 )
scenario.changeSetting( artist, 'colormap', I3TimeColorMap() )
scenario.changeSetting( artist, 'power', 0.15 )
scenario.changeSetting( artist, 'custom color window', '' )
scenario.changeSetting( artist, 'log10(delay/ns)', 5 )
scenario.changeSetting( artist, 'static', PyQColor(255,0,255,255) )
scenario.setIsActive( artist, True )
except StandardError as e:
logging.log_error( e.__class__.__name__ + " occured while loading saved state of Bubbles: " + str(e) )
except:
logging.log_error( "Unknown error occured while loading saved state of Bubbles: " + str(e) )
window.gl.setCameraPivot(0.0, 0.0, 0.0)
window.gl.setCameraLoc(-1864.75549316, -362.478149414, 590.385009766)
window.gl.setCameraOrientation(0.190808907151, -0.981627702713, 3.51667404175e-06)
window.gl.cameraLock = False
window.gl.perspectiveView = True
window.gl.backgroundColor = PyQColor(38,38,38,255)
window.timeline.rangeFinder = "Default"
window.frame_filter.code = ""
window.activeView = 0
_dumpScenario()
del _dumpScenario (ShovelMainWindow.cpp:333 in bool ShovelMainWindow::loadScenarioCode(const QString &))
objc[43690]: Class FIFinderSyncExtensionHost is implemented in both /System/Library/PrivateFrameworks/FinderKit.framework/Versions/A/FinderKit (0x7fff99c38210) and /System/Library/PrivateFrameworks/FileProvider.framework/OverrideBundles/FinderSyncCollaborationFileProviderOverride.bundle/Contents/MacOS/FinderSyncCollaborationFileProviderOverride (0x12c0eadc8). One of the two will be used. Which one is undefined.
libc++abi.dylib: terminating with uncaught exception of type boost::python::error_already_set
Abort trap: 6
```
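The `NameError` in the traceback above comes from `StandardError`, which existed in Python 2 as the base class of most built-in exceptions but was removed in Python 3, so merely naming it in an `except` clause fails at runtime. A minimal sketch of the incompatibility and the usual fix (the function and loader names here are illustrative, not from steamshovel):

```python
def load_state(loader):
    """Run a saved-state loader, logging the failure instead of crashing.

    Catching Exception is the Python 3 replacement for the Python 2
    idiom "except StandardError" used in the generated scenario code.
    """
    try:
        loader()
    except Exception as e:
        print(e.__class__.__name__ + " occurred while loading saved state: " + str(e))

def broken_loader():
    # Stand-in for scenario.add('DOMLabel', ...) failing; the message is illustrative.
    raise RuntimeError("No such factory name: 'DOMLabel'")

load_state(broken_loader)  # the error is logged and execution continues
```

Under Python 2 the generated `except StandardError` clause worked; under Python 3 the name lookup itself raises `NameError` while the original `RuntimeError` is being handled, which is exactly the chained traceback shown in the log.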
Migrated from https://code.icecube.wisc.edu/ticket/2250
```json
{
"status": "closed",
"changetime": "2020-06-24T12:31:42",
"description": "Steamshovel works fine with python2.7 but fails with python3.7.2. On macOS Mojave Version 10.14.3. Qt version 5.12.1.\n\nThe exception that is thrown is `boost::python::error_already_set` which is boost::python catching an error and not telling you which one, see #2221.\n\nNote that the interactive console does not run as well (which is not causing the crash), even though that works fine in python2 as well.\n\nThe exception involving 'DOMLabel' also appears to be unrelated and not present in python2. \n\n{{{\n$ steamshovel\nNo module named 'IPython'\nWARN (steamshovel): Cannot embed IPython Qt widget, falling back to tty-based console (embed.cpp:95 in scripting::PyConsole::PyConsole(const scripting::PyInterpreter &, scripting::PyConsole::Type))\nWARN (steamshovel): Could not use IPython, falling back to vanilla python console (embed.cpp:103 in scripting::PyConsole::PyConsole(const scripting::PyInterpreter &, scripting::PyConsole::Type))\nPython 3.7.2 (default, Feb 12 2019, 08:15:36) \n[Clang 10.0.0 (clang-1000.11.45.5)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n(InteractiveConsole)\n>>> ERROR (ShovelMainWindow): Problem executing scenario code:\nTraceback (most recent call last):\n File \"<string>\", line 25, in _dumpScenario\nRuntimeError: No such factory name: 'DOMLabel'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"<string>\", line 113, in <module>\n File \"<string>\", line 30, in _dumpScenario\nNameError: name 'StandardError' is not defined\nString was:\ndef _dumpScenario():\n from icecube.shovelart import ActivePixmapOverlay, Arrow, ArtistHandle, ArtistHandleList, ArtistKeylist, BaseLineObject, ChoiceSetting, ColorMap, ColoredObject, ConstantColorMap, ConstantFloat, ConstantQColor, ConstantTime, ConstantVec3d, Cylinder, DynamicLines, FileSetting, I3TimeColorMap, KeySetting, LinterpFunctionFloat, LinterpFunctionQColor, 
LinterpFunctionVec3d, OMKeySet, OverlayLine, OverlaySizeHint, OverlaySizeHints, ParticlePath, ParticlePoint, Phantom, PixmapOverlay, PyArtist, PyQColor, PyQFont, PyVariantFloat, PyVariantQColor, PyVariantTime, PyVariantVec3d, RangeSetting, Scenario, SceneGroup, SceneObject, SceneOverlay, SolidObject, Sphere, StaticLines, StepFunctionFloat, StepFunctionQColor, StepFunctionTime, StepFunctionVec3d, Text, TextOverlay, TimePoint, TimeWindow, TimeWindowColor, VariantFloat, VariantQColor, VariantTime, VariantVec3d, VariantVec3dList, Vec3dList, ZPlane, vec3d\n from icecube.icetray import OMKey\n from icecube.icetray import logging\n scenario = window.gl.scenario\n scenario.clear()\n try:\n artist = scenario.add( 'Detector', ['I3Geometry', ] )\n scenario.setIsActive( artist, False )\n scenario.changeSetting( artist, 'DOM color', PyQColor(255,255,255,255) )\n scenario.changeSetting( artist, 'DOM radius', 1 )\n scenario.changeSetting( artist, 'outline width', 0 )\n scenario.changeSetting( artist, 'high quality DOMs', True )\n scenario.changeSetting( artist, 'string cross', False )\n scenario.changeSetting( artist, 'string color', PyQColor(255,255,255,76) )\n scenario.changeSetting( artist, 'string width', 1 )\n scenario.changeSetting( artist, 'hide', 0 )\n scenario.changeSetting( artist, 'DOM labels', False )\n scenario.setIsActive( artist, True )\n except StandardError as e:\n logging.log_error( e.__class__.__name__ + \" occured while loading saved state of Detector: \" + str(e) )\n except:\n logging.log_error( \"Unknown error occured while loading saved state of Detector: \" + str(e) )\n try:\n artist = scenario.add( 'DOMLabel', ['I3Geometry', ] )\n scenario.setIsActive( artist, False )\n scenario.changeSetting( artist, 'font', PyQFont.fromString('Arial,13,-1,5,50,0,0,0,0,0') )\n scenario.changeSetting( artist, 'selection', [] )\n scenario.changeSetting( artist, 'fontsize', 13 )\n except StandardError as e:\n logging.log_error( e.__class__.__name__ + \" occured while 
loading saved state of DOMLabel: \" + str(e) )\n except:\n logging.log_error( \"Unknown error occured while loading saved state of DOMLabel: \" + str(e) )\n try:\n artist = scenario.add( 'DOMOrientation', ['I3Geometry', ] )\n scenario.setIsActive( artist, False )\n scenario.changeSetting( artist, 'opacity', 1 )\n scenario.changeSetting( artist, 'Label Axes', False )\n scenario.changeSetting( artist, 'Draw Orthogonal Direction', True )\n scenario.changeSetting( artist, 'ortho length', '10m' )\n scenario.changeSetting( artist, 'Draw DOM Direction', True )\n scenario.changeSetting( artist, 'Draw Flasher Direction', True )\n scenario.changeSetting( artist, 'dom length', '10m' )\n scenario.changeSetting( artist, 'flasher length', '10m' )\n scenario.changeSetting( artist, 'line width', 10 )\n scenario.changeSetting( artist, 'head angle', 20 )\n scenario.changeSetting( artist, 'Draw Default Orientation', False )\n scenario.changeSetting( artist, 'head length', 1 )\n except StandardError as e:\n logging.log_error( e.__class__.__name__ + \" occured while loading saved state of DOMOrientation: \" + str(e) )\n except:\n logging.log_error( \"Unknown error occured while loading saved state of DOMOrientation: \" + str(e) )\n try:\n artist = scenario.add( 'Particles', ['I3MCTree', ] )\n scenario.setIsActive( artist, False )\n scenario.changeSetting( artist, 'min. 
energy [track]', '' )\n scenario.changeSetting( artist, 'scale', 10 )\n scenario.changeSetting( artist, 'show light fronts', False )\n scenario.changeSetting( artist, 'colormap', I3TimeColorMap() )\n scenario.changeSetting( artist, 'power', 0.15 )\n scenario.changeSetting( artist, 'color', PyQColor(243,51,51,153) )\n scenario.changeSetting( artist, 'vertex size', 0 )\n scenario.changeSetting( artist, 'labels', True )\n scenario.changeSetting( artist, 'Cherenkov cone size', 0 )\n scenario.changeSetting( artist, 'blue light fronts', True )\n scenario.changeSetting( artist, 'incoming/outgoing', True )\n scenario.changeSetting( artist, 'color by type', True )\n scenario.changeSetting( artist, 'arrow head size', 120 )\n scenario.changeSetting( artist, 'linewidth', 3 )\n scenario.changeSetting( artist, 'min. energy [cascade]', '' )\n scenario.setIsActive( artist, True )\n except StandardError as e:\n logging.log_error( e.__class__.__name__ + \" occured while loading saved state of Particles: \" + str(e) )\n except:\n logging.log_error( \"Unknown error occured while loading saved state of Particles: \" + str(e) )\n try:\n artist = scenario.add( 'Bubbles', ['I3Geometry', 'IceTopRawData', ] )\n scenario.setIsActive( artist, False )\n scenario.changeSetting( artist, 'scale', 10 )\n scenario.changeSetting( artist, 'colormap', I3TimeColorMap() )\n scenario.changeSetting( artist, 'power', 0.15 )\n scenario.changeSetting( artist, 'custom color window', '' )\n scenario.changeSetting( artist, 'log10(delay/ns)', 5 )\n scenario.changeSetting( artist, 'static', PyQColor(255,0,255,255) )\n scenario.setIsActive( artist, True )\n except StandardError as e:\n logging.log_error( e.__class__.__name__ + \" occured while loading saved state of Bubbles: \" + str(e) )\n except:\n logging.log_error( \"Unknown error occured while loading saved state of Bubbles: \" + str(e) )\n try:\n artist = scenario.add( 'Bubbles', ['I3Geometry', 'InIceRawData', ] )\n scenario.setIsActive( artist, False )\n 
scenario.changeSetting( artist, 'scale', 10 )\n scenario.changeSetting( artist, 'colormap', I3TimeColorMap() )\n scenario.changeSetting( artist, 'power', 0.15 )\n scenario.changeSetting( artist, 'custom color window', '' )\n scenario.changeSetting( artist, 'log10(delay/ns)', 5 )\n scenario.changeSetting( artist, 'static', PyQColor(255,0,255,255) )\n scenario.setIsActive( artist, True )\n except StandardError as e:\n logging.log_error( e.__class__.__name__ + \" occured while loading saved state of Bubbles: \" + str(e) )\n except:\n logging.log_error( \"Unknown error occured while loading saved state of Bubbles: \" + str(e) )\n window.gl.setCameraPivot(0.0, 0.0, 0.0)\n window.gl.setCameraLoc(-1864.75549316, -362.478149414, 590.385009766)\n window.gl.setCameraOrientation(0.190808907151, -0.981627702713, 3.51667404175e-06)\n window.gl.cameraLock = False\n window.gl.perspectiveView = True\n window.gl.backgroundColor = PyQColor(38,38,38,255)\n window.timeline.rangeFinder = \"Default\"\n window.frame_filter.code = \"\"\n window.activeView = 0\n_dumpScenario()\ndel _dumpScenario (ShovelMainWindow.cpp:333 in bool ShovelMainWindow::loadScenarioCode(const QString &))\nobjc[43690]: Class FIFinderSyncExtensionHost is implemented in both /System/Library/PrivateFrameworks/FinderKit.framework/Versions/A/FinderKit (0x7fff99c38210) and /System/Library/PrivateFrameworks/FileProvider.framework/OverrideBundles/FinderSyncCollaborationFileProviderOverride.bundle/Contents/MacOS/FinderSyncCollaborationFileProviderOverride (0x12c0eadc8). One of the two will be used. Which one is undefined.\nlibc++abi.dylib: terminating with uncaught exception of type boost::python::error_already_set\nAbort trap: 6\n\n}}}\n",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1593001902142004",
"component": "combo core",
"summary": "[steamshovel] doesn't work on macos with python3",
"priority": "normal",
"keywords": "",
"time": "2019-03-14T16:46:49",
"milestone": "Autumnal Equinox 2020",
"owner": "",
"type": "defect"
}
```
| defect |
framework versions a finderkit and system library privateframeworks fileprovider framework overridebundles findersynccollaborationfileprovideroverride bundle contents macos findersynccollaborationfileprovideroverride one of the two will be used which one is undefined nlibc abi dylib terminating with uncaught exception of type boost python error already set nabort trap n n n reporter kjmeagher cc resolution fixed ts component combo core summary doesn t work on macos with priority normal keywords time milestone autumnal equinox owner type defect | 1 |
56,081 | 8,050,167,067 | IssuesEvent | 2018-08-01 12:39:53 | usds/us-forms-system | https://api.github.com/repos/usds/us-forms-system | opened | Create reference documentation site | [type] documentation | Here are some examples that we might follow when we get to the detailed per-widget documentation. Emphasis on pics of the rendered form elements and copy/paste of sample code.
https://joepuzzo.github.io/informed/?selectedKind=Inputs&selectedStory=Radio%20Input&full=0&addons=0&stories=1&panelRight=0&addonPanel=REACT_STORYBOOK%2Freadme%2Fpanel
| 1.0 | Create reference documentation site - Here are some examples that we might follow when we get to the detailed per-widget documentation. Emphasis on pics of the rendered form elements and copy/paste of sample code.
https://joepuzzo.github.io/informed/?selectedKind=Inputs&selectedStory=Radio%20Input&full=0&addons=0&stories=1&panelRight=0&addonPanel=REACT_STORYBOOK%2Freadme%2Fpanel
| non_defect | create reference documentation site here are some examples that we might follow when we get to the detailed per widget documentation emphasis on pics of the rendered form elements and copy paste of sample code | 0 |
50,577 | 13,187,592,747 | IssuesEvent | 2020-08-13 03:55:35 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | Coordinate services documentation is Unhelpful (Trac #977) | Migrated from Trac combo reconstruction defect | Most astronomers use a coordinate system called J2000 to locate the position of astronomical objects, in which the pole is the celestial pole at January 1st 2000. However, sometimes a coordinate system called "current" is used, where the pole is wherever the celestial pole is at the time of observation. Both of these coordinate systems are referred to as equatorial coordinates, as the difference is at most 18 arcseconds for time scales on the order of IceCube's lifetime.
coordinate-services handles these two coordinate systems in a rather confusing way: to get from equatorial to local coordinates, it provides two sets of functions:
Local2RA(), Local2Dec() and Local2RA_no(), Local2Dec_no()
as well as two sets of functions for the reverse transforms:
Equa2LocalAzimuth, Equa2LocalZenith() and Equa2LocalAzimuth_inv(), Equal2LocalZenith_inv()
It is completely unclear whether one should use the _no() and _inv() functions or not; the functions' doc strings only seem to add to the confusion.
In addition, there are the totally confusing Equa2LocalRA and Equal2LocalDec, which claim to take an RA and Dec and convert them to another RA and Dec; such a function should not have "local" in its name.
Furthermore, all of these functions have epoch variables, which is even more confusing: why would the local-coordinate functions need one? You would also need to be reading a pretty old paper to find anything in non-J2000 coordinates, so epoch should never be anything but 2000 (it looks like that is the default from the code, but you can't tell from the documentation).
What is needed is to figure out which functions are for J2000 and which are for current, and properly document them. Then deprecate the old functions and replace them with functions containing the string "J2000" or "Current", none of which should have an epoch variable. (If a new epoch becomes popular, J2050 for example, then new functions can be added.)
Python examples should also be provided
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/977
, reported by icecube and owned by </em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-06-26T21:13:56",
"description": "Most Astronomers use a coordinate system called J2000 to locate the position of astronomical objects which where the pole is the celestial pole at January 1st 2000. However sometimes a coordinate system called current is used where the pole is wherever the hell the celestial pole it is at the time of observation. Both of these cooridinate systems are refered to as equatorial cooridinates. as the difference is at most 18 arcseconds for time scales on the order of icecubes's lifetime. \n\ncoordinate-services handles these two cooridinate systems in a rather confusing way: to get from equatoraial to local cooridinates it provides two sets of functions:\nLocal2RA(), Local2Dec() and Local2RA_no(), Local2Dec_no()\n\nas well as two sets of functions for the reverse transforms:\nEqua2LocalAzimuth, Equa2LocalZenith() and Equa2LocalAzimuth_inv(), Equal2LocalZenith_inv()\n\nIt is completly unclear weather one should use the _no() functions or not and the _inv() functions or not, the function's doc strings only seem to add to the confusion\n\nIn addition there are the totally confusing Equa2LocalRA and Equal2LocalDec which claim to take an RA and Dec and convert them to another RA and Dec, such a function should not have local in its name.\n\nfurthermore all of these functions have epoch variables which is even more confusing, why would current coordinates need a local function. as well you need to be reading a pretty old paper to find anything in non J2000 coordinates. so epoch should never be anything but 2000 ( it looks like that is the default from the code, but you can't tell from the documentation)\n\nWhat is needed is to figure out which functions are are for J2000 and which ones are for current. and properly document them. Then depreciate the old functions and replace them with functions with the string \"J2000\" and \"Current\". none of which should have an epoch variable. 
( If a new epoch becomes popular J2050 for example then new functions can be added )\n\nPython examples should also be provided\n",
"reporter": "icecube",
"cc": "",
"resolution": "wontfix",
"_ts": "1435353236861155",
"component": "combo reconstruction",
"summary": "Coordinate services documentation is Unhelpful",
"priority": "normal",
"keywords": "",
"time": "2015-05-13T16:30:51",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Coordinate services documentation is Unhelpful (Trac #977) - Most astronomers use a coordinate system called J2000 to locate the position of astronomical objects, in which the pole is the celestial pole at January 1st 2000. However, sometimes a coordinate system called "current" is used, where the pole is wherever the celestial pole is at the time of observation. Both of these coordinate systems are referred to as equatorial coordinates, as the difference is at most 18 arcseconds for time scales on the order of IceCube's lifetime.
coordinate-services handles these two coordinate systems in a rather confusing way: to get from equatorial to local coordinates, it provides two sets of functions:
Local2RA(), Local2Dec() and Local2RA_no(), Local2Dec_no()
as well as two sets of functions for the reverse transforms:
Equa2LocalAzimuth, Equa2LocalZenith() and Equa2LocalAzimuth_inv(), Equal2LocalZenith_inv()
It is completely unclear whether one should use the _no() and _inv() functions or not; the functions' doc strings only seem to add to the confusion.
In addition, there are the totally confusing Equa2LocalRA and Equal2LocalDec, which claim to take an RA and Dec and convert them to another RA and Dec; such a function should not have "local" in its name.
Furthermore, all of these functions have epoch variables, which is even more confusing: why would the local-coordinate functions need one? You would also need to be reading a pretty old paper to find anything in non-J2000 coordinates, so epoch should never be anything but 2000 (it looks like that is the default from the code, but you can't tell from the documentation).
What is needed is to figure out which functions are for J2000 and which are for current, and properly document them. Then deprecate the old functions and replace them with functions containing the string "J2000" or "Current", none of which should have an epoch variable. (If a new epoch becomes popular, J2050 for example, then new functions can be added.)
Python examples should also be provided
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/977
, reported by icecube and owned by </em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-06-26T21:13:56",
"description": "Most Astronomers use a coordinate system called J2000 to locate the position of astronomical objects which where the pole is the celestial pole at January 1st 2000. However sometimes a coordinate system called current is used where the pole is wherever the hell the celestial pole it is at the time of observation. Both of these cooridinate systems are refered to as equatorial cooridinates. as the difference is at most 18 arcseconds for time scales on the order of icecubes's lifetime. \n\ncoordinate-services handles these two cooridinate systems in a rather confusing way: to get from equatoraial to local cooridinates it provides two sets of functions:\nLocal2RA(), Local2Dec() and Local2RA_no(), Local2Dec_no()\n\nas well as two sets of functions for the reverse transforms:\nEqua2LocalAzimuth, Equa2LocalZenith() and Equa2LocalAzimuth_inv(), Equal2LocalZenith_inv()\n\nIt is completly unclear weather one should use the _no() functions or not and the _inv() functions or not, the function's doc strings only seem to add to the confusion\n\nIn addition there are the totally confusing Equa2LocalRA and Equal2LocalDec which claim to take an RA and Dec and convert them to another RA and Dec, such a function should not have local in its name.\n\nfurthermore all of these functions have epoch variables which is even more confusing, why would current coordinates need a local function. as well you need to be reading a pretty old paper to find anything in non J2000 coordinates. so epoch should never be anything but 2000 ( it looks like that is the default from the code, but you can't tell from the documentation)\n\nWhat is needed is to figure out which functions are are for J2000 and which ones are for current. and properly document them. Then depreciate the old functions and replace them with functions with the string \"J2000\" and \"Current\". none of which should have an epoch variable. 
( If a new epoch becomes popular J2050 for example then new functions can be added )\n\nPython examples should also be provided\n",
"reporter": "icecube",
"cc": "",
"resolution": "wontfix",
"_ts": "1435353236861155",
"component": "combo reconstruction",
"summary": "Coordinate services documentation is Unhelpful",
"priority": "normal",
"keywords": "",
"time": "2015-05-13T16:30:51",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| defect | coordinate services documentation is unhelpful trac most astronomers use a coordinate system called to locate the position of astronomical objects which where the pole is the celestial pole at january however sometimes a coordinate system called current is used where the pole is wherever the hell the celestial pole it is at the time of observation both of these cooridinate systems are refered to as equatorial cooridinates as the difference is at most arcseconds for time scales on the order of icecubes s lifetime coordinate services handles these two cooridinate systems in a rather confusing way to get from equatoraial to local cooridinates it provides two sets of functions and no no as well as two sets of functions for the reverse transforms and inv inv it is completly unclear weather one should use the no functions or not and the inv functions or not the function s doc strings only seem to add to the confusion in addition there are the totally confusing and which claim to take an ra and dec and convert them to another ra and dec such a function should not have local in its name furthermore all of these functions have epoch variables which is even more confusing why would current coordinates need a local function as well you need to be reading a pretty old paper to find anything in non coordinates so epoch should never be anything but it looks like that is the default from the code but you can t tell from the documentation what is needed is to figure out which functions are are for and which ones are for current and properly document them then depreciate the old functions and replace them with functions with the string and current none of which should have an epoch variable if a new epoch becomes popular for example then new functions can be added python examples should also be provided migrated from reported by icecube and owned by json status closed changetime description most astronomers use a coordinate system called to locate the position of 
astronomical objects which where the pole is the celestial pole at january however sometimes a coordinate system called current is used where the pole is wherever the hell the celestial pole it is at the time of observation both of these cooridinate systems are refered to as equatorial cooridinates as the difference is at most arcseconds for time scales on the order of icecubes s lifetime n ncoordinate services handles these two cooridinate systems in a rather confusing way to get from equatoraial to local cooridinates it provides two sets of functions and no no n nas well as two sets of functions for the reverse transforms and inv inv n nit is completly unclear weather one should use the no functions or not and the inv functions or not the function s doc strings only seem to add to the confusion n nin addition there are the totally confusing and which claim to take an ra and dec and convert them to another ra and dec such a function should not have local in its name n nfurthermore all of these functions have epoch variables which is even more confusing why would current coordinates need a local function as well you need to be reading a pretty old paper to find anything in non coordinates so epoch should never be anything but it looks like that is the default from the code but you can t tell from the documentation n nwhat is needed is to figure out which functions are are for and which ones are for current and properly document them then depreciate the old functions and replace them with functions with the string and current none of which should have an epoch variable if a new epoch becomes popular for example then new functions can be added n npython examples should also be provided n reporter icecube cc resolution wontfix ts component combo reconstruction summary coordinate services documentation is unhelpful priority normal keywords time milestone owner type defect | 1 |
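The ticket above asks for Python examples of the equatorial-to-local transform. The sketch below is generic spherical astronomy, not the IceCube coordinate-services API: the function name and numbers are invented for illustration, and it deliberately sidesteps the J2000-versus-current epoch distinction by assuming the inputs are already in the epoch of observation.

```python
import math

def equatorial_to_altitude(dec_deg, lat_deg, hour_angle_deg):
    """Altitude of a source given declination, observer latitude and hour angle.

    Standard spherical-astronomy formula:
        sin(alt) = sin(dec)*sin(lat) + cos(dec)*cos(lat)*cos(H)
    Epoch handling (J2000 vs current) is exactly the subtlety the ticket
    complains about and is NOT modelled here.
    """
    dec, lat, h = map(math.radians, (dec_deg, lat_deg, hour_angle_deg))
    sin_alt = math.sin(dec) * math.sin(lat) + math.cos(dec) * math.cos(lat) * math.cos(h)
    return math.degrees(math.asin(sin_alt))

# At the South Pole (lat = -90) altitude is simply minus the declination,
# independent of hour angle:
print(round(equatorial_to_altitude(22.0, -90.0, 37.5), 6))  # -22.0
```

That pole-bound special case is why a detector at the South Pole can describe a source's direction by declination alone; any full treatment should still name its epoch explicitly, as the ticket requests.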
29 | 2,495,004,028 | IssuesEvent | 2015-01-06 05:05:56 | networkx/networkx | https://api.github.com/repos/networkx/networkx | closed | xrange vs python 3 | Defect | Searching the networkx code for `xrange` I see it's used in some "Shapefile" related code and tests. Should this be updated for python 3 compatibility, and is it not tested in the TravisCI testing? | 1.0 | xrange vs python 3 - Searching the networkx code for `xrange` I see it's used in some "Shapefile" related code and tests. Should this be updated for python 3 compatibility, and is it not tested in the TravisCI testing? | defect | xrange vs python searching the networkx code for xrange i see it s used in some shapefile related code and tests should this be updated for python compatibility and is it not tested in the travisci testing | 1 |
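For the xrange question above, the two usual options are running the code through 2to3 or adding a small compatibility shim so the same source runs on Python 2 and 3. A minimal sketch of the shim approach (the `first_squares` helper is invented for illustration, not taken from the networkx Shapefile code):

```python
# Bind xrange to range on Python 3, where range is already a lazy sequence.
try:
    xrange  # defined on Python 2
except NameError:
    xrange = range

def first_squares(n):
    """Squares of 0..n-1, iterating via the shimmed xrange."""
    return [i * i for i in xrange(n)]

print(first_squares(5))  # [0, 1, 4, 9, 16]
```

With the shim at module top, existing `xrange` call sites need no changes, which keeps the diff small while CI runs under both interpreters.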
260,330 | 22,612,648,608 | IssuesEvent | 2022-06-29 18:37:25 | gravitational/teleport | https://api.github.com/repos/gravitational/teleport | closed | ssh -J <teleport-proxy> fails with tls routing enabled | bug test-plan-problem | Expected behavior:
According to the test plan, `ssh -J <teleport-proxy>` should work.
Current behavior:
When the Teleport proxy receives the ssh request, it denies it with the log `[MX:PROXY:] Closing SSH connection: SSH listener is disabled. multiplexer/multiplexer.go:241`.
Is it possible for us to multiplex SSH proxy connections on the proxy listener? Or do we need to update the testplan to only expect this with tls routing disabled?
Bug details:
- Teleport version - All versions with tls routing
- Recreation steps - `ssh -p 3022 -J proxy.example.com:3080 server01` | 1.0 | ssh -J <teleport-proxy> fails with tls routing enabled - Expected behavior:
According to the test plan, `ssh -J <teleport-proxy>` should work.
Current behavior:
When the Teleport proxy receives the ssh request, it denies it with the log `[MX:PROXY:] Closing SSH connection: SSH listener is disabled. multiplexer/multiplexer.go:241`.
Is it possible for us to multiplex SSH proxy connections on the proxy listener? Or do we need to update the testplan to only expect this with tls routing disabled?
Bug details:
- Teleport version - All versions with tls routing
- Recreation steps - `ssh -p 3022 -J proxy.example.com:3080 server01` | non_defect | ssh j fails with tls routing enabled expected behavior according to the test plan ssh j should work current behavior when the teleport proxy receives the ssh request it denies it with the log closing ssh connection ssh listener is disabled multiplexer multiplexer go is it possible for us to multiplex ssh proxy connections on the proxy listener or do we need to update the testplan to only expect this with tls routing disabled bug details teleport version all versions with tls routing recreation steps ssh p j proxy example com | 0 |
19,580 | 25,904,651,154 | IssuesEvent | 2022-12-15 09:17:01 | alphagov/govuk-design-system | https://api.github.com/repos/alphagov/govuk-design-system | opened | Move the team sprint board to the new GitHub projects feature | 🕔 days process | ## What
GitHub released [a new version of their projects feature](https://docs.github.com/en/issues/planning-and-tracking-with-projects/learning-about-projects/about-projects) earlier this year. We want to move our sprint board to this new version.
At the same time, we should consider reviewing and simplifying our labelling system.
## Why
The iterated projects feature better meets our needs, for example, it has a built in analytics which can help us measure our sprints better.
## Who needs to work on this
Kelly
## Who needs to review this
The whole team
## Done when
- [ ] Agree what transitions from current board to new (eg review backlog items)
- [ ] Investigate most appropriate layout
- [ ] Review labelling system
- [ ] Create new board
- [ ] Review from team
- [ ] Close the original sprint board
| 1.0 | Move the team sprint board to the new GitHub projects feature - ## What
GitHub released [a new version of their projects feature](https://docs.github.com/en/issues/planning-and-tracking-with-projects/learning-about-projects/about-projects) earlier this year. We want to move our sprint board to this new version.
At the same time, we should consider reviewing and simplifying our labelling system.
## Why
The iterated projects feature better meets our needs, for example, it has a built in analytics which can help us measure our sprints better.
## Who needs to work on this
Kelly
## Who needs to review this
The whole team
## Done when
- [ ] Agree what transitions from current board to new (eg review backlog items)
- [ ] Investigate most appropriate layout
- [ ] Review labelling system
- [ ] Create new board
- [ ] Review from team
- [ ] Close the original sprint board
| non_defect | move the team sprint board to the new github projects feature what github released earlier this year we want to move out sprint board to this new version at the same time we should consider reviewing and simplifying our labelling system why the iterated projects feature better meets our needs for example it has a built in analytics which can help us measure our sprints better who needs to work on this kelly who needs to review this the whole team done when agree what transitions from current board to new eg review backlog items investigate most appropriate layout review labelling system create new board review from team close the original sprint board | 0 |
34,689 | 7,458,751,627 | IssuesEvent | 2018-03-30 12:08:03 | kerdokullamae/test_koik_issued | https://api.github.com/repos/kerdokullamae/test_koik_issued | closed | ontology filters do not work | C: AIS P: highest R: fixed T: defect | **Reported by aadikaljuvee on 25 Feb 2015 12:46 UTC**
For the subject matter, domain, and possibly other ontologies as well, a blank page appears after selecting a filter | 1.0 | ontology filters do not work - **Reported by aadikaljuvee on 25 Feb 2015 12:46 UTC**
For the subject matter, domain, and possibly other ontologies as well, a blank page appears after selecting a filter | defect | ontology filters do not work reported by aadikaljuvee on feb utc for the subject matter domain and possibly other ontologies as well a blank page appears after selecting a filter | 1
53,453 | 13,261,643,860 | IssuesEvent | 2020-08-20 20:16:30 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | [cmake] I3_SRC read only (Trac #1429) | Migrated from Trac cmake defect | Make these errors go away, so I can attempt to build a cvmfs src locally:
```text
CMake Error at cmake/project.cmake:682 (file):
file failed to open for writing (Read-only file system):
/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/metaprojects/simulation/trunk/dataclasses/CMakeFiles/tests.list
Call Stack (most recent call first):
dataclasses/CMakeLists.txt:134 (i3_test_scripts)
CMake Error at cmake/toplevel.cmake:241 (file):
file failed to open for writing (No such file or directory):
/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/metaprojects/simulation/trunk/docs/doxygen/.tagfiles/dataclasses.include.tmp
Call Stack (most recent call first):
CMakeLists.txt:22 (include)
CMake Error at cmake/toplevel.cmake:242 (file):
file failed to open for writing (Read-only file system):
/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/metaprojects/simulation/trunk/docs/doxygen/.tagfiles/dataclasses.include.tmp
Call Stack (most recent call first):
CMakeLists.txt:22 (include)
```
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1429">https://code.icecube.wisc.edu/projects/icecube/ticket/1429</a>, reported by david.schultzand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:22",
"_ts": "1550067082284240",
"description": "Make these errors go away, so I can attempt to build a cvmfs src locally:\n\n{{{\nCMake Error at cmake/project.cmake:682 (file):\n file failed to open for writing (Read-only file system):\n\n /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/metaprojects/simulation/trunk/dataclasses/CMakeFiles/tests.list\nCall Stack (most recent call first):\n dataclasses/CMakeLists.txt:134 (i3_test_scripts)\n\nCMake Error at cmake/toplevel.cmake:241 (file):\n file failed to open for writing (No such file or directory):\n\n /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/metaprojects/simulation/trunk/docs/doxygen/.tagfiles/dataclasses.include.tmp\nCall Stack (most recent call first):\n CMakeLists.txt:22 (include)\n\nCMake Error at cmake/toplevel.cmake:242 (file):\n file failed to open for writing (Read-only file system):\n\n /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/metaprojects/simulation/trunk/docs/doxygen/.tagfiles/dataclasses.include.tmp\nCall Stack (most recent call first):\n CMakeLists.txt:22 (include)\n}}}",
"reporter": "david.schultz",
"cc": "",
"resolution": "invalid",
"time": "2015-11-09T23:04:16",
"component": "cmake",
"summary": "[cmake] I3_SRC read only",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [cmake] I3_SRC read only (Trac #1429) - Make these errors go away, so I can attempt to build a cvmfs src locally:
```text
CMake Error at cmake/project.cmake:682 (file):
file failed to open for writing (Read-only file system):
/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/metaprojects/simulation/trunk/dataclasses/CMakeFiles/tests.list
Call Stack (most recent call first):
dataclasses/CMakeLists.txt:134 (i3_test_scripts)
CMake Error at cmake/toplevel.cmake:241 (file):
file failed to open for writing (No such file or directory):
/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/metaprojects/simulation/trunk/docs/doxygen/.tagfiles/dataclasses.include.tmp
Call Stack (most recent call first):
CMakeLists.txt:22 (include)
CMake Error at cmake/toplevel.cmake:242 (file):
file failed to open for writing (Read-only file system):
/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/metaprojects/simulation/trunk/docs/doxygen/.tagfiles/dataclasses.include.tmp
Call Stack (most recent call first):
CMakeLists.txt:22 (include)
```
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1429">https://code.icecube.wisc.edu/projects/icecube/ticket/1429</a>, reported by david.schultzand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:22",
"_ts": "1550067082284240",
"description": "Make these errors go away, so I can attempt to build a cvmfs src locally:\n\n{{{\nCMake Error at cmake/project.cmake:682 (file):\n file failed to open for writing (Read-only file system):\n\n /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/metaprojects/simulation/trunk/dataclasses/CMakeFiles/tests.list\nCall Stack (most recent call first):\n dataclasses/CMakeLists.txt:134 (i3_test_scripts)\n\nCMake Error at cmake/toplevel.cmake:241 (file):\n file failed to open for writing (No such file or directory):\n\n /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/metaprojects/simulation/trunk/docs/doxygen/.tagfiles/dataclasses.include.tmp\nCall Stack (most recent call first):\n CMakeLists.txt:22 (include)\n\nCMake Error at cmake/toplevel.cmake:242 (file):\n file failed to open for writing (Read-only file system):\n\n /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_14_x86_64/metaprojects/simulation/trunk/docs/doxygen/.tagfiles/dataclasses.include.tmp\nCall Stack (most recent call first):\n CMakeLists.txt:22 (include)\n}}}",
"reporter": "david.schultz",
"cc": "",
"resolution": "invalid",
"time": "2015-11-09T23:04:16",
"component": "cmake",
"summary": "[cmake] I3_SRC read only",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| defect | src read only trac make these errors go away so i can attempt to build a cvmfs src locally text cmake error at cmake project cmake file file failed to open for writing read only file system cvmfs icecube opensciencegrid org ubuntu metaprojects simulation trunk dataclasses cmakefiles tests list call stack most recent call first dataclasses cmakelists txt test scripts cmake error at cmake toplevel cmake file file failed to open for writing no such file or directory cvmfs icecube opensciencegrid org ubuntu metaprojects simulation trunk docs doxygen tagfiles dataclasses include tmp call stack most recent call first cmakelists txt include cmake error at cmake toplevel cmake file file failed to open for writing read only file system cvmfs icecube opensciencegrid org ubuntu metaprojects simulation trunk docs doxygen tagfiles dataclasses include tmp call stack most recent call first cmakelists txt include migrated from json status closed changetime ts description make these errors go away so i can attempt to build a cvmfs src locally n n ncmake error at cmake project cmake file n file failed to open for writing read only file system n n cvmfs icecube opensciencegrid org ubuntu metaprojects simulation trunk dataclasses cmakefiles tests list ncall stack most recent call first n dataclasses cmakelists txt test scripts n ncmake error at cmake toplevel cmake file n file failed to open for writing no such file or directory n n cvmfs icecube opensciencegrid org ubuntu metaprojects simulation trunk docs doxygen tagfiles dataclasses include tmp ncall stack most recent call first n cmakelists txt include n ncmake error at cmake toplevel cmake file n file failed to open for writing read only file system n n cvmfs icecube opensciencegrid org ubuntu metaprojects simulation trunk docs doxygen tagfiles dataclasses include tmp ncall stack most recent call first n cmakelists txt include n reporter david schultz cc resolution invalid time component cmake summary src read only 
priority normal keywords milestone owner nega type defect | 1 |
11,691 | 2,660,702,333 | IssuesEvent | 2015-03-19 09:42:06 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | [TEST-FAILURE] ClientTxnMultiMapTest failures | Team: Client Type: Defect | com.hazelcast.client.txn.ClientTxnMultiMapTest.testRemove_whenBackedWithList
com.hazelcast.client.txn.ClientTxnMultiMapTest.testGet_whenBackedWithList
```
com.hazelcast.core.HazelcastException: Could not obtain Connection!!!
    at com.hazelcast.client.txn.ClientTransactionManager.getRandomAddress(ClientTransactionManager.java:216)
    at com.hazelcast.client.txn.ClientTransactionManager.connect(ClientTransactionManager.java:140)
    at com.hazelcast.client.txn.TransactionContextProxy.<init>(TransactionContextProxy.java:63)
    at com.hazelcast.client.txn.ClientTransactionManager.newTransactionContext(ClientTransactionManager.java:87)
    at com.hazelcast.client.txn.ClientTransactionManager.newTransactionContext(ClientTransactionManager.java:83)
    at com.hazelcast.client.impl.HazelcastClientInstanceImpl.newTransactionContext(HazelcastClientInstanceImpl.java:343)
    at com.hazelcast.client.impl.HazelcastClientProxy.newTransactionContext(HazelcastClientProxy.java:158)
    at com.hazelcast.client.txn.ClientTxnMultiMapTest.testRemove_whenBackedWithList(ClientTxnMultiMapTest.java:241)
```
https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.x-OpenJDK8/492/testReport/ | 1.0 | [TEST-FAILURE] ClientTxnMultiMapTest failures - com.hazelcast.client.txn.ClientTxnMultiMapTest.testRemove_whenBackedWithList
com.hazelcast.client.txn.ClientTxnMultiMapTest.testGet_whenBackedWithList
```
com.hazelcast.core.HazelcastException: Could not obtain Connection!!!
    at com.hazelcast.client.txn.ClientTransactionManager.getRandomAddress(ClientTransactionManager.java:216)
    at com.hazelcast.client.txn.ClientTransactionManager.connect(ClientTransactionManager.java:140)
    at com.hazelcast.client.txn.TransactionContextProxy.<init>(TransactionContextProxy.java:63)
    at com.hazelcast.client.txn.ClientTransactionManager.newTransactionContext(ClientTransactionManager.java:87)
    at com.hazelcast.client.txn.ClientTransactionManager.newTransactionContext(ClientTransactionManager.java:83)
    at com.hazelcast.client.impl.HazelcastClientInstanceImpl.newTransactionContext(HazelcastClientInstanceImpl.java:343)
    at com.hazelcast.client.impl.HazelcastClientProxy.newTransactionContext(HazelcastClientProxy.java:158)
    at com.hazelcast.client.txn.ClientTxnMultiMapTest.testRemove_whenBackedWithList(ClientTxnMultiMapTest.java:241)
```
https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.x-OpenJDK8/492/testReport/ | defect | clienttxnmultimaptest failures com hazelcast client txn clienttxnmultimaptest testremove whenbackedwithlist com hazelcast client txn clienttxnmultimaptest testget whenbackedwithlist com hazelcast core hazelcastexception could not obtain connection at com hazelcast client txn clienttransactionmanager getrandomaddress clienttransactionmanager java at com hazelcast client txn clienttransactionmanager connect clienttransactionmanager java at com hazelcast client txn transactioncontextproxy transactioncontextproxy java at com hazelcast client txn clienttransactionmanager newtransactioncontext clienttransactionmanager java at com hazelcast client txn clienttransactionmanager newtransactioncontext clienttransactionmanager java at com hazelcast client impl hazelcastclientinstanceimpl newtransactioncontext hazelcastclientinstanceimpl java at com hazelcast client impl hazelcastclientproxy newtransactioncontext hazelcastclientproxy java at com hazelcast client txn clienttxnmultimaptest testremove whenbackedwithlist clienttxnmultimaptest java | 1 |
61,026 | 17,023,583,041 | IssuesEvent | 2021-07-03 02:46:27 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Slow undo of move nodes | Component: merkaartor Priority: major Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 10.41pm, Friday, 30th April 2010]**
Sometimes, when editing a long way, it's easy to accidentally move the way instead of one of its nodes. Undoing this takes several minutes of maxed-out CPU if the way has more than a few hundred nodes. Either undo should be faster, or move mode should be lockable to moving nodes only when that's required. | 1.0 | Slow undo of move nodes - **[Submitted to the original trac issue database at 10.41pm, Friday, 30th April 2010]**
Sometimes, when editing a long way, it's easy to accidentally move the way instead of one of its nodes. Undoing this takes several minutes of maxed-out CPU if the way has more than a few hundred nodes. Either undo should be faster, or move mode should be lockable to moving nodes only when that's required. | defect | slow undo of move nodes sometimes when editing a long way it s easy to accidentally move the way instead of one of its nodes undoing this takes several minutes of maxed out cpu if the way has more than a few hundred nodes either undo should be faster or move mode should be lockable to moving nodes only when that s required | 1 |
59,273 | 17,016,784,735 | IssuesEvent | 2021-07-02 13:10:26 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | opened | mod_tile fails to build debian package with Jessie | Component: mod_tile Priority: minor Type: defect | **[Submitted to the original trac issue database at 1.04pm, Sunday, 18th March 2018]**
I have a tile server and wanted to build a Debian package for mod_tile. But with the current git version, it fails:
dpkg-source -b mod_tile
dpkg-source: error: can't build with source format '3.0 (native)': native package version may not have a revision
After googling I found a solution for this problem here: https://github.com/jamesdbloom/grunt-debian-package/issues/23
The solution is to change the first line of the debian/changelog file from:
libapache2-mod-tile (0.4-12~precise2) precise; urgency=low
to
libapache2-mod-tile (0.4.12~precise2) precise; urgency=low
or add a new line. Can you please add this updated changelog.
thanks
Philipp | 1.0 | mod_tile fails to build debian package with Jessie - **[Submitted to the original trac issue database at 1.04pm, Sunday, 18th March 2018]**
I have a tile server and wanted to build a Debian package for mod_tile. But with the current git version, it fails:
dpkg-source -b mod_tile
dpkg-source: error: can't build with source format '3.0 (native)': native package version may not have a revision
After googling I found a solution for this problem here: https://github.com/jamesdbloom/grunt-debian-package/issues/23
The solution is to change the first line of the debian/changelog file from:
libapache2-mod-tile (0.4-12~precise2) precise; urgency=low
to
libapache2-mod-tile (0.4.12~precise2) precise; urgency=low
or add a new line. Can you please add this updated changelog.
thanks
Philipp | defect | mod tile fails to build debian package with jessie i have a tile server and wanted to build a debian package for mod tile but with the current git version it fails dpkg source b mod tile dpkg source error can t build with source format native native package version may not have a revision after googleing i found a solution for this problem here the solution is to change the first line of the debian changelog file from mod tile precise urgency low to mod tile precise urgency low or add a new line can you please add this updated changelog thanks philipp | 1 |
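The changelog fix described in the report above is a one-line string rewrite. As a minimal sketch (the version strings come from the report itself; everything else is illustrative, not part of the original ticket):

```python
# Sketch of the fix from the report above: the "3.0 (native)" source
# format rejects versions containing a Debian revision ("0.4-12"), so the
# version in the first debian/changelog line is rewritten to the
# revision-free form "0.4.12".
line = "libapache2-mod-tile (0.4-12~precise2) precise; urgency=low"
fixed = line.replace("(0.4-12~precise2)", "(0.4.12~precise2)")
print(fixed)  # → libapache2-mod-tile (0.4.12~precise2) precise; urgency=low
```

Applying this rewrite to the first line of `debian/changelog` (leaving the remaining entries untouched) is exactly the change the report requests.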
355,281 | 25,175,887,307 | IssuesEvent | 2022-11-11 09:13:24 | guowei42/pe | https://api.github.com/repos/guowei42/pe | opened | Clear command could be clearer | severity.VeryLow type.DocumentationBug | Though there is a warning indicating that the clear command will only remove the suppliers panel, I think it would be better to warn the users of potential consequences. For example, some functions in the Inventory section will not work as intended
<!--session: 1668153119072-f8f9ca95-f9d9-4063-9a63-2c63cdf283cf-->
<!--Version: Web v3.4.4--> | 1.0 | Clear command could be clearer - Though there is a warning indicating that the clear command will only remove the suppliers panel, I think it would be better to warn the users of potential consequences. For example, some functions in the Inventory section will not work as intended
<!--session: 1668153119072-f8f9ca95-f9d9-4063-9a63-2c63cdf283cf-->
<!--Version: Web v3.4.4--> | non_defect | clear command could be clearer though there is a warning indicating that the clear command will only remove the suppliers panel i think it would be better to warn the users of potential consequences for example some functions in the inventory section will not work as intended | 0 |
65,124 | 26,986,615,791 | IssuesEvent | 2023-02-09 16:35:05 | gradido/gradido | https://api.github.com/repos/gradido/gradido | closed | 🐛 [Bug] Text Generate Link incorrect | bug service: wallet frontend | <!-- You can find the latest issue templates here https://github.com/ulfgebhardt/issue-templates -->
## 🐛 Bugreport
<!-- Describe your issue in detail. Include screenshots if needed. Give us as much information as possible. Use a clear and concise description of what the bug is.-->
Text Generate Link incorrect(en)

Should be `Check now` or vice versa.

Also Textchange:
```
Wähle einen Betrag aus, welchen du per Link versenden möchtest, und trage eine Nachricht ein.
Die Nachricht ist Pflichtfeld.
``` | 1.0 | 🐛 [Bug] Text Generate Link incorrect - <!-- You can find the latest issue templates here https://github.com/ulfgebhardt/issue-templates -->
## 🐛 Bugreport
<!-- Describe your issue in detail. Include screenshots if needed. Give us as much information as possible. Use a clear and concise description of what the bug is.-->
Text Generate Link incorrect(en)

Should be `Check now` or vice versa.

Also Textchange:
```
Wähle einen Betrag aus, welchen du per Link versenden möchtest, und trage eine Nachricht ein.
Die Nachricht ist Pflichtfeld.
``` | non_defect | 🐛 text generate link incorrect 🐛 bugreport text generate link incorrect en should be check now or vice versa also textchange wähle einen betrag aus welchen du per link versenden möchtest und trage eine nachricht ein die nachricht ist pflichtfeld | 0 |
21,120 | 3,461,696,101 | IssuesEvent | 2015-12-20 09:26:25 | arti01/jkursy | https://api.github.com/repos/arti01/jkursy | closed | newsy z 1 strony | auto-migrated Priority-Medium Type-Defect | ```
Po wybraniu "więcej" dobrze byłoby móc widzieć zdjęcie o szerokości
całej kolumny. Może być mniejsze, ale nie większe niż szerokość kolumny.
To samo w tych kolumnach z boksikami.
```
Original issue reported on code.google.com by `juko...@gmail.com` on 28 Mar 2011 at 7:40 | 1.0 | newsy z 1 strony - ```
Po wybraniu "więcej" dobrze byłoby móc widzieć zdjęcie o szerokości
całej kolumny. Może być mniejsze, ale nie większe niż szerokość kolumny.
To samo w tych kolumnach z boksikami.
```
Original issue reported on code.google.com by `juko...@gmail.com` on 28 Mar 2011 at 7:40 | defect | newsy z strony po wybraniu więcej dobrze byłoby móc widzieć zdjęcie o szerokości całej kolumny może być mniejsze ale nie większe niż szerokość kolumny to samo w tych kolumnach z boksikami original issue reported on code google com by juko gmail com on mar at | 1 |
610,996 | 18,941,876,592 | IssuesEvent | 2021-11-18 04:34:57 | hackforla/tdm-calculator | https://api.github.com/repos/hackforla/tdm-calculator | closed | Revise the TDM Strategies that are included in the School Package | role: back-end level: medium decision p-Feature - Bonus Packages priority: MUST HAVE | ### Overview
Revise the TDM Strategies that are included in the School Package from the Employee Package currently in the Calculator to one that includes the TDM strategies below:
### Action Items
Include only these TDM Strategies in a School Package that is separate from the Employee Package:
- **Bicycle Facilities: Bicycle Parking**
- **Information: Encouragement Program** - select the Voluntary Travel Behavior Change Program option from the menu
- **High-Occupancy Vehicles: HOV Program**
- **Information: School Safety Campaign**
- [ ] Add checkbox for school package on calculator
### Resources/Instructions
The changes are reflected in the [google doc](https://docs.google.com/document/d/1pK9W6Wjeddjbla29l3rsA8F41SpIV8X1pe9eg-W3b7c/edit#).
@entrotech @KPHowley | 1.0 | Revise the TDM Strategies that are included in the School Package - ### Overview
Revise the TDM Strategies that are included in the School Package from the Employee Package currently in the Calculator to one that includes the TDM strategies below:
### Action Items
Include only these TDM Strategies in a School Package that is separate from the Employee Package:
- **Bicycle Facilities: Bicycle Parking**
- **Information: Encouragement Program** - select the Voluntary Travel Behavior Change Program option from the menu
- **High-Occupancy Vehicles: HOV Program**
- **Information: School Safety Campaign**
- [ ] Add checkbox for school package on calculator
### Resources/Instructions
The changes are reflected in the [google doc](https://docs.google.com/document/d/1pK9W6Wjeddjbla29l3rsA8F41SpIV8X1pe9eg-W3b7c/edit#).
@entrotech @KPHowley | non_defect | revise the tdm strategies that are included in the school package overview revise the tdm strategies that are included in the school package from the employee package currently in the calculator to one that includes the tdm strategies below action items include only these tdm strategies in a school package that is separate from the employee package bicycle facilities bicycle parking information encouragement program select the voluntary travel behavior change program option from the menu high occupancy vehicles hov program information school safety campaign add checkbox for school package on calculator resources instructions the changes are reflected in the entrotech kphowley | 0 |
49,509 | 13,187,223,378 | IssuesEvent | 2020-08-13 02:44:23 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | [IceHive] tests gone wild! (Trac #1562) | Incomplete Migration Migrated from Trac combo reconstruction defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1562">https://code.icecube.wisc.edu/ticket/1562</a>, reported by nega and owned by mzoll</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-04-28T16:29:13",
"description": "See #1561\n\n{{{ 21230 ? Rl 26416:47 python /home/nega/i3/combo/src/IceHive/resources/test/i3hiveclusterTest.py}}}\n\n{{{\n(gdb) bt\n#0 0x00007f80e00414fd in write () at ../sysdeps/unix/syscall-template.S:81\n#1 0x00007f80dffc3bff in _IO_new_file_write (f=0x7f80e030f640 <_IO_2_1_stderr_>, data=0x7ffd9bf81300, n=121) at fileops.c:1251\n#2 0x00007f80dffc439f in new_do_write (to_do=121, data=0x7ffd9bf81300 \"INFO (HiveSplitter): DistanceMap built (HiveSplitter.cxx:759 in void HiveSplitter::BuildLookUpTables(const I3Geometry&))\\n\", fp=0x7f80e030f640 <_IO_2_1_stderr_>)\n at fileops.c:506\n#3 _IO_new_file_xsputn (f=0x7f80e030f640 <_IO_2_1_stderr_>, data=<optimized out>, n=121) at fileops.c:1330\n#4 0x00007f80dffb9488 in __GI__IO_fputs (str=0x7ffd9bf81300 \"INFO (HiveSplitter): DistanceMap built (HiveSplitter.cxx:759 in void HiveSplitter::BuildLookUpTables(const I3Geometry&))\\n\", fp=0x7f80e030f640 <_IO_2_1_stderr_>)\n at iofputs.c:40\n#5 0x00007f80de67d127 in boost::serialization::serialize<boost::archive::portable_binary_oarchive, boost::serialization::nvp<I3PODHolder<bool> > > (ar=..., t=..., file_version=32640)\n at /usr/include/boost/serialization/serialization.hpp:66\n#6 0x00007f80cab9cec6 in HiveSplitter::BuildLookUpTables (this=0x7f80de9257d3, geo=...) at ../../src/IceHive/private/IceHive/HiveSplitter.cxx:759\n#7 0x00007f80cac01a88 in I3HiveCluster<I3RecoPulse>::PerformCleaning (this=0x16b2c60, frame=...) 
at ../../src/IceHive/private/IceHive/I3HiveCluster.h:232\n#8 0x00007f80cac001a0 in I3HiveCluster<I3RecoPulse>::Configure (this=0x7ffd9bf82698) at ../../src/IceHive/private/IceHive/I3HiveCluster.h:169\n#9 0x00007f80de64c610 in std::__find_if<__gnu_cxx::__normal_iterator<std::string*, std::vector<std::string, std::allocator<std::string> > >, __gnu_cxx::__ops::_Iter_equals_val<std::string const> > (\n __first=\"P+\\370\\233\\375\\177\\000\\000\\206\\247d\u0780\\177\\000\\000&\\260d\u0780\\177\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000 ,k\\001\\000\\000\\000\\000\\260D\\230\\001\\000\\000\\000\\000 ,k\\001\\000\\000\\000\\000\\200*\\370\\233\\375\\177\\000\\000\\220*\\370\\233\\375\\177\\000\\000\\nU_\u0780\\177\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\030,k\\001\\000\\000\\000\\000 ,k\\001\\000\\000\\000\\000\\000\\037m\\375\\272\\064\\252\\250\\340*\\370\\233\\375\\177\\000\\000\\250T_\u0780\\177\\000\\000 ,k\\001\\000\\000\\000\\000\\320*\\370\\233\\375\\177\\000\\000\\340*\\370\\233\\375\\177\\000\\000\\000,k\\001\\000\\000\\000\\000\\200,\\370\\233\\375\\177\\000\\000\\001\\000\\000\\000\\000\\000\\000\\000P1\\370\\233\\377\\377\\377\\377h,k\\001\", '\\000' <repeats 12 times>..., \n __last=\"P\\354\\324\u0780\\177\\000\\000\\004\\000\\000\\000\\001\\000\\000\\000\\240\\366\\242\\001\\000\\000\\000\\000q\\000\\000\\000\\000\\000\\000\\000A\\000\\000\\000\\000\\000\\000\\000A\\000\\000\\000\\000\\000\\000\\000\\001\\000\\000\\000\\000\\000\\000\\000/home/nega/i3/ports/test-data/IceHive/hivecluster_testcase.i3.bz2\", '\\000' <repeats 15 times>, \"Q\\000\\000\\000\\000\\000\\000\\000\\220+i\u02c0\\177\\000\\000 \\340\\232\\001\\000\\000\\000\\000\\060\\301\\232\\001\", '\\000' <repeats 28 times>, \"@\\340\\232\\001\\000\\000\\000\\000\"..., \n __pred=...) 
at /usr/include/c++/4.9/bits/stl_algo.h:132\n#10 0x00007f80de64b981 in std::_Rb_tree<std::string, std::pair<std::string const, boost::shared_ptr<I3Module> >, std::_Select1st<std::pair<std::string const, boost::shared_ptr<I3Module> > >, std::less<std::string>, std::allocator<std::pair<std::string const, boost::shared_ptr<I3Module> > > >::_M_lower_bound (this=0x16b2c00, __x=0x21, __y=0xa8aa34bafd6d1f00, __k=\"\") at /usr/include/c++/4.9/bits/stl_tree.h:1262\n#11 0x00007f80de64a786 in ?? () from /home/nega/i3/combo/build/lib/libicetray.so\n#12 0x00007f80de64b026 in std::_Rb_tree<double, std::pair<double const, std::string>, std::_Select1st<std::pair<double const, std::string> >, std::less<double>, std::allocator<std::pair<double const, std::string> > >::_M_destroy_node (\n this=0x16b2c00, __p=0x1983fb0) at /usr/include/c++/4.9/bits/stl_tree.h:410\n#13 0x00007f80de64aad4 in __gnu_cxx::hashtable<std::pair<std::string const, boost::any>, std::string, __gnu_cxx::hash<std::string>, std::_Select1st<std::pair<std::string const, boost::any> >, std::equal_to<std::string>, std::allocator<boost::any> >::resize (this=0x1, __num_elements_hint=140727220186528) at /usr/include/c++/4.9/backward/hashtable.h:1055\n#14 0x00007f80de6376b8 in boost::archive::detail::save_pointer_type<boost::archive::xml_oarchive>::polymorphic::save<OMKey> (ar=..., t=...) at /usr/include/boost/archive/detail/oserializer.hpp:387\n#15 0x00007f80de637235 in boost::serialization::nvp<int const>::save<boost::archive::portable_binary_oarchive> (this=0x1958360, ar=...) 
at /usr/include/boost/serialization/nvp.hpp:74\n#16 0x00007f80de7692d0 in boost::python::detail::make_function_aux<bool (I3Tray::*)(std::string const&, std::string const&, std::string const&), boost::python::default_call_policies, boost::mpl::vector5<bool, I3Tray&, std::string const&, std::string const&, std::string const&>, mpl_::int_<0> > (f=\n (bool (I3Tray::*)(I3Tray * const, const std::basic_string<char, std::char_traits<char>, std::allocator<char> > &, const std::basic_string<char, std::char_traits<char>, std::allocator<char> > &, const std::basic_string<char, std::char_traits<char>, std::allocator<char> > &)) 0x7f80cb302950, p=..., kw=...) at /usr/include/boost/python/make_function.hpp:46\n#17 0x00007f80de7674ee in boost::python::class_<I3Tray, boost::noncopyable_::noncopyable, boost::python::detail::not_specified, boost::python::detail::not_specified>::def_impl<I3Tray, I3Tray::param_setter (I3Tray::*)(std::string const&, std::string), boost::python::detail::def_helper<char const*, boost::python::detail::not_specified, boost::python::detail::not_specified, boost::python::detail::not_specified> > (this=0x7ffd9bf83150, name=0x16b9e88 \"\\024rc\u0780\\177\", \n fn=NULL, helper=...) at /usr/include/boost/python/class.hpp:529\n#18 0x00007f80de76561d in I3Tray::SetParameter<std::vector<int, std::allocator<int> > > (\n this=0x7f80de76561d <I3Tray::SetParameter<std::vector<int, std::allocator<int> > >(std::string const&, std::string const&, std::vector<int, std::allocator<int> > const&)+67>, \n module=<error reading variable: Cannot access memory at address 0xffffffffffffffe9>, parameter=<error reading variable: Cannot access memory at address 0x1>, value=std::vector of length 35047862944671, capacity 5957535 = {...})\n at ../../src/icetray/public/icetray/I3Tray.h:230\n#19 0x00007f80dd8cd78d in boost::python::objects::function::call(_object*, _object*) const () from /usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.55.0\n#20 0x00007f80dd8cd9a8 in ?? 
() from /usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.55.0\n#21 0x00007f80dd8d7653 in boost::python::detail::exception_handler::operator()(boost::function0<void> const&) const () from /usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.55.0\n#22 0x00007f80cf6d430e in boost::serialization::make_nvp<I3Particle> (name=0x7ffd9bf82fc0 \"\\020\\060\\370\\233\\375\\177\", t=...) at /usr/include/boost/serialization/nvp.hpp:97\n#23 0x00007ffd9bf83150 in ?? ()\n#24 0x00000000013e0860 in ?? ()\n#25 0x00000000013e0868 in ?? ()\n#26 0x00007ffd9bf82fc0 in ?? ()\n#27 0x00007f80cf6d42e0 in std::stack<unsigned int, std::deque<unsigned int, std::allocator<unsigned int> > >::push (this=0xc3c900000000b8ff, __x=@0xffff40b58948f07d: <error reading variable>) at /usr/include/c++/4.9/bits/stl_stack.h:190\nBacktrace stopped: previous frame inner to this frame (corrupt stack?)\n\n}}}",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1461860953266426",
"component": "combo reconstruction",
"summary": "[IceHive] tests gone wild!",
"priority": "normal",
"keywords": "icehive, tests, SIGPIPE, signal-handler, root",
"time": "2016-02-23T04:56:31",
"milestone": "Long-Term Future",
"owner": "mzoll",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [IceHive] tests gone wild! (Trac #1562) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1562">https://code.icecube.wisc.edu/ticket/1562</a>, reported by nega and owned by mzoll</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-04-28T16:29:13",
"description": "See #1561\n\n{{{ 21230 ? Rl 26416:47 python /home/nega/i3/combo/src/IceHive/resources/test/i3hiveclusterTest.py}}}\n\n{{{\n(gdb) bt\n#0 0x00007f80e00414fd in write () at ../sysdeps/unix/syscall-template.S:81\n#1 0x00007f80dffc3bff in _IO_new_file_write (f=0x7f80e030f640 <_IO_2_1_stderr_>, data=0x7ffd9bf81300, n=121) at fileops.c:1251\n#2 0x00007f80dffc439f in new_do_write (to_do=121, data=0x7ffd9bf81300 \"INFO (HiveSplitter): DistanceMap built (HiveSplitter.cxx:759 in void HiveSplitter::BuildLookUpTables(const I3Geometry&))\\n\", fp=0x7f80e030f640 <_IO_2_1_stderr_>)\n at fileops.c:506\n#3 _IO_new_file_xsputn (f=0x7f80e030f640 <_IO_2_1_stderr_>, data=<optimized out>, n=121) at fileops.c:1330\n#4 0x00007f80dffb9488 in __GI__IO_fputs (str=0x7ffd9bf81300 \"INFO (HiveSplitter): DistanceMap built (HiveSplitter.cxx:759 in void HiveSplitter::BuildLookUpTables(const I3Geometry&))\\n\", fp=0x7f80e030f640 <_IO_2_1_stderr_>)\n at iofputs.c:40\n#5 0x00007f80de67d127 in boost::serialization::serialize<boost::archive::portable_binary_oarchive, boost::serialization::nvp<I3PODHolder<bool> > > (ar=..., t=..., file_version=32640)\n at /usr/include/boost/serialization/serialization.hpp:66\n#6 0x00007f80cab9cec6 in HiveSplitter::BuildLookUpTables (this=0x7f80de9257d3, geo=...) at ../../src/IceHive/private/IceHive/HiveSplitter.cxx:759\n#7 0x00007f80cac01a88 in I3HiveCluster<I3RecoPulse>::PerformCleaning (this=0x16b2c60, frame=...) 
at ../../src/IceHive/private/IceHive/I3HiveCluster.h:232\n#8 0x00007f80cac001a0 in I3HiveCluster<I3RecoPulse>::Configure (this=0x7ffd9bf82698) at ../../src/IceHive/private/IceHive/I3HiveCluster.h:169\n#9 0x00007f80de64c610 in std::__find_if<__gnu_cxx::__normal_iterator<std::string*, std::vector<std::string, std::allocator<std::string> > >, __gnu_cxx::__ops::_Iter_equals_val<std::string const> > (\n __first=\"P+\\370\\233\\375\\177\\000\\000\\206\\247d\u0780\\177\\000\\000&\\260d\u0780\\177\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000 ,k\\001\\000\\000\\000\\000\\260D\\230\\001\\000\\000\\000\\000 ,k\\001\\000\\000\\000\\000\\200*\\370\\233\\375\\177\\000\\000\\220*\\370\\233\\375\\177\\000\\000\\nU_\u0780\\177\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\030,k\\001\\000\\000\\000\\000 ,k\\001\\000\\000\\000\\000\\000\\037m\\375\\272\\064\\252\\250\\340*\\370\\233\\375\\177\\000\\000\\250T_\u0780\\177\\000\\000 ,k\\001\\000\\000\\000\\000\\320*\\370\\233\\375\\177\\000\\000\\340*\\370\\233\\375\\177\\000\\000\\000,k\\001\\000\\000\\000\\000\\200,\\370\\233\\375\\177\\000\\000\\001\\000\\000\\000\\000\\000\\000\\000P1\\370\\233\\377\\377\\377\\377h,k\\001\", '\\000' <repeats 12 times>..., \n __last=\"P\\354\\324\u0780\\177\\000\\000\\004\\000\\000\\000\\001\\000\\000\\000\\240\\366\\242\\001\\000\\000\\000\\000q\\000\\000\\000\\000\\000\\000\\000A\\000\\000\\000\\000\\000\\000\\000A\\000\\000\\000\\000\\000\\000\\000\\001\\000\\000\\000\\000\\000\\000\\000/home/nega/i3/ports/test-data/IceHive/hivecluster_testcase.i3.bz2\", '\\000' <repeats 15 times>, \"Q\\000\\000\\000\\000\\000\\000\\000\\220+i\u02c0\\177\\000\\000 \\340\\232\\001\\000\\000\\000\\000\\060\\301\\232\\001\", '\\000' <repeats 28 times>, \"@\\340\\232\\001\\000\\000\\000\\000\"..., \n __pred=...) 
at /usr/include/c++/4.9/bits/stl_algo.h:132\n#10 0x00007f80de64b981 in std::_Rb_tree<std::string, std::pair<std::string const, boost::shared_ptr<I3Module> >, std::_Select1st<std::pair<std::string const, boost::shared_ptr<I3Module> > >, std::less<std::string>, std::allocator<std::pair<std::string const, boost::shared_ptr<I3Module> > > >::_M_lower_bound (this=0x16b2c00, __x=0x21, __y=0xa8aa34bafd6d1f00, __k=\"\") at /usr/include/c++/4.9/bits/stl_tree.h:1262\n#11 0x00007f80de64a786 in ?? () from /home/nega/i3/combo/build/lib/libicetray.so\n#12 0x00007f80de64b026 in std::_Rb_tree<double, std::pair<double const, std::string>, std::_Select1st<std::pair<double const, std::string> >, std::less<double>, std::allocator<std::pair<double const, std::string> > >::_M_destroy_node (\n this=0x16b2c00, __p=0x1983fb0) at /usr/include/c++/4.9/bits/stl_tree.h:410\n#13 0x00007f80de64aad4 in __gnu_cxx::hashtable<std::pair<std::string const, boost::any>, std::string, __gnu_cxx::hash<std::string>, std::_Select1st<std::pair<std::string const, boost::any> >, std::equal_to<std::string>, std::allocator<boost::any> >::resize (this=0x1, __num_elements_hint=140727220186528) at /usr/include/c++/4.9/backward/hashtable.h:1055\n#14 0x00007f80de6376b8 in boost::archive::detail::save_pointer_type<boost::archive::xml_oarchive>::polymorphic::save<OMKey> (ar=..., t=...) at /usr/include/boost/archive/detail/oserializer.hpp:387\n#15 0x00007f80de637235 in boost::serialization::nvp<int const>::save<boost::archive::portable_binary_oarchive> (this=0x1958360, ar=...) 
at /usr/include/boost/serialization/nvp.hpp:74\n#16 0x00007f80de7692d0 in boost::python::detail::make_function_aux<bool (I3Tray::*)(std::string const&, std::string const&, std::string const&), boost::python::default_call_policies, boost::mpl::vector5<bool, I3Tray&, std::string const&, std::string const&, std::string const&>, mpl_::int_<0> > (f=\n (bool (I3Tray::*)(I3Tray * const, const std::basic_string<char, std::char_traits<char>, std::allocator<char> > &, const std::basic_string<char, std::char_traits<char>, std::allocator<char> > &, const std::basic_string<char, std::char_traits<char>, std::allocator<char> > &)) 0x7f80cb302950, p=..., kw=...) at /usr/include/boost/python/make_function.hpp:46\n#17 0x00007f80de7674ee in boost::python::class_<I3Tray, boost::noncopyable_::noncopyable, boost::python::detail::not_specified, boost::python::detail::not_specified>::def_impl<I3Tray, I3Tray::param_setter (I3Tray::*)(std::string const&, std::string), boost::python::detail::def_helper<char const*, boost::python::detail::not_specified, boost::python::detail::not_specified, boost::python::detail::not_specified> > (this=0x7ffd9bf83150, name=0x16b9e88 \"\\024rc\u0780\\177\", \n fn=NULL, helper=...) at /usr/include/boost/python/class.hpp:529\n#18 0x00007f80de76561d in I3Tray::SetParameter<std::vector<int, std::allocator<int> > > (\n this=0x7f80de76561d <I3Tray::SetParameter<std::vector<int, std::allocator<int> > >(std::string const&, std::string const&, std::vector<int, std::allocator<int> > const&)+67>, \n module=<error reading variable: Cannot access memory at address 0xffffffffffffffe9>, parameter=<error reading variable: Cannot access memory at address 0x1>, value=std::vector of length 35047862944671, capacity 5957535 = {...})\n at ../../src/icetray/public/icetray/I3Tray.h:230\n#19 0x00007f80dd8cd78d in boost::python::objects::function::call(_object*, _object*) const () from /usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.55.0\n#20 0x00007f80dd8cd9a8 in ?? 
() from /usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.55.0\n#21 0x00007f80dd8d7653 in boost::python::detail::exception_handler::operator()(boost::function0<void> const&) const () from /usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.55.0\n#22 0x00007f80cf6d430e in boost::serialization::make_nvp<I3Particle> (name=0x7ffd9bf82fc0 \"\\020\\060\\370\\233\\375\\177\", t=...) at /usr/include/boost/serialization/nvp.hpp:97\n#23 0x00007ffd9bf83150 in ?? ()\n#24 0x00000000013e0860 in ?? ()\n#25 0x00000000013e0868 in ?? ()\n#26 0x00007ffd9bf82fc0 in ?? ()\n#27 0x00007f80cf6d42e0 in std::stack<unsigned int, std::deque<unsigned int, std::allocator<unsigned int> > >::push (this=0xc3c900000000b8ff, __x=@0xffff40b58948f07d: <error reading variable>) at /usr/include/c++/4.9/bits/stl_stack.h:190\nBacktrace stopped: previous frame inner to this frame (corrupt stack?)\n\n}}}",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1461860953266426",
"component": "combo reconstruction",
"summary": "[IceHive] tests gone wild!",
"priority": "normal",
"keywords": "icehive, tests, SIGPIPE, signal-handler, root",
"time": "2016-02-23T04:56:31",
"milestone": "Long-Term Future",
"owner": "mzoll",
"type": "defect"
}
```
</p>
</details>
| defect | tests gone wild trac migrated from json status closed changetime description see n n rl python home nega combo src icehive resources test py n n n gdb bt n in write at sysdeps unix syscall template s n in io new file write f data n at fileops c n in new do write to do data info hivesplitter distancemap built hivesplitter cxx in void hivesplitter buildlookuptables const n fp n at fileops c n io new file xsputn f data n at fileops c n in gi io fputs str info hivesplitter distancemap built hivesplitter cxx in void hivesplitter buildlookuptables const n fp n at iofputs c n in boost serialization serialize ar t file version n at usr include boost serialization serialization hpp n in hivesplitter buildlookuptables this geo at src icehive private icehive hivesplitter cxx n in performcleaning this frame at src icehive private icehive h n in configure this at src icehive private icehive h n in std find if gnu cxx ops iter equals val n first p k k nu k k k k k n last p home nega ports test data icehive hivecluster testcase q i n pred at usr include c bits stl algo h n in std rb tree std std less std allocator m lower bound this x y k at usr include c bits stl tree h n in from home nega combo build lib libicetray so n in std rb tree std std less std allocator m destroy node n this p at usr include c bits stl tree h n in gnu cxx hashtable std string gnu cxx hash std std equal to std allocator resize this num elements hint at usr include c backward hashtable h n in boost archive detail save pointer type polymorphic save ar t at usr include boost archive detail oserializer hpp n in boost serialization nvp save this ar at usr include boost serialization nvp hpp n in boost python detail make function aux mpl int f n bool const const std basic string std allocator const std basic string std allocator const std basic string std allocator p kw at usr include boost python make function hpp n in boost python class def impl this name n fn null helper at usr include boost 
python class hpp n in setparameter n this std string const std string const std vector const n module parameter value std vector of length capacity n at src icetray public icetray h n in boost python objects function call object object const from usr lib linux gnu libboost python so n in from usr lib linux gnu libboost python so n in boost python detail exception handler operator boost const const from usr lib linux gnu libboost python so n in boost serialization make nvp name t at usr include boost serialization nvp hpp n in n in n in n in n in std stack push this x at usr include c bits stl stack h nbacktrace stopped previous frame inner to this frame corrupt stack n n reporter nega cc resolution fixed ts component combo reconstruction summary tests gone wild priority normal keywords icehive tests sigpipe signal handler root time milestone long term future owner mzoll type defect | 1 |
148,906 | 13,250,499,167 | IssuesEvent | 2020-08-19 23:12:47 | boto/boto3 | https://api.github.com/repos/boto/boto3 | closed | SQS Client Send Message Documentation Typo | closed-for-staleness documentation | The documentation uses parameter 'DataType': 'string', the correct parameter is 'DataType': 'String'
Must use capitalized 'String'.
Documentation Snippet:
response = client.send_message(
QueueUrl='string',
MessageBody='string',
DelaySeconds=123,
MessageAttributes={
'string': {
'StringValue': 'string',
'BinaryValue': b'bytes',
'StringListValues': [
'string',
],
'BinaryListValues': [
b'bytes',
],
'DataType': 'string'
}
},
MessageDeduplicationId='string',
MessageGroupId='string'
)
Link to documentation:
https://boto3.readthedocs.io/en/latest/reference/services/sqs.html#SQS.Client.send_message | 1.0 | SQS Client Send Message Documentation Typo - The documentation uses parameter 'DataType': 'string', the correct parameter is 'DataType': 'String'
Must use capitalized 'String'.
Documentation Snippet:
response = client.send_message(
QueueUrl='string',
MessageBody='string',
DelaySeconds=123,
MessageAttributes={
'string': {
'StringValue': 'string',
'BinaryValue': b'bytes',
'StringListValues': [
'string',
],
'BinaryListValues': [
b'bytes',
],
'DataType': 'string'
}
},
MessageDeduplicationId='string',
MessageGroupId='string'
)
Link to documentation:
https://boto3.readthedocs.io/en/latest/reference/services/sqs.html#SQS.Client.send_message | non_defect | sqs client send message documentation typo the documentation uses parameter datatype string the correct parameter is datatype string must use capitalized string documentation snippet response client send message queueurl string messagebody string delayseconds messageattributes string stringvalue string binaryvalue b bytes stringlistvalues string binarylistvalues b bytes datatype string messagededuplicationid string messagegroupid string link to documentation | 0 |
34,416 | 7,451,224,629 | IssuesEvent | 2018-03-29 01:38:50 | kerdokullamae/test_koik_issued | https://api.github.com/repos/kerdokullamae/test_koik_issued | closed | Nimistu arvutatud kogused | P: high R: fixed T: defect | **Reported by sven syld on 13 Mar 2013 10:50 UTC**
'''Object'''
[Nimistu detailvaade](http://test.raju.teepub/et/directory/view/110000000179/?page=1&resultsPerPage=10)
'''Description'''
Kogus "Säilikute arv" on kaks korda. Üks neist on ilmselt nimistule sisestatud "planeeritud" säilikute arv, teine on säilikute "säilikute arv" pealt arvutatud kogus.
''Säilikute arv: 1115, Säilikute arv (tegelik): 1115, Alatise säilitustähtajaga (tegelik): 1115, Pikaajalise säilitustähtajaga (tegelik): 0, Hoidlaga seotud (tegelik): 1115, Karpe (tegelik): 0, Kirjeldusüksuste arv (tegelik): 1115, Lehti: 116046, Säilikute arv: 0''
'''Todo'''
Uurida välja tegelik põhjus ja:
1) kui on tegemist erinevate ühikutega, siis nimetada üks neist ümber
2) kui on sama ühik, siis väljanäitamisel eelistada päritud numbrit. | 1.0 | Nimistu arvutatud kogused - **Reported by sven syld on 13 Mar 2013 10:50 UTC**
'''Object'''
[Nimistu detailvaade](http://test.raju.teepub/et/directory/view/110000000179/?page=1&resultsPerPage=10)
'''Description'''
Kogus "Säilikute arv" on kaks korda. Üks neist on ilmselt nimistule sisestatud "planeeritud" säilikute arv, teine on säilikute "säilikute arv" pealt arvutatud kogus.
''Säilikute arv: 1115, Säilikute arv (tegelik): 1115, Alatise säilitustähtajaga (tegelik): 1115, Pikaajalise säilitustähtajaga (tegelik): 0, Hoidlaga seotud (tegelik): 1115, Karpe (tegelik): 0, Kirjeldusüksuste arv (tegelik): 1115, Lehti: 116046, Säilikute arv: 0''
'''Todo'''
Uurida välja tegelik põhjus ja:
1) kui on tegemist erinevate ühikutega, siis nimetada üks neist ümber
2) kui on sama ühik, siis väljanäitamisel eelistada päritud numbrit. | defect | nimistu arvutatud kogused reported by sven syld on mar utc object description kogus säilikute arv on kaks korda üks neist on ilmselt nimistule sisestatud planeeritud säilikute arv teine on säilikute säilikute arv pealt arvutatud kogus säilikute arv säilikute arv tegelik alatise säilitustähtajaga tegelik pikaajalise säilitustähtajaga tegelik hoidlaga seotud tegelik karpe tegelik kirjeldusüksuste arv tegelik lehti säilikute arv todo uurida välja tegelik põhjus ja kui on tegemist erinevate ühikutega siis nimetada üks neist ümber kui on sama ühik siis väljanäitamisel eelistada päritud numbrit | 1 |
253,953 | 8,067,981,746 | IssuesEvent | 2018-08-05 14:54:55 | DedSecInside/TorBot | https://api.github.com/repos/DedSecInside/TorBot | closed | Allow user to configure IP Address and Port | ENHANCEMENT MED PRIORITY NEW FEATURE | So currently TorBot runs on localhost (127.0.0.1) and port 9050, we'd like to add `--ip` & `--port` flags to allow a user to optionally give a new ip address or port. | 1.0 | Allow user to configure IP Address and Port - So currently TorBot runs on localhost (127.0.0.1) and port 9050, we'd like to add `--ip` & `--port` flags to allow a user to optionally give a new ip address or port. | non_defect | allow user to configure ip address and port so currently torbot runs on localhost and port we d like to add ip port flags to allow a user to optionally give a new ip address or port | 0 |
48,868 | 13,184,761,205 | IssuesEvent | 2020-08-12 20:02:43 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | radioeventbrowser (Trac #344) | Incomplete Migration Migrated from Trac RASTA defect | <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/344
, reported by tobias and owned by sboeser_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-01-17T05:10:49",
"description": "radioeventbrowser not build by default. Have to specify -DBUILD_TOPEVENTBROWSER=True by compiling",
"reporter": "tobias",
"cc": "",
"resolution": "fixed",
"_ts": "1326777049000000",
"component": "RASTA",
"summary": "radioeventbrowser",
"priority": "normal",
"keywords": "",
"time": "2012-01-16T14:11:28",
"milestone": "",
"owner": "sboeser",
"type": "defect"
}
```
</p>
</details>
| 1.0 | radioeventbrowser (Trac #344) - <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/344
, reported by tobias and owned by sboeser_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-01-17T05:10:49",
"description": "radioeventbrowser not build by default. Have to specify -DBUILD_TOPEVENTBROWSER=True by compiling",
"reporter": "tobias",
"cc": "",
"resolution": "fixed",
"_ts": "1326777049000000",
"component": "RASTA",
"summary": "radioeventbrowser",
"priority": "normal",
"keywords": "",
"time": "2012-01-16T14:11:28",
"milestone": "",
"owner": "sboeser",
"type": "defect"
}
```
</p>
</details>
| defect | radioeventbrowser trac migrated from reported by tobias and owned by sboeser json status closed changetime description radioeventbrowser not build by default have to specify dbuild topeventbrowser true by compiling reporter tobias cc resolution fixed ts component rasta summary radioeventbrowser priority normal keywords time milestone owner sboeser type defect | 1 |
179,766 | 14,711,852,120 | IssuesEvent | 2021-01-05 08:05:25 | camunda/camunda-modeler | https://api.github.com/repos/camunda/camunda-modeler | closed | Update Element Templates Documentation to Feature Versions | documentation needs review | __What should we do?__
<!-- Clearly describe the activity we should carry out. -->
Document element template versions and the rules by which element templates are updated to a new version.
----
Related to https://github.com/camunda/camunda-modeler/pull/2025
Child of https://github.com/camunda/camunda-modeler/issues/1969 | 1.0 | Update Element Templates Documentation to Feature Versions - __What should we do?__
<!-- Clearly describe the activity we should carry out. -->
Document element template versions and the rules by which element templates are updated to a new version.
----
Related to https://github.com/camunda/camunda-modeler/pull/2025
Child of https://github.com/camunda/camunda-modeler/issues/1969 | non_defect | update element templates documentation to feature versions what should we do document element template versions and the rules by which element templates are updated to a new version related to child of | 0 |
74,057 | 24,922,485,551 | IssuesEvent | 2022-10-31 02:31:00 | line/armeria | https://api.github.com/repos/line/armeria | opened | Do not set a `0` content-length for a chunked response of HTTP `HEAD` | defect | The HTTP [HEAD](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/HEAD) method should return the same `Content-Length` as the GET method.
A `Content-Length` is intentionally unspecified for chunked encoding but the actual content is empty for the HEAD method.
```java
// Assume a request method is HEAD and a service returns the body with chunked encoding.
ResponseHeaders headers = ...;
assert headers.contentLength() == -1; // Undefined
HttpResponse response = HttpResponse.of(headers);
// The undefined content-length (-1) is overwritten to 0 due to the empty content.
assert response.aggregate().join().contentLength() == 0;
```
We should preserve the null content-length and not set it to 0. | 1.0 | Do not set a `0` content-length for a chunked response of HTTP `HEAD` - The HTTP [HEAD](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/HEAD) method should return the same `Content-Length` as the GET method.
A `Content-Length` is intentionally unspecified for chunked encoding but the actual content is empty for the HEAD method.
```java
// Assume a request method is HEAD and a service returns the body with chunked encoding.
ResponseHeaders headers = ...;
assert headers.contentLength() == -1; // Undefined
HttpResponse response = HttpResponse.of(headers);
// The undefined content-length (-1) is overwritten to 0 due to the empty content.
assert response.aggregate().join().contentLength() == 0;
```
We should preserve the null content-length and not set it to 0. | defect | do not set a content length for a chunked response of http head the http method should return the same content length as the get method a content length is intentionally unspecified for chunked encoding but the actual content is empty for the head method java assume a request method is head and a service returns the body with chunked encoding responseheaders headers assert headers contentlength undefined httpresponse response httpresponse of headers the undefined content length is overwritten to due to the empty content assert response aggregate join contentlength we should preserve the null content length and not set it to | 1 |
61,410 | 17,023,687,186 | IssuesEvent | 2021-07-03 03:18:19 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Change not update | Component: mapnik Priority: major Resolution: wontfix Type: defect | **[Submitted to the original trac issue database at 3.00pm, Monday, 14th March 2011]**
I have update some areas in
http://www.openstreetmap.org/?lat=8.503&lon=-72.303&zoom=10&layers=M
and
http://www.openstreetmap.org/?lat=9.392&lon=-70.563&zoom=10&layers=M
this area only seen at higher values zoom > 13
http://www.openstreetmap.org/?lat=9.4025&lon=-70.5156&zoom=13&layers=M
In this area mapnik no update
http://www.openstreetmap.org/?lat=8.1692&lon=-72.2054&zoom=14&layers=M
| 1.0 | Change not update - **[Submitted to the original trac issue database at 3.00pm, Monday, 14th March 2011]**
I have update some areas in
http://www.openstreetmap.org/?lat=8.503&lon=-72.303&zoom=10&layers=M
and
http://www.openstreetmap.org/?lat=9.392&lon=-70.563&zoom=10&layers=M
this area only seen at higher values zoom > 13
http://www.openstreetmap.org/?lat=9.4025&lon=-70.5156&zoom=13&layers=M
In this area mapnik no update
http://www.openstreetmap.org/?lat=8.1692&lon=-72.2054&zoom=14&layers=M
| defect | change not update i have update some areas in and this area only seen at higher values zoom in this area mapnik no update | 1 |
16,083 | 2,870,865,197 | IssuesEvent | 2015-06-07 16:06:44 | suian20/Isheep | https://api.github.com/repos/suian20/Isheep | closed | New bug found | auto-migrated Priority-Medium Type-Defect | ```
DEBUG SESSION START! Tue Mar 20 21:39:16 GMT+01:00 2012
Droidsheep path:
/data/data/de.trier.infsec.koch.droidsheep/files/droidsheep
ARPSPoof Path: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof
Testing SU
Error in SetupHelper 1:
java.io.IOException: Error running exec(). Command: [busybox] Working
Directory: null Environment: null
-rwxrwxrwx app_103 app_103 116992 2012-03-20 21:31 droidsheep
-rwxrwxrwx app_103 app_103 32256 2012-03-20 21:31 arpspoof
executing command: chmod 777
/data/data/de.trier.infsec.koch.droidsheep/files/droidsheep
Error with command: chmod 777
/data/data/de.trier.infsec.koch.droidsheep/files/droidsheep: killall: not
foundkillall: not foundkillall: not foundkillall: not foundkillall: not
foundkillall: not foundkillall: not foundkillall: not foundkillall: not found
executing command: chmod 777
/data/data/de.trier.infsec.koch.droidsheep/files/arpspoof
executing command: killall droidsheep
Error with command: killall droidsheep
: killall: not found
executing command: chmod 777
/data/data/de.trier.infsec.koch.droidsheep/files/arpspoof
executing command: chmod 777
/data/data/de.trier.infsec.koch.droidsheep/files/droidsheep
executing command: echo 1 > /proc/sys/net/ipv4/ip_forward
executing command:
/data/data/de.trier.infsec.koch.droidsheep/files/droidsheep
executing command: iptables -F
command: /data/data/de.trier.infsec.koch.droidsheep/files/droidsheepline:
executing command: iptables -t nat -F
executing command: iptables -t nat -I POSTROUTING -s 0/0 -j MASQUERADE
executing command: iptables -P FORWARD ACCEPT
executing command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof
-s 1 -i eth0 ***.***.*.***
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line:
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
***.***.*.*** = IP filter
Goodluck!
```
Original issue reported on code.google.com by `maartenb...@gmail.com` on 20 Mar 2012 at 8:47 | 1.0 | New bug found - ```
DEBUG SESSION START! Tue Mar 20 21:39:16 GMT+01:00 2012
Droidsheep path:
/data/data/de.trier.infsec.koch.droidsheep/files/droidsheep
ARPSPoof Path: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof
Testing SU
Error in SetupHelper 1:
java.io.IOException: Error running exec(). Command: [busybox] Working
Directory: null Environment: null
-rwxrwxrwx app_103 app_103 116992 2012-03-20 21:31 droidsheep
-rwxrwxrwx app_103 app_103 32256 2012-03-20 21:31 arpspoof
executing command: chmod 777
/data/data/de.trier.infsec.koch.droidsheep/files/droidsheep
Error with command: chmod 777
/data/data/de.trier.infsec.koch.droidsheep/files/droidsheep: killall: not
foundkillall: not foundkillall: not foundkillall: not foundkillall: not
foundkillall: not foundkillall: not foundkillall: not foundkillall: not found
executing command: chmod 777
/data/data/de.trier.infsec.koch.droidsheep/files/arpspoof
executing command: killall droidsheep
Error with command: killall droidsheep
: killall: not found
executing command: chmod 777
/data/data/de.trier.infsec.koch.droidsheep/files/arpspoof
executing command: chmod 777
/data/data/de.trier.infsec.koch.droidsheep/files/droidsheep
executing command: echo 1 > /proc/sys/net/ipv4/ip_forward
executing command:
/data/data/de.trier.infsec.koch.droidsheep/files/droidsheep
executing command: iptables -F
command: /data/data/de.trier.infsec.koch.droidsheep/files/droidsheepline:
executing command: iptables -t nat -F
executing command: iptables -t nat -I POSTROUTING -s 0/0 -j MASQUERADE
executing command: iptables -P FORWARD ACCEPT
executing command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof
-s 1 -i eth0 ***.***.*.***
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line:
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
command: /data/data/de.trier.infsec.koch.droidsheep/files/arpspoof -s 1 -i
eth0 ***.***.*.***line: 50:cc:f8:76:9c:3b ff:ff:ff:ff:ff:ff 0806 42: arp reply
***.***.*.*** is-at 50:cc:f8:76:9c:3b
***.***.*.*** = IP filter
Goodluck!
```
Original issue reported on code.google.com by `maartenb...@gmail.com` on 20 Mar 2012 at 8:47 | defect | new bug found debug session start tue mar gmt droidsheep path data data de trier infsec koch droidsheep files droidsheep arpspoof path data data de trier infsec koch droidsheep files arpspoof testing su error in setuphelper java io ioexception error running exec command working directory null environment null rwxrwxrwx app app droidsheep rwxrwxrwx app app arpspoof executing command chmod data data de trier infsec koch droidsheep files droidsheep error with command chmod data data de trier infsec koch droidsheep files droidsheep killall not foundkillall not foundkillall not foundkillall not foundkillall not foundkillall not foundkillall not foundkillall not foundkillall not found executing command chmod data data de trier infsec koch droidsheep files arpspoof executing command killall droidsheep error with command killall droidsheep killall not found executing command chmod data data de trier infsec koch droidsheep files arpspoof executing command chmod data data de trier infsec koch droidsheep files droidsheep executing command echo proc sys net ip forward executing command data data de trier infsec koch droidsheep files droidsheep executing command iptables f command data data de trier infsec koch droidsheep files droidsheepline executing command iptables t nat f executing command iptables t nat i postrouting s j masquerade executing command iptables p forward accept executing command data data de trier infsec koch droidsheep files arpspoof s i command data data de trier infsec koch droidsheep files arpspoof s i line command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command 
data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i 
line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc command data data de trier infsec koch droidsheep files arpspoof s i line cc ff ff ff ff ff ff arp reply is at cc ip filter goodluck original issue reported on code google com by maartenb gmail com on mar at | 1 |
488,859 | 14,087,413,015 | IssuesEvent | 2020-11-05 06:21:51 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.apple.com - desktop site instead of mobile site | browser-firefox-mobile engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/61141 -->
**URL**: https://www.apple.com/shop
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
I don't know what go wrong and happens to my tv tcl too
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200316183117</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/11/d50a1eed-2ada-4db0-ba54-6e92bb2d79ba)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.apple.com - desktop site instead of mobile site - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/61141 -->
**URL**: https://www.apple.com/shop
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
I don't know what went wrong, and it happens on my TCL TV too
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200316183117</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/11/d50a1eed-2ada-4db0-ba54-6e92bb2d79ba)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_defect | desktop site instead of mobile site url browser version firefox mobile operating system android tested another browser no problem type desktop site instead of mobile site description desktop site instead of mobile site steps to reproduce i don t know what go wrong and happens to my tv tcl too browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 0 |
7,701 | 2,610,433,691 | IssuesEvent | 2015-02-26 20:22:07 | chrsmith/scribefire-chrome | https://api.github.com/repos/chrsmith/scribefire-chrome | opened | 'Save Progress' button publishes post | auto-migrated Priority-Medium Type-Defect | ```
What's the problem?
When I click on the 'Save Progress' button, Scribefire actually publishes the
post, even though it's not finished.
What browser are you using?
Chrome 34.0.1847.131
What Operating system are you using
Mavericks [OS X 10.9.2]
What version of ScribeFire are you running?
4.2.4
What Blog Type are you having this problem with? Please include version #
if known or applicable
Blogger-version unknown, whatever's current on the site
```
-----
Original issue reported on code.google.com by `margotvp...@gmail.com` on 18 May 2014 at 4:04 | 1.0 | 'Save Progress' button publishes post - ```
What's the problem?
When I click on the 'Save Progress' button, Scribefire actually publishes the
post, even though it's not finished.
What browser are you using?
Chrome 34.0.1847.131
What Operating system are you using
Mavericks [OS X 10.9.2]
What version of ScribeFire are you running?
4.2.4
What Blog Type are you having this problem with? Please include version #
if known or applicable
Blogger-version unknown, whatever's current on the site
```
-----
Original issue reported on code.google.com by `margotvp...@gmail.com` on 18 May 2014 at 4:04 | defect | save progress button publishes post what s the problem when i click on the save progress button scribefire actually publishes the post even though it s not finished what browser are you using chrome what operating system are you using mavericks what version of scribefire are you running what blog type are you having this problem with please include version if known or applicable blogger version unknown whatever s current on the site original issue reported on code google com by margotvp gmail com on may at | 1 |
80,435 | 30,285,809,629 | IssuesEvent | 2023-07-08 17:07:17 | vector-im/element-x-ios | https://api.github.com/repos/vector-im/element-x-ios | opened | Screen transitions are clunky when opening a room from a push | T-Defect | ### Steps to reproduce
1. Tap on a push notification to open it
2. Observe that the app shows other spurious rooms (room list and/or previously viewed room) before slowly popping to open the right room.
### Outcome
#### What did you expect?
The app should instantly open the right room rather than traversing other rooms/views along the way.
### Your phone model
_No response_
### Operating system version
_No response_
### Application version
279
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Screen transitions are clunky when opening a room from a push - ### Steps to reproduce
1. Tap on a push notification to open it
2. Observe that the app shows other spurious rooms (room list and/or previously viewed room) before slowly popping to open the right room.
### Outcome
#### What did you expect?
The app should instantly open the right room rather than traversing other rooms/views along the way.
### Your phone model
_No response_
### Operating system version
_No response_
### Application version
279
### Homeserver
_No response_
### Will you send logs?
No | defect | screen transitions are clunky when opening a room from a push steps to reproduce tap on a push notification to open it observe that the app shows other spurious rooms room list and or previously viewed room before slowly popping to open the right room outcome what did you expect the app should instantly open the right room rather than traversing other rooms views along the way your phone model no response operating system version no response application version homeserver no response will you send logs no | 1 |
33,677 | 7,744,872,670 | IssuesEvent | 2018-05-29 16:32:12 | PapirusDevelopmentTeam/papirus-icon-theme | https://api.github.com/repos/PapirusDevelopmentTeam/papirus-icon-theme | closed | [Icon Request] RubyMine | hardcoded invalid | [RubyMine](https://www.jetbrains.com/ruby/) The most intelligent Ruby and Rails IDE

```
[Desktop Entry]
Version=1.0
Type=Application
Name=RubyMine Release
Icon=/home/toby/.local/share/JetBrains/Toolbox/apps/RubyMine/ch-1/181.4203.562/.icon.svg
Exec="/home/toby/.local/share/JetBrains/Toolbox/apps/RubyMine/ch-1/181.4203.562/bin/rubymine.sh" %f
Comment=The most intelligent Ruby IDE
Categories=Development;IDE;
Terminal=false
StartupWMClass=jetbrains-rubymine
``` | 1.0 | [Icon Request] RubyMine - [RubyMine](https://www.jetbrains.com/ruby/) The most intelligent Ruby and Rails IDE

```
[Desktop Entry]
Version=1.0
Type=Application
Name=RubyMine Release
Icon=/home/toby/.local/share/JetBrains/Toolbox/apps/RubyMine/ch-1/181.4203.562/.icon.svg
Exec="/home/toby/.local/share/JetBrains/Toolbox/apps/RubyMine/ch-1/181.4203.562/bin/rubymine.sh" %f
Comment=The most intelligent Ruby IDE
Categories=Development;IDE;
Terminal=false
StartupWMClass=jetbrains-rubymine
``` | non_defect | rubymine the most intelligent ruby and rails ide version type application name rubymine release icon home toby local share jetbrains toolbox apps rubymine ch icon svg exec home toby local share jetbrains toolbox apps rubymine ch bin rubymine sh f comment the most intelligent ruby ide categories development ide terminal false startupwmclass jetbrains rubymine | 0 |
10,662 | 7,268,155,418 | IssuesEvent | 2018-02-20 09:05:16 | owncloud/core | https://api.github.com/repos/owncloud/core | reopened | Avatar gets updated at every login with LDAP | app:user_ldap bug feature:avatars performance sev3-medium | ### Steps
1. Setup LDAP with avatars and email address
2. Do a curl operation with basic auth
3. Debug into the avatar code
### Expected result
Avatar only updated once
### Actual result
Avatar updated for **every** login.
This causes file operations like `getDirectoryContents` to delete all avatars and re-set the new one.
This should be debounced and done only once an hour or so, or whichever LDAP TTL is set.
This can make the connection slower for clients that do not support sessions, like Windows Webdav mounts or Linux file managers.
@jvillafanez @DeepDiver1975
| True | Avatar gets updated at every login with LDAP - ### Steps
1. Setup LDAP with avatars and email address
2. Do a curl operation with basic auth
3. Debug into the avatar code
### Expected result
Avatar only updated once
### Actual result
Avatar updated for **every** login.
This causes file operations like `getDirectoryContents` to delete all avatars and re-set the new one.
This should be debounced and done only once an hour or so, or whichever LDAP TTL is set.
This can make the connection slower for clients that do not support sessions, like Windows Webdav mounts or Linux file managers.
@jvillafanez @DeepDiver1975
| non_defect | avatar gets updated at every login with ldap steps setup ldap with avatars and email address do a curl operation with basic auth debug into the avatar code expected result avatar only updated once actual result avatar updated for every login this causes file operations like getdirectorycontents to delete all avatars and re set the new one this should be debounced and done only once an hour or so or whichever ldap ttl is set this can make the connection slower for clients that do not support sessions like windows webdav mounts or linux file managers jvillafanez | 0 |
48,814 | 13,184,748,964 | IssuesEvent | 2020-08-12 20:01:24 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | libarchive review (Trac #261) | Incomplete Migration Migrated from Trac combo core defect | <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/261
, reported by nega and owned by olivas_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:57",
"description": "See: #IT282",
"reporter": "nega",
"cc": "",
"resolution": "worksforme",
"_ts": "1416713877165085",
"component": "combo core",
"summary": "libarchive review",
"priority": "normal",
"keywords": "libarchive",
"time": "2011-05-11T20:39:48",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| 1.0 | libarchive review (Trac #261) - <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/261
, reported by nega and owned by olivas_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:57",
"description": "See: #IT282",
"reporter": "nega",
"cc": "",
"resolution": "worksforme",
"_ts": "1416713877165085",
"component": "combo core",
"summary": "libarchive review",
"priority": "normal",
"keywords": "libarchive",
"time": "2011-05-11T20:39:48",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| defect | libarchive review trac migrated from reported by nega and owned by olivas json status closed changetime description see reporter nega cc resolution worksforme ts component combo core summary libarchive review priority normal keywords libarchive time milestone owner olivas type defect | 1 |
271,245 | 8,481,819,100 | IssuesEvent | 2018-10-25 16:44:04 | ClangBuiltLinux/linux | https://api.github.com/repos/ClangBuiltLinux/linux | closed | -Wpointer-bool-conversion in drivers/net/ethernet/intel/i40e/i40e_debugfs.c | -Wpointer-bool-conversion [BUG] linux [PATCH] Accepted low priority | ```
drivers/net/ethernet/intel/i40e/i40e_debugfs.c:136:9: warning: address of array 'vsi->active_vlans' will
always evaluate to 'true' [-Wpointer-bool-conversion]
vsi->active_vlans ? "<valid>" : "<null>");
~~~~~^~~~~~~~~~~~ ~
./include/linux/device.h:1424:33: note: expanded from macro 'dev_info'
_dev_info(dev, dev_fmt(fmt), ##__VA_ARGS__)
^~~~~~~~~~~
``` | 1.0 | -Wpointer-bool-conversion in drivers/net/ethernet/intel/i40e/i40e_debugfs.c - ```
drivers/net/ethernet/intel/i40e/i40e_debugfs.c:136:9: warning: address of array 'vsi->active_vlans' will
always evaluate to 'true' [-Wpointer-bool-conversion]
vsi->active_vlans ? "<valid>" : "<null>");
~~~~~^~~~~~~~~~~~ ~
./include/linux/device.h:1424:33: note: expanded from macro 'dev_info'
_dev_info(dev, dev_fmt(fmt), ##__VA_ARGS__)
^~~~~~~~~~~
``` | non_defect | wpointer bool conversion in drivers net ethernet intel debugfs c drivers net ethernet intel debugfs c warning address of array vsi active vlans will always evaluate to true vsi active vlans include linux device h note expanded from macro dev info dev info dev dev fmt fmt va args | 0 |
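The clang diagnostic quoted in this record fires because `active_vlans` is an embedded array member, so its address can never be NULL and the ternary condition is a compile-time constant. A minimal C sketch (hypothetical simplified struct, not the kernel's real `struct i40e_vsi` definition) of the always-true pattern and a contents-based check a fix could use instead:

```c
#include <stddef.h>

/* Hypothetical stand-in for the kernel's struct i40e_vsi: active_vlans
 * is an embedded array, not a pointer, so its address is never NULL. */
struct vsi {
    unsigned long active_vlans[4];
};

/* Mirrors the warned-about debugfs line: the array member decays to a
 * pointer to its first element, which is always non-NULL, so the
 * condition is constant and clang reports -Wpointer-bool-conversion. */
static const char *vlans_label_buggy(const struct vsi *v)
{
    return v->active_vlans ? "<valid>" : "<null>";
}

/* A meaningful replacement inspects the contents instead: is any VLAN
 * bit actually set in the bitmap? */
static int any_vlan_set(const struct vsi *v)
{
    for (size_t i = 0; i < sizeof v->active_vlans / sizeof v->active_vlans[0]; i++)
        if (v->active_vlans[i])
            return 1;
    return 0;
}
```

Since the NULL test can never fail, the usual fix is to drop it (or, as sketched, test the data it was meant to guard).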
69,263 | 22,303,713,879 | IssuesEvent | 2022-06-13 11:05:42 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Composer overflow on long input | T-Defect X-Regression S-Major X-Release-Blocker A-Composer O-Occasional Team: Delight | ### Steps to reproduce
1. Input `aaa...`
### Outcome
#### What did you expect?
the new composer should not overflow.
#### What happened instead?
It overflows.

### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
localhost
### Application version
develop branch
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Composer overflow on long input - ### Steps to reproduce
1. Input `aaa...`
### Outcome
#### What did you expect?
the new composer should not overflow.
#### What happened instead?
It overflows.

### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
localhost
### Application version
develop branch
### Homeserver
_No response_
### Will you send logs?
No | defect | composer overflow on long input steps to reproduce input aaa outcome what did you expect the new composer should not overflow what happened instead it overflows operating system no response browser information no response url for webapp localhost application version develop branch homeserver no response will you send logs no | 1 |
11,185 | 13,194,854,140 | IssuesEvent | 2020-08-13 17:34:21 | opendistro-for-elasticsearch/alerting | https://api.github.com/repos/opendistro-for-elasticsearch/alerting | closed | Compatibility with Elasticsearch 7.5.1 | version compatibility | As 7.5.1 has been released, would be great to get support for it. | True | Compatibility with Elasticsearch 7.5.1 - As 7.5.1 has been released, would be great to get support for it. | non_defect | compatibility with elasticsearch as has been released would be great to get support for it | 0 |
52,384 | 13,224,708,732 | IssuesEvent | 2020-08-17 19:41:09 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | [clsim] fix I3XMLSummaryService usage for benchmark (Trac #2140) | Incomplete Migration Migrated from Trac combo simulation defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2140">https://code.icecube.wisc.edu/projects/icecube/ticket/2140</a>, reported by david.schultz and owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2018-03-06T22:43:52",
"_ts": "1520376232671387",
"description": "The current benchmark script is broken because it uses the I3XMLSummaryService, which was removed from simulation a while ago. Fix it.",
"reporter": "david.schultz",
"cc": "claudio.kopper, cweaver",
"resolution": "fixed",
"time": "2018-03-06T22:20:00",
"component": "combo simulation",
"summary": "[clsim] fix I3XMLSummaryService usage for benchmark",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [clsim] fix I3XMLSummaryService usage for benchmark (Trac #2140) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2140">https://code.icecube.wisc.edu/projects/icecube/ticket/2140</a>, reported by david.schultz and owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2018-03-06T22:43:52",
"_ts": "1520376232671387",
"description": "The current benchmark script is broken because it uses the I3XMLSummaryService, which was removed from simulation a while ago. Fix it.",
"reporter": "david.schultz",
"cc": "claudio.kopper, cweaver",
"resolution": "fixed",
"time": "2018-03-06T22:20:00",
"component": "combo simulation",
"summary": "[clsim] fix I3XMLSummaryService usage for benchmark",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
| defect | fix usage for benchmark trac migrated from json status closed changetime ts description the current benchmark script is broken because it uses the which was removed from simulation a while ago fix it reporter david schultz cc claudio kopper cweaver resolution fixed time component combo simulation summary fix usage for benchmark priority major keywords milestone owner david schultz type defect | 1 |
328,775 | 24,199,219,264 | IssuesEvent | 2022-09-24 10:07:23 | www-splitcells-net/net.splitcells.network | https://api.github.com/repos/www-splitcells-net/net.splitcells.network | closed | Blog why accessibility was not a relevant issue yet. | documentation | * [ ] The following article reminded my, that accessibility was not really tackled yet, regarding the most important stuff like installing the software: https://itvision.altervista.org/why.linux.is.not.ready.for.the.desktop.current.html | 1.0 | Blog why accessibility was not a relevant issue yet. - * [ ] The following article reminded my, that accessibility was not really tackled yet, regarding the most important stuff like installing the software: https://itvision.altervista.org/why.linux.is.not.ready.for.the.desktop.current.html | non_defect | blog why accessibility was not a relevant issue yet the following article reminded my that accessibility was not really tackled yet regarding the most important stuff like installing the software | 0 |
105,881 | 9,102,707,607 | IssuesEvent | 2019-02-20 14:23:24 | zahedmohammed/testingApi | https://api.github.com/repos/zahedmohammed/testingApi | closed | test2531 : ApiV1PrimaryTransactionIdGetRoleAdminDisallowedRbac | test2531 test2531 | Project : test2531
Job : Default
Env : Default
Category : null
Tags : null
Severity : null
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 200
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 20 Feb 2019 14:21:52 GMT]}
Endpoint : http://54.215.136.217/api/v1/primary-transaction/1c4BOV1L
Request :
Response :
{
"requestId" : "None",
"requestTime" : "2019-02-20T14:21:52.192+0000",
"errors" : true,
"messages" : [ {
"type" : "ERROR",
"key" : "",
"value" : null
} ],
"data" : null,
"totalPages" : 0,
"totalElements" : 0
}
Logs :
Assertion [@StatusCode == 401 OR @StatusCode == 403] resolved-to [200 == 401 OR 200 == 403] result [Failed]
--- FX Bot --- | 2.0 | test2531 : ApiV1PrimaryTransactionIdGetRoleAdminDisallowedRbac - Project : test2531
Job : Default
Env : Default
Category : null
Tags : null
Severity : null
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 200
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 20 Feb 2019 14:21:52 GMT]}
Endpoint : http://54.215.136.217/api/v1/primary-transaction/1c4BOV1L
Request :
Response :
{
"requestId" : "None",
"requestTime" : "2019-02-20T14:21:52.192+0000",
"errors" : true,
"messages" : [ {
"type" : "ERROR",
"key" : "",
"value" : null
} ],
"data" : null,
"totalPages" : 0,
"totalElements" : 0
}
Logs :
Assertion [@StatusCode == 401 OR @StatusCode == 403] resolved-to [200 == 401 OR 200 == 403] result [Failed]
--- FX Bot --- | non_defect | project job default env default category null tags null severity null region fxlabs us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options content type transfer encoding date endpoint request response requestid none requesttime errors true messages type error key value null data null totalpages totalelements logs assertion resolved to result fx bot | 0 |
89,487 | 10,601,594,941 | IssuesEvent | 2019-10-10 12:40:44 | cornellius-gp/gpytorch | https://api.github.com/repos/cornellius-gp/gpytorch | closed | [Docs] Change uses of WhitenedVariationalStrategy in tutorials to VariationalStrategy | documentation | Change uses of WhitenedVariationalStrategy in tutorials to VariationalStrategy | 1.0 | [Docs] Change uses of WhitenedVariationalStrategy in tutorials to VariationalStrategy - Change uses of WhitenedVariationalStrategy in tutorials to VariationalStrategy | non_defect | change uses of whitenedvariationalstrategy in tutorials to variationalstrategy change uses of whitenedvariationalstrategy in tutorials to variationalstrategy | 0 |
90,520 | 18,166,708,446 | IssuesEvent | 2021-09-27 15:16:58 | wazuh/wazuh | https://api.github.com/repos/wazuh/wazuh | closed | Add changes from 4.1 to 4.2 in ruleset to changelog.md | threatintel sca rules decoders | |Wazuh version| Component | Action type |
|---| --- | --- |
| 4.2 | Rules/Decoders | changelog |
Add all the changes made from [4.1 to 4.2](https://github.com/wazuh/wazuh-ruleset/pull/852/files) to changelog.md | 1.0 | Add changes from 4.1 to 4.2 in ruleset to changelog.md - |Wazuh version| Component | Action type |
|---| --- | --- |
| 4.2 | Rules/Decoders | changelog |
Add all the changes made from [4.1 to 4.2](https://github.com/wazuh/wazuh-ruleset/pull/852/files) to changelog.md | non_defect | add changes from to in ruleset to changelog md wazuh version component action type rules decoders changelog add all the changes made from to changelog md | 0 |
17,302 | 2,998,204,570 | IssuesEvent | 2015-07-23 12:53:34 | bardsoftware/ganttproject | https://api.github.com/repos/bardsoftware/ganttproject | closed | Resource Chart is out of sync with Gantt chart after task is deleted | auto-migrated Resources Tasks Type-Defect __Target-Ostrava | ```
What steps will reproduce the problem?
1. Create 2 resources in resources tab
2. Create 2 tasks in Gantt tab
3. Allocate both resources to both the tasks
4. Delete one of the tasks
5. Switch to Resources Chart - it should show both tasks under each resources
What is the expected output? What do you see instead?
Once task is deleted from project, all the resources that has been allocated to
the task should get released. Resource chart view should not display deleted
task under any of resources.
What version of the product are you using? On what operating system?
GanttProject 2.7 Ostrava (build 1891)
Windows 7 Enterprise edition
Please provide any additional information below.
java version "1.7.0_75"
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)
```
Original issue reported on code.google.com by `kvas...@gmail.com` on 30 Mar 2015 at 9:05 | 1.0 | Resource Chart is out of sync with Gantt chart after task is deleted - ```
What steps will reproduce the problem?
1. Create 2 resources in resources tab
2. Create 2 tasks in Gantt tab
3. Allocate both resources to both the tasks
4. Delete one of the tasks
5. Switch to Resources Chart - it should show both tasks under each resources
What is the expected output? What do you see instead?
Once task is deleted from project, all the resources that has been allocated to
the task should get released. Resource chart view should not display deleted
task under any of resources.
What version of the product are you using? On what operating system?
GanttProject 2.7 Ostrava (build 1891)
Windows 7 Enterprise edition
Please provide any additional information below.
java version "1.7.0_75"
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)
```
Original issue reported on code.google.com by `kvas...@gmail.com` on 30 Mar 2015 at 9:05 | defect | resource chart is out of sync with gantt chart after task is deleted what steps will reproduce the problem create resources in resources tab create tasks in gantt tab allocate both resources to both the tasks delete one of the tasks switch to resources chart it should show both tasks under each resources what is the expected output what do you see instead once task is deleted from project all the resources that has been allocated to the task should get released resource chart view should not display deleted task under any of resources what version of the product are you using on what operating system ganttproject ostrava build windows enterprise edition please provide any additional information below java version java tm se runtime environment build java hotspot tm bit server vm build mixed mode original issue reported on code google com by kvas gmail com on mar at | 1 |
160,784 | 6,102,505,243 | IssuesEvent | 2017-06-20 16:38:05 | Mary2424/FiPCE | https://api.github.com/repos/Mary2424/FiPCE | closed | Subtotal table by one or more columns | High Priority | User identifies one or more columns (could be columns currently sorted by)
Subtotal lines would be added for each unique combination of the identified fields for summable fields only. | 1.0 | Subtotal table by one or more columns - User identifies one or more columns (could be columns currently sorted by)
Subtotal lines would be added for each unique combination of the identified fields for summable fields only. | non_defect | subtotal table by one or more columns user identifies one or more columns could be columns currently sorted by subtotal lines would be added for each unique combination of the identified fields for summable fields only | 0 |
104,683 | 22,742,233,323 | IssuesEvent | 2022-07-07 05:37:33 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | closed | [Bug]: Header and params get added into Authenticated API when the value is deleted via keyboard | Bug API pane Low Needs Triaging BE Coders Pod | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
On deleting Header and Params values via keyboard keys, it is observed that on save the empty fields are still displayed to the user
### Steps To Reproduce
1) navigate to Auth API
2) Add the header and Params
3) Add the bearer token
4) now navigate to header and Params
5) Delete the data in the fields using the keyboard
6) Click on the Save option
Expectation :
On deleting the data in the header and Params fields using the keyboard the empty fields must not be displayed to the user

### Public Sample App
_No response_
### Version
Cloud | 1.0 | [Bug]: Header and params get added into Authenticated API when the value is deleted via keyboard - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
On deleting Header and Params values via keyboard keys, it is observed that on save the empty fields are still displayed to the user
### Steps To Reproduce
1) navigate to Auth API
2) Add the header and Params
3) Add the bearer token
4) now navigate to header and Params
5) Delete the data in the fields using the keyboard
6) Click on the Save option
Expectation :
On deleting the data in the header and Params fields using the keyboard the empty fields must not be displayed to the user

### Public Sample App
_No response_
### Version
Cloud | non_defect | header and params get added into authenticated api when the value is deleted via keyboard is there an existing issue for this i have searched the existing issues description on deleting header and params via keyboard keys it is observed that the on save and empty fields are displayed to the user steps to reproduce navigate to auth api add the header and params add a the bearer token now navigate to header and params delete the data in the fields using the keyboard click on the save option expectation on deleting the data in the header and params fields using the keyboard the empty fields must not be displayed to the user public sample app no response version cloud | 0 |
24,859 | 12,178,689,317 | IssuesEvent | 2020-04-28 09:25:09 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Get error at step "Give NETWORK SERVICE access to the certificate's private key" | Pri2 cxp product-question service-fabric/svc triaged | Following the steps to enable https endpoint.
After deploy the app to Azure service fabric cluster in development environment, when SetupEntryPoint is hit to run Setup.bat, got error:
.\SetCertAccess.ps1 : The term '.\SetCertAccess.ps1' is not recognized as the name of a cmdlet, function, script file, or.....
It seems the ps1 file could not be found by powershell cmd. I looked at the ps1 file location at the cluster node, it is under "SvcFab/_App/Application***/Service***pkgCode/", same directory as the Setup.bat file.
What is wrong?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3ac102a5-e9ad-0046-f217-446f1ad263c5
* Version Independent ID: a47523e2-0c30-dee4-0037-5f58836fa37f
* Content: [Add an HTTPS endpoint using Kestrel - Azure Service Fabric](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-tutorial-dotnet-app-enable-https-endpoint)
* Content Source: [articles/service-fabric/service-fabric-tutorial-dotnet-app-enable-https-endpoint.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric/service-fabric-tutorial-dotnet-app-enable-https-endpoint.md)
* Service: **service-fabric**
* GitHub Login: @athinanthny
* Microsoft Alias: **atsenthi** | 1.0 | Get error at step "Give NETWORK SERVICE access to the certificate's private key" - Following the steps to enable https endpoint.
After deploy the app to Azure service fabric cluster in development environment, when SetupEntryPoint is hit to run Setup.bat, got error:
.\SetCertAccess.ps1 : The term '.\SetCertAccess.ps1' is not recognized as the name of a cmdlet, function, script file, or.....
It seems the ps1 file could not be found by powershell cmd. I looked at the ps1 file location at the cluster node, it is under "SvcFab/_App/Application***/Service***pkgCode/", same directory as the Setup.bat file.
What is wrong?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3ac102a5-e9ad-0046-f217-446f1ad263c5
* Version Independent ID: a47523e2-0c30-dee4-0037-5f58836fa37f
* Content: [Add an HTTPS endpoint using Kestrel - Azure Service Fabric](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-tutorial-dotnet-app-enable-https-endpoint)
* Content Source: [articles/service-fabric/service-fabric-tutorial-dotnet-app-enable-https-endpoint.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric/service-fabric-tutorial-dotnet-app-enable-https-endpoint.md)
* Service: **service-fabric**
* GitHub Login: @athinanthny
* Microsoft Alias: **atsenthi** | non_defect | get error at step give network service access to the certificate s private key following the steps to enable https endpoint after deploy the app to azure service fabric cluster in development environment when setupentrypoint is hit to run setup bat got error setcertaccess the term setcertaccess is not recognized as the name of a cmdlet function script file or it seems the file could not be found by powershell cmd i looked at the file location at the cluster node it is under svcfab app application service pkgcode same directory as the setup bat file what is wrong document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service service fabric github login athinanthny microsoft alias atsenthi | 0 |
191,305 | 6,827,868,407 | IssuesEvent | 2017-11-08 18:27:31 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | opened | [studio] Publishing mechanics updates | enhancement priority: high | Replace cherry picking with checking out specified commit id for published content | 1.0 | [studio] Publishing mechanics updates - Replace cherry picking with checking out specified commit id for published content | non_defect | publishing mechanics updates replace cherry picking with checking out specified commit id for published content | 0 |
28,164 | 13,551,331,691 | IssuesEvent | 2020-09-17 10:56:21 | status-im/status-react | https://api.github.com/repos/status-im/status-react | closed | Update to reagent 0.8 | performance | # Problem
In many components we need to return multiple DOM nodes at once. To achieve that we use a wrapping view, but that is inefficient because it requires the creation of superfluous DOM nodes.
## Implementation
React has a solution for that named fragments
Reagent 0.8 introduces a syntax to use react fragments
Updating reagent will allow us to use that syntax: https://github.com/reagent-project/reagent/blob/master/doc/ReactFeatures.md#fragments
## Acceptance Criteria
Reagent is updated | True | Update to reagent 0.8 - # Problem
In many components we need to return multiple DOM nodes at once; to achieve that we use a wrapping view, but that is inefficient because it requires the creation of superfluous DOM nodes.
## Implementation
React has a solution for that named fragments
Reagent 0.8 introduces a syntax to use react fragments
Updating reagent will allow us to use that syntax: https://github.com/reagent-project/reagent/blob/master/doc/ReactFeatures.md#fragments
## Acceptance Criteria
Reagent is updated | non_defect | update to reagent problem in many components we need to return multiple dom nodes at once to achieve that we use a warpping view but that is inneficient because it requires creation of a superfluous dom nodes implementation react has a solution for that named fragments reagent introduces a syntax to use react fragments updating reagent will allow use to use that syntax acceptance criteria reagent is updated | 0 |
4,072 | 2,610,086,920 | IssuesEvent | 2015-02-26 18:26:24 | chrsmith/dsdsdaadf | https://api.github.com/repos/chrsmith/dsdsdaadf | opened | 深圳瓷肌祛痘印效果如何 | auto-migrated Priority-Medium Type-Defect | ```
深圳瓷肌祛痘印效果如何【深圳韩方科颜全国热线400-869-1818��
�24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以��
�国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品�
��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反
弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国��
�专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸�
��的痘痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:16 | 1.0 | 深圳瓷肌祛痘印效果如何 - ```
深圳瓷肌祛痘印效果如何【深圳韩方科颜全国热线400-869-1818��
�24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以��
�国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品�
��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反
弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国��
�专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸�
��的痘痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:16 | defect | 深圳瓷肌祛痘印效果如何 深圳瓷肌祛痘印效果如何【 �� � 】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 original issue reported on code google com by szft com on may at | 1 |
61,832 | 17,023,788,659 | IssuesEvent | 2021-07-03 03:51:40 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | OSM layers drop down menu | Component: website Priority: minor Resolution: wontfix Type: defect | **[Submitted to the original trac issue database at 4.08pm, Thursday, 29th March 2012]**
Referring to the new layers drop down menu.
The new menu is rather large and rolls back after changing layer rather than being "switched on/off". The former + was quite discrete and had the advantage of either being open or closed and didn't open when cursor passed over as the new menu does. The new menu is also large and can't be hidden; it is distracting.
Is there a chance of rolling the change back or contributing to OSM but from another website which would have a different interface? | 1.0 | OSM layers drop down menu - **[Submitted to the original trac issue database at 4.08pm, Thursday, 29th March 2012]**
Referring to the new layers drop down menu.
The new menu is rather large and rolls back after changing layer rather than being "switched on/off". The former + was quite discrete and had the advantage of either being open or closed and didn't open when cursor passed over as the new menu does. The new menu is also large and can't be hidden; it is distracting.
Is there a chance of rolling the change back or contributing to OSM but from another website which would have a different interface? | defect | osm layers drop down menu referring to the new layers drop down menu the new menu is rather large and rolls back after changing layer rather than being switched on off the former was quite discrete and had the advantage of either being open or closed and didn t open when cursor passed over as the new menu does the new menu is also large and can t be hidden it is distracting is there a chance of rolling the change back or contributing to osm but from another website which would have a different interface | 1 |
120,969 | 17,644,543,834 | IssuesEvent | 2021-08-20 02:43:32 | Killy85/game_ai_trainer | https://api.github.com/repos/Killy85/game_ai_trainer | opened | CVE-2021-29525 (High) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl | security vulnerability | ## CVE-2021-29525 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a division by 0 in `tf.raw_ops.Conv2DBackpropInput`. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/b40060c9f697b044e3107917c797ba052f4506ab/tensorflow/core/kernels/conv_grad_input_ops.h#L625-L655) does a division by a quantity that is controlled by the caller. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29525>CVE-2021-29525</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-xm2v-8rrw-w9pm">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-xm2v-8rrw-w9pm</a></p>
<p>Release Date: 2021-05-14</p>
<p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-29525 (High) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2021-29525 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger a division by 0 in `tf.raw_ops.Conv2DBackpropInput`. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/b40060c9f697b044e3107917c797ba052f4506ab/tensorflow/core/kernels/conv_grad_input_ops.h#L625-L655) does a division by a quantity that is controlled by the caller. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29525>CVE-2021-29525</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-xm2v-8rrw-w9pm">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-xm2v-8rrw-w9pm</a></p>
<p>Release Date: 2021-05-14</p>
<p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in tensorflow whl cve high severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href dependency hierarchy x tensorflow whl vulnerable library vulnerability details tensorflow is an end to end open source platform for machine learning an attacker can trigger a division by in tf raw ops this is because the implementation does a division by a quantity that is controlled by the caller the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource | 0 |
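The class of bug this CVE record describes — a division by a quantity the caller controls, with no zero check before the divide — and the usual guard can be shown in a few lines. This is a generic Python sketch, not TensorFlow's kernel code; `grad_input_rows` and its parameters are hypothetical stand-ins:

```python
def grad_input_rows(output_rows, stride):
    """Toy stand-in for a backprop shape computation that divides
    by a caller-controlled value, as in the CVE above."""
    if stride == 0:
        # The usual fix: validate caller-controlled divisors up front
        # instead of letting the division fault or poison later math.
        raise ValueError("stride must be nonzero")
    return output_rows // stride
```

The fix referenced in the advisory follows the same shape: reject the degenerate input before it reaches the division.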
53,698 | 13,262,113,559 | IssuesEvent | 2020-08-20 21:07:46 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | [spline-reco] GetLogLikelihood_testWithFakePulses uses nonexistent geometry (Trac #1900) | Migrated from Trac combo reconstruction defect | The test GetLogLikelihood_testWithFakePulses creates pulses on OMKeys (1,1), (2,2), (3,3), (4,4), (5,5), and (10,5), but the test data used has a geometry with only one string, numbered (-1). The test happens to pass because I3SplineRecoLikelihood doesn't check whether the entries it looks up in the geometry actually exist.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1900">https://code.icecube.wisc.edu/projects/icecube/ticket/1900</a>, reported by jvansantenand owned by gmaggi</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:30",
"_ts": "1550067210114669",
"description": "The test GetLogLikelihood_testWithFakePulses creates pulses on OMKeys (1,1), (2,2), (3,3), (4,4), (5,5), and (10,5), but the test data used has a geometry with only one string, numbered (-1). The test happens to pass because I3SplineRecoLikelihood doesn't check whether the entries it looks up in the geometry actually exist.",
"reporter": "jvansanten",
"cc": "",
"resolution": "fixed",
"time": "2016-10-17T18:51:17",
"component": "combo reconstruction",
"summary": "[spline-reco] GetLogLikelihood_testWithFakePulses uses nonexistent geometry",
"priority": "blocker",
"keywords": "",
"milestone": "",
"owner": "gmaggi",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [spline-reco] GetLogLikelihood_testWithFakePulses uses nonexistent geometry (Trac #1900) - The test GetLogLikelihood_testWithFakePulses creates pulses on OMKeys (1,1), (2,2), (3,3), (4,4), (5,5), and (10,5), but the test data used has a geometry with only one string, numbered (-1). The test happens to pass because I3SplineRecoLikelihood doesn't check whether the entries it looks up in the geometry actually exist.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1900">https://code.icecube.wisc.edu/projects/icecube/ticket/1900</a>, reported by jvansantenand owned by gmaggi</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:30",
"_ts": "1550067210114669",
"description": "The test GetLogLikelihood_testWithFakePulses creates pulses on OMKeys (1,1), (2,2), (3,3), (4,4), (5,5), and (10,5), but the test data used has a geometry with only one string, numbered (-1). The test happens to pass because I3SplineRecoLikelihood doesn't check whether the entries it looks up in the geometry actually exist.",
"reporter": "jvansanten",
"cc": "",
"resolution": "fixed",
"time": "2016-10-17T18:51:17",
"component": "combo reconstruction",
"summary": "[spline-reco] GetLogLikelihood_testWithFakePulses uses nonexistent geometry",
"priority": "blocker",
"keywords": "",
"milestone": "",
"owner": "gmaggi",
"type": "defect"
}
```
</p>
</details>
| defect | getloglikelihood testwithfakepulses uses nonexistent geometry trac the test getloglikelihood testwithfakepulses creates pulses on omkeys and but the test data used has a geometry with only one string numbered the test happens to pass because doesn t check whether the entries it looks up in the geometry actually exist migrated from json status closed changetime ts description the test getloglikelihood testwithfakepulses creates pulses on omkeys and but the test data used has a geometry with only one string numbered the test happens to pass because doesn t check whether the entries it looks up in the geometry actually exist reporter jvansanten cc resolution fixed time component combo reconstruction summary getloglikelihood testwithfakepulses uses nonexistent geometry priority blocker keywords milestone owner gmaggi type defect | 1 |
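The failure mode in the record above — a test that passes only because missing geometry entries are looked up leniently — can be illustrated with a small Python sketch. The key tuples mirror the OMKeys and one-string geometry the report describes, but the lookup helpers are hypothetical, not the actual I3SplineRecoLikelihood API:

```python
# Hypothetical one-string geometry, as in the test data described
# above: only string -1 exists, but the fake pulses use other strings.
geometry = {(-1, om): {"z": -10.0 * om} for om in range(1, 61)}
pulse_keys = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (10, 5)]

def lenient_lookup(geo, key):
    # Returns a default entry for missing keys: the test "passes"
    # even though every pulse key is absent from the geometry.
    return geo.get(key, {"z": 0.0})

def strict_lookup(geo, key):
    # Raises KeyError for missing keys, exposing the bad test setup.
    return geo[key]

missing = [k for k in pulse_keys if k not in geometry]
```

With the strict variant, the first lookup fails immediately, which is the behavior that would have caught the mismatched fixture.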
15,894 | 2,869,090,301 | IssuesEvent | 2015-06-05 23:15:21 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | polymer_expressions should support Bindable | Area-Pkg Pkg-PolymerExpressions PolymerMilestone-Next Priority-Low Triaged Type-Defect | *This issue was originally filed by @jolleekin*
_____
FACTS
- PathObserver uses '.' to separate path segments
- PathObserver is supposed to support map indexer
- PathObserver converts path segments into integer or Symbol.
PROBLEMS
import 'package:observe/observe.dart';
var model = new ObservableMap.from({
'123': 'does not work as key is intepreted as a List index',
'!@#': 'does not work as key contains invalid characters for a Symbol',
'a.b': 'does not work as key contains "."',
'[]=': 'does not work although key is a valid Symbol pattern (an operator)!!!',
'abc': 'works as key a valid Symbol pattern',
'var': 'works although key is not a valid Symbol pattern (it is a keyword)!!!'
});
main() {
for (var key in model.keys) {
var obs = new PathObserver(model, key);
print(new Symbol(key));
obs.changes.listen((_) {
print('model["$key"] changed to "${obs.value}"');
});
}
for (var key in model.keys) {
model[key] = 'Dart';
}
}
PRODUCT VERSION
- package observe v0.9.3
| 1.0 | polymer_expressions should support Bindable - *This issue was originally filed by @jolleekin*
_____
FACTS
- PathObserver uses '.' to separate path segments
- PathObserver is supposed to support map indexer
- PathObserver converts path segments into integer or Symbol.
PROBLEMS
import 'package:observe/observe.dart';
var model = new ObservableMap.from({
'123': 'does not work as key is intepreted as a List index',
'!@#': 'does not work as key contains invalid characters for a Symbol',
'a.b': 'does not work as key contains "."',
'[]=': 'does not work although key is a valid Symbol pattern (an operator)!!!',
'abc': 'works as key a valid Symbol pattern',
'var': 'works although key is not a valid Symbol pattern (it is a keyword)!!!'
});
main() {
for (var key in model.keys) {
var obs = new PathObserver(model, key);
print(new Symbol(key));
obs.changes.listen((_) {
print('model["$key"] changed to "${obs.value}"');
});
}
for (var key in model.keys) {
model[key] = 'Dart';
}
}
PRODUCT VERSION
- package observe v0.9.3
| defect | polymer expressions should support bindable this issue was originally filed by jolleekin facts pathobserver uses to separate path segments pathobserver is supposed to support map indexer pathobserver converts path segments into integer or symbol problems nbsp nbsp nbsp nbsp import package observe observe dart nbsp nbsp nbsp nbsp var model new observablemap from nbsp nbsp nbsp nbsp nbsp nbsp does not work as key is intepreted as a list index nbsp nbsp nbsp nbsp nbsp nbsp does not work as key contains invalid characters for a symbol nbsp nbsp nbsp nbsp nbsp nbsp a b does not work as key contains quot quot nbsp nbsp nbsp nbsp nbsp nbsp does not work although key is a valid symbol pattern an operator nbsp nbsp nbsp nbsp nbsp nbsp abc works as key a valid symbol pattern nbsp nbsp nbsp nbsp nbsp nbsp var works although key is not a valid symbol pattern it is a keyword nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp main nbsp nbsp nbsp nbsp nbsp nbsp for var key in model keys nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp var obs new pathobserver model key nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp print new symbol key nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp obs changes listen nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp print model changed to quot obs value quot nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp for var key in model keys nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp model dart nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp product version package observe | 1 |
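The segment rules the reporter lists can be mimicked in a short Python sketch. This re-implements the described split-on-'.' / integer-or-symbol behavior for illustration only; it is not the actual Dart PathObserver, and the identifier pattern is an assumption (the real Dart rules also involve keywords and operators, which is why `var` and `[]=` behave surprisingly):

```python
import re

# Assumed pattern for a "Symbol-like" segment: a plain identifier.
IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def parse_path(path):
    """Split a path on '.' and classify each segment the way the
    report describes: digit runs become list indexes, identifier-like
    segments become symbols, anything else is rejected."""
    segments = []
    for seg in path.split("."):
        if seg.isdigit():
            segments.append(("index", int(seg)))       # '123' -> list index
        elif IDENTIFIER.match(seg):
            segments.append(("symbol", seg))           # 'abc' -> symbol
        else:
            raise ValueError(f"invalid segment: {seg!r}")  # '!@#'
    return segments
```

This makes the reported problems concrete: the map key `'123'` is reinterpreted as a list index, `'a.b'` is split into two segments and can never address a single key, and non-identifier keys are rejected outright.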
645,240 | 20,999,136,309 | IssuesEvent | 2022-03-29 15:48:02 | decentraland/explorer-desktop | https://api.github.com/repos/decentraland/explorer-desktop | reopened | For many users the initial default fullscreen resolution is super small | bug confirmed high priority | 

We should check this PR to at least try to force the issue to solve it:
https://github.com/decentraland/explorer-desktop/pull/74
Apparently this is only happening with 4k displays (3840x2160 res) | 1.0 | For many users the initial default fullscreen resolution is super small - 

We should check this PR to at least try to force the issue to solve it:
https://github.com/decentraland/explorer-desktop/pull/74
Apparently this is only happening with 4k displays (3840x2160 res) | non_defect | for many users the initial default fullscreen resolution is super small we should check this pr to at least try to force the issue to solve it apparently this is only happening with displays res | 0 |
212,802 | 16,499,202,600 | IssuesEvent | 2021-05-25 13:17:15 | Unity-Technologies/com.unity.perception | https://api.github.com/repos/Unity-Technologies/com.unity.perception | closed | How do I create a SynthDet serialized PyTorch model file? | documentation | After completing the Perception Tutorial, in learning Unity SynthDet on TorchServe (https://github.com/Unity-Technologies/perception-synthdet-torchserve/blob/master/docs/getting-started.md) Torch Server configuration and operation It is completed.
But there is no guide on how to create the synthdet_faster_rcnn.pth file.
I would appreciate it if you let me know. | 1.0 | How do I create a SynthDet serialized PyTorch model file? - After completing the Perception Tutorial, in learning Unity SynthDet on TorchServe (https://github.com/Unity-Technologies/perception-synthdet-torchserve/blob/master/docs/getting-started.md) Torch Server configuration and operation It is completed.
But there is no guide on how to create the synthdet_faster_rcnn.pth file.
I would appreciate it if you let me know. | non_defect | how do i create a synthdet serialized pytorch model file after completing the perception tutorial in learning unity synthdet on torchserve torch server configuration and operation it is completed but there is no guide on how to create the synthdet faster rcnn pth file i would appreciate it if you let me know | 0 |
56,847 | 15,394,302,398 | IssuesEvent | 2021-03-03 17:44:17 | scipy/scipy | https://api.github.com/repos/scipy/scipy | closed | BUG: invgauss.cdf should return the correct value when `mu` is very small. | defect scipy.stats | Currently `invgauss.cdf` returns NaN when mu is too small. This is due to `exp(1 /mu)` blowing up when mu is small (the docs say that this happens for values smaller than 0.0028).
https://github.com/scipy/scipy/blob/fc77ea19923c39618547b4033a9185dd8a3afcc1/scipy/stats/_continuous_distns.py#L3504-L3509
In the expression evaluatating the CDF, the term `_norm_cdf(-fac*(x+mu)/mu)` is zero when mu is very small, so that the CDF evaluates to 1 due to the term `_norm_cdf(fac*(x-mu)/mu)` being approximately 1 for very small `mu`.
I believe that returning `nan` instead of 1 is not the best approach to handle the overflow. The overflow of `exp` does not practically affect the final value of the cdf is cases when `mu` is very small because `_norm_cdf(-fac*(x+mu)/mu)` is zero anyway.
#### Reproducing code example:
```python
In [1]: import numpy as np
In [2]: from scipy.stats import invgauss
In [3]: rng = np.random.RandomState(1)
In [4]: mu = rng.uniform(0., 0.01, size=5)
In [5]: mu
Out[5]:
array([4.17022005e-03, 7.20324493e-03, 1.14374817e-06, 3.02332573e-03,
1.46755891e-03])
In [6]: invgauss.cdf(0.4, mu=mu)
/home/scipy/scipy/stats/_continuous_distns.py:3508: RuntimeWarning: overflow encountered in exp
C1 += np.exp(1.0/mu) * _norm_cdf(-fac*(x+mu)/mu) * np.exp(1.0/mu)
/home/scipy/scipy/stats/_continuous_distns.py:3508: RuntimeWarning: invalid value encountered in multiply
C1 += np.exp(1.0/mu) * _norm_cdf(-fac*(x+mu)/mu) * np.exp(1.0/mu)
Out[6]: array([ 1., 1., nan, 1., 1.])
```
I played around with handling the overflow by setting the value of `exp(1/mu)` to the largest double and then the remainder of the function evaluates to the "correct" value, which is 1.
```python
In [4]: mu = np.random.uniform(0., 0.01, size=5)
In [5]: invgauss.cdf(0.4, mu=mu)
Out[5]: array([1., 1., 1., 1., 1.])
In [6]: mu
Out[6]: array([0.0006815 , 0.00685858, 0.00949644, 0.00324687, 0.00621239])
In [7]: rng = np.random.RandomState(1)
In [8]: mu = rng.uniform(0., 0.01, size=5)
In [9]: mu
Out[9]:
array([4.17022005e-03, 7.20324493e-03, 1.14374817e-06, 3.02332573e-03,
1.46755891e-03])
In [10]: invgauss.cdf(0.4, mu=mu)
Out[10]: array([1., 1., 1., 1., 1.])
```
Here is the code snippet:
https://github.com/scipy/scipy/compare/master...zoj613:invgauss
I was wondering what does everyone think regarding changing the current behavior of the function.
```
1.7.0.dev0+fc77ea1 1.20.1 sys.version_info(major=3, minor=8, micro=6, releaselevel='final', serial=0)
```
| 1.0 | BUG: invgauss.cdf should return the correct value when `mu` is very small. - Currently `invgauss.cdf` returns NaN when mu is too small. This is due to `exp(1 /mu)` blowing up when mu is small (the docs say that this happens for values smaller than 0.0028).
https://github.com/scipy/scipy/blob/fc77ea19923c39618547b4033a9185dd8a3afcc1/scipy/stats/_continuous_distns.py#L3504-L3509
In the expression evaluatating the CDF, the term `_norm_cdf(-fac*(x+mu)/mu)` is zero when mu is very small, so that the CDF evaluates to 1 due to the term `_norm_cdf(fac*(x-mu)/mu)` being approximately 1 for very small `mu`.
I believe that returning `nan` instead of 1 is not the best approach to handle the overflow. The overflow of `exp` does not practically affect the final value of the cdf is cases when `mu` is very small because `_norm_cdf(-fac*(x+mu)/mu)` is zero anyway.
#### Reproducing code example:
```python
In [1]: import numpy as np
In [2]: from scipy.stats import invgauss
In [3]: rng = np.random.RandomState(1)
In [4]: mu = rng.uniform(0., 0.01, size=5)
In [5]: mu
Out[5]:
array([4.17022005e-03, 7.20324493e-03, 1.14374817e-06, 3.02332573e-03,
1.46755891e-03])
In [6]: invgauss.cdf(0.4, mu=mu)
/home/scipy/scipy/stats/_continuous_distns.py:3508: RuntimeWarning: overflow encountered in exp
C1 += np.exp(1.0/mu) * _norm_cdf(-fac*(x+mu)/mu) * np.exp(1.0/mu)
/home/scipy/scipy/stats/_continuous_distns.py:3508: RuntimeWarning: invalid value encountered in multiply
C1 += np.exp(1.0/mu) * _norm_cdf(-fac*(x+mu)/mu) * np.exp(1.0/mu)
Out[6]: array([ 1., 1., nan, 1., 1.])
```
I played around with handling the overflow by setting the value of `exp(1/mu)` to the largest double and then the remainder of the function evaluates to the "correct" value, which is 1.
```python
In [4]: mu = np.random.uniform(0., 0.01, size=5)
In [5]: invgauss.cdf(0.4, mu=mu)
Out[5]: array([1., 1., 1., 1., 1.])
In [6]: mu
Out[6]: array([0.0006815 , 0.00685858, 0.00949644, 0.00324687, 0.00621239])
In [7]: rng = np.random.RandomState(1)
In [8]: mu = rng.uniform(0., 0.01, size=5)
In [9]: mu
Out[9]:
array([4.17022005e-03, 7.20324493e-03, 1.14374817e-06, 3.02332573e-03,
1.46755891e-03])
In [10]: invgauss.cdf(0.4, mu=mu)
Out[10]: array([1., 1., 1., 1., 1.])
```
Here is the code snippet:
https://github.com/scipy/scipy/compare/master...zoj613:invgauss
I was wondering what does everyone think regarding changing the current behavior of the function.
```
1.7.0.dev0+fc77ea1 1.20.1 sys.version_info(major=3, minor=8, micro=6, releaselevel='final', serial=0)
```
| defect | bug invgauss cdf should return the correct value when mu is very small currently invgauss cdf returns nan when mu is too small this is due to exp mu blowing up when mu is small the docs say that this happens for values smaller than in the expression evaluatating the cdf the term norm cdf fac x mu mu is zero when mu is very small so that the cdf evaluates to due to the term norm cdf fac x mu mu being approximately for very small mu i believe that returning nan instead of is not the best approach to handle the overflow the overflow of exp does not practically affect the final value of the cdf is cases when mu is very small because norm cdf fac x mu mu is zero anyway reproducing code example python in import numpy as np in from scipy stats import invgauss in rng np random randomstate in mu rng uniform size in mu out array in invgauss cdf mu mu home scipy scipy stats continuous distns py runtimewarning overflow encountered in exp np exp mu norm cdf fac x mu mu np exp mu home scipy scipy stats continuous distns py runtimewarning invalid value encountered in multiply np exp mu norm cdf fac x mu mu np exp mu out array i played around with handling the overflow by setting the value of exp mu to the largest double and then the remainder of the function evaluates to the correct value which is python in mu np random uniform size in invgauss cdf mu mu out array in mu out array in rng np random randomstate in mu rng uniform size in mu out array in invgauss cdf mu mu out array here is the code snippet i was wondering what does everyone think regarding changing the current behavior of the function sys version info major minor micro releaselevel final serial | 1 |
36,378 | 7,920,373,942 | IssuesEvent | 2018-07-04 23:49:23 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | FixtureManager::_setupTable not respecting $drop argument | Defect Need more information fixtures testing | This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.6.
### What I did
Tried to create shell script that will load fixtures using FixtureManager. Main goal of this script was to load data to tables in database without dropping them. So I did:
```php
// getting model name as argument
$modelName = $this->args[0];
// getting instance of FixtureManager
$this->fixtureManager = new FixtureManager();
// loads fixtures
$this->fixtureManager->fixturize($this);
// setting database connection with default database and without alias
$db = ConnectionManager::get('default', false);
// loading fixture with args: model name, connection to database, dropping tables set to false
$this->fixtureManager->loadSingle($modelName, $db, $dropTables = false);
```
During debugging we have found issue in these lines:
```php
$hasSchema = $fixture instanceof TableSchemaAwareInterface && $fixture->getTableSchema() instanceof TableSchema;
if (($drop && $exists) || ($exists && !$isFixtureSetup && $hasSchema)) {
$fixture->drop($db);
$fixture->create($db);
}
```
Source: https://github.com/cakephp/cakephp/blob/a3c937712d6617ca5c65acc4cde3d91a7e9698a4/src/TestSuite/Fixture/FixtureManager.php#L255
### What happened
Loading fixtures itself works fine, but I noticed that despite setting dropping tables to false it still does that. First, FixtureManager drops the table, then it creates it, and after those two steps it finally pushes records to the database.
### What was expected
What I expected was to load records from fixtures without dropping tables when drop argument was set to false.
| 1.0 | FixtureManager::_setupTable not respecting $drop argument - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.6.
### What I did
Tried to create shell script that will load fixtures using FixtureManager. Main goal of this script was to load data to tables in database without dropping them. So I did:
```php
// getting model name as argument
$modelName = $this->args[0];
// getting instance of FixtureManager
$this->fixtureManager = new FixtureManager();
// loads fixtures
$this->fixtureManager->fixturize($this);
// setting database connection with default database and without alias
$db = ConnectionManager::get('default', false);
// loading fixture with args: model name, connection to database, dropping tables set to false
$this->fixtureManager->loadSingle($modelName, $db, $dropTables = false);
```
During debugging we have found issue in these lines:
```php
$hasSchema = $fixture instanceof TableSchemaAwareInterface && $fixture->getTableSchema() instanceof TableSchema;
if (($drop && $exists) || ($exists && !$isFixtureSetup && $hasSchema)) {
$fixture->drop($db);
$fixture->create($db);
}
```
Source: https://github.com/cakephp/cakephp/blob/a3c937712d6617ca5c65acc4cde3d91a7e9698a4/src/TestSuite/Fixture/FixtureManager.php#L255
### What happened
Loading fixtures itself works fine, but I noticed that despite setting dropping tables to false it still does that. First, FixtureManager drops the table, then it creates it, and after those two steps it finally pushes records to the database.
### What was expected
What I expected was to load records from fixtures without dropping tables when drop argument was set to false.
| defect | fixturemanager setuptable not respecting drop argument this is a multiple allowed bug enhancement feature discussion rfc cakephp version what i did tried to create shell script that will load fixtures using fixturemanager main goal of this script was to load data to tables in database without dropping them so i did php getting model name as argument modelname this args getting instance of fixturemanager this fixturemanager new fixturemanager loads fixtures this fixturemanager fixturize this setting database connection with default database and without alias db connectionmanager get default false loading fixture with args model name connection to database dropping tables set to false this fixturemanager loadsingle modelname db droptables false during debugging we have found issue in these lines php hasschema fixture instanceof tableschemaawareinterface fixture gettableschema instanceof tableschema if drop exists exists isfixturesetup hasschema fixture drop db fixture create db source what happened loading fixtures as itself works fine but i noticed that despite setting dropping tables to false it still does that firstly fixturemanager drops table then he create it after those two steps it finally pushes records to database what was expected what i expected was to load records from fixtures without dropping tables when drop argument was set to false | 1 |
66,823 | 20,712,108,964 | IssuesEvent | 2022-03-12 03:41:31 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | opened | Content type label should not be linked on Section listing pages | Defect Needs refining | ## Describe the defect
This is causing a lot of access denied errors.
## To Reproduce
1. As any user, go to a section page like https://staging.cms.va.gov/section/vamc-facilities
2. See that you can click on the content type
## Expected behavior
You shouldn't be able to click on the content type.
## Screenshots
<details><summary>Screenshot</summary>
<img width="1262" alt="VAMC_facilities___VA_gov_CMS" src="https://user-images.githubusercontent.com/643678/158002365-169c516c-2978-42df-bcf3-1284aabc095f.png">
</details>
## Additional context
Add any other context about the problem here. Reach out to the Product Managers to determine if it should be escalated as critical (prevents users from accomplishing their work with no known workaround and needs to be addressed within 2 business days).
## Desktop (please complete the following information if relevant, or delete)
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
## Labels
(You can delete this section once it's complete)
- [x] Issue type (red) (defaults to "Defect")
- [ ] CMS subsystem (green)
- [ ] CMS practice area (blue)
- [x] CMS workstream (orange) (not needed for bug tickets)
- [ ] CMS-supported product (black)
### CMS Team
Please check the team(s) that will do this work.
- [ ] `CMS Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide CMS Team ` (leave Sitewide unchecked and check the specific team instead)
- [x] `⭐️ Content ops`
- [ ] `⭐️ CMS experience`
- [ ] `⭐️ Offices`
- [ ] `⭐️ Product support`
- [ ] `⭐️ User support`
| 1.0 | defect | 1
369,316 | 10,895,426,186 | IssuesEvent | 2019-11-19 10:37:44 | OpenSRP/opensrp-client-chw-anc | https://api.github.com/repos/OpenSRP/opensrp-client-chw-anc | closed | Exclusive breastfeeding task in PNC is not working correctly | BA-specific High Priority | - [x] When No is selected, No should appear in the text underneath the completed task
- [x] When Yes is selected, Yes should appear in the text underneath the completed task
Currently, the opposite is happening: when I select No or Yes, the opposite (Yes or No, respectively) appears.
To replicate,
1. Do a PNC home visit
2. Select Yes in exclusive breastfeeding
3. Save
4. Text appears No with yellow task coloring - the opposite should occur | 1.0 | non_defect | 0