Dataset preview (Hugging Face dataset-viewer style summary; "N classes" means the number of distinct values, "length a to b" means min and max string length):

| Column | Dtype | Range / distinct values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 to 19 |
| repo | string | length 5 to 112 |
| repo_url | string | length 34 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 757 |
| labels | string | length 4 to 664 |
| body | string | length 3 to 261k |
| index | string | 10 classes |
| text_combine | string | length 96 to 261k |
| label | string | 2 classes |
| text | string | length 96 to 232k |
| binary_label | int64 | 0 to 1 |
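For orientation, here is a minimal sketch of loading a dump like this and sanity-checking the two label columns with pandas. The file name `issues.csv` is a placeholder; the preview does not name the underlying file.

```python
import pandas as pd

# Hypothetical path; the preview above does not name the actual dataset file.
df = pd.read_csv("issues.csv")

# `label` holds the two string classes seen in the rows below
# (defect / non_defect); `binary_label` is their integer encoding.
print(df["label"].value_counts())
assert set(df["binary_label"].unique()) <= {0, 1}

# In every row shown below, defect pairs with 1 and non_defect with 0.
assert ((df["label"] == "defect") == (df["binary_label"] == 1)).all()
```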
**Row 261,626** · id 22,759,997,044 · IssuesEvent · created 2022-07-07 20:09:14 · repo [astronomer/astro-sdk](https://api.github.com/repos/astronomer/astro-sdk) · action: closed
**Title:** Evaluate the performance of loading datasets with Astro Python SDK 0.9.0 into Postgres
**Labels:** testing
**Body:**
**Dependencies**
* Depends on: #432
* Depends on: #434
* Depends on: #433
* Depends on: #435
**Acceptance criteria**
* Trigger the benchmark to load all the supported (CSV-format, available in GCS) datasets into Postgres (K8s-hosted), exporting the resulting metrics. Use a large worker node. Set timeout to 1h.
* Document the resulting metrics in the Astro SDK repo, within: `tests/benchmark`
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** non_defect
**text:**
evaluate the performance of loading datasets with astro python sdk into postgres dependencies depends on depends on depends on depends on acceptance criteria trigger the benchmark to load all the supported csv format available in gcs datasets into postgres hosted exporting the resulting metrics use a large worker node set timeout to document the resulting metrics in the astro sdk repo within tests benchmark
**binary_label:** 0
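Judging from the row above, the `text` column looks like a normalized copy of `text_combine`: lowercased, with URLs, punctuation, and digit-bearing tokens stripped and whitespace collapsed. Below is a rough sketch of that transform; the dataset's actual cleaning code is not shown, so `clean_text` is a hypothetical reconstruction, not the real pipeline (which, for instance, also drops tokens like "K8s" entirely).

```python
import re

def clean_text(raw: str) -> str:
    """Approximate the apparent text_combine -> text normalization."""
    s = raw.lower()
    s = re.sub(r"https?://\S+", " ", s)  # drop URLs before stripping punctuation
    s = re.sub(r"[^a-z\s]", " ", s)      # keep lowercase letters only
    return " ".join(s.split())           # collapse runs of whitespace

print(clean_text("Evaluate the performance of loading datasets with "
                 "Astro Python SDK 0.9.0 into Postgres"))
# -> "evaluate the performance of loading datasets with astro python sdk into postgres"
```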
**Row 76,397** · id 26,409,143,403 · IssuesEvent · created 2023-01-13 10:39:52 · repo [vector-im/element-call](https://api.github.com/repos/vector-im/element-call) · action: closed
**Title:** Splitbrain: People don't appear in call but EC is using same group call
**Labels:** T-Defect S-Major O-Occasional
**Body:**
### Steps to reproduce
Unknown
### Outcome
#### What did you expect?
Everyone can see each other
#### What happened instead?
We've observed one (so far it's always been one) party who thinks they are in the call and their client has found the same group call in the same room, but a bunch of other people are in the group call and can see each other but not the first party.
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
call.element.io
### Will you send logs?
Yes
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** defect
**text:**
splitbrain people don t appear in call but ec is using same group call steps to reproduce unknown outcome what did you expect everyone can see each other what happened instead we ve observed one so far it s always been one party who thinks they are in the call and their client has found the same group call in the same room but a bunch of other people are in the group call and can see each other but not the first party operating system no response browser information no response url for webapp call element io will you send logs yes
**binary_label:** 1

**Row 22,613** · id 11,761,243,243 · IssuesEvent · created 2020-03-13 21:25:13 · repo [Azure/AppConfiguration](https://api.github.com/repos/Azure/AppConfiguration) · action: closed
**Title:** x-ms-useragent is no longer being accepted by appconfig (customer reported)
**Labels:** service
**Body:**
A customer has reported an issue where appconfig is no longer accepting the x-ms-useragent header. This prevents users from using appconfig in the browser.
This was fixed in #217 but appears to have regressed and is missing again. I see this list of headers when doing an OPTIONS request against my own appconfig server (`x-ms-useragent` is not present):
```
$ curl -XOPTIONS -vv https://[app config server].azconfig.io
```
Headers:
```
< Access-Control-Allow-Headers: DNT, X-CustomHeader, Keep-Alive, User-Agent, X-Requested-With, If-Modified-Since, Cache-Control, Content-Type, Authorization, x-ms-client-request-id, x-ms-content-sha256, x-ms-date, host, Accept, Accept-Datetime, Date, If-Match, If-None-Match, Sync-Token, x-ms-return-client-request-id, ETag, Last-Modified, Link, Memento-Datetime, x-ms-retry-after, x-ms-request-id, WWW-Authenticate
```
Customer issue: https://github.com/Azure/azure-sdk-for-js/issues/7529
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** non_defect
**text:**
x ms useragent is no longer being accepted by appconfig customer reported a customer has reported an issue where appconfig is no longer accepting the x ms useragent header this prevents users from using appconfig in the browser this was fixed in but appears to have regressed and is missing again i see this list of headers when doing an options request against my own appconfig server x ms useragent is not present curl xoptions vv https azconfig io headers access control allow headers dnt x customheader keep alive user agent x requested with if modified since cache control content type authorization x ms client request id x ms content x ms date host accept accept datetime date if match if none match sync token x ms return client request id etag last modified link memento datetime x ms retry after x ms request id www authenticate customer issue
**binary_label:** 0

**Row 186,390** · id 6,735,704,602 · IssuesEvent · created 2017-10-18 23:05:40 · repo [eustasy/howtoelementary.org](https://api.github.com/repos/eustasy/howtoelementary.org) · action: closed
**Title:** Move to a generation 5 server. (jamie@Moriarty)
**Labels:** Priority: Critical Status: Confirmed
**Body:**
- [ ] Location: NYC3
- [ ] Update: Auto
- [ ] Deploy: Auto
- [ ] Ubuntu: 16.04
- [ ] Nginx: Mainline
- [ ] PHP: 7.0
- [ ] SQL: MariaDB 10.1
- [ ] New Relic
- [ ] PHPMyAdmin
- [ ] VSFTPd
- [ ] Fail2Ban
- [ ] Keys & Second Factor with password for SUDO
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/33961172-move-to-a-generation-5-server-jamie-moriarty?utm_campaign=plugin&utm_content=tracker%2F364104&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F364104&utm_medium=issues&utm_source=github).
</bountysource-plugin>
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** non_defect
**text:**
move to a generation server jamie moriarty location update auto deploy auto ubuntu nginx mainline php sql mariadb new relic phpmyadmin vsftpd keys second factor with password for sudo want to back this issue we accept bounties via
**binary_label:** 0

**Row 81,017** · id 30,659,176,539 · IssuesEvent · created 2023-07-25 14:01:13 · repo [hazelcast/hazelcast](https://api.github.com/repos/hazelcast/hazelcast) · action: closed
**Title:** Exponential backoff resets by the AppendResponse from the previous round AppendRequest
**Labels:** Type: Defect Team: Core Type: Perf. Defect Source: Internal Module: CP Subsystem to-jira
**Body:**
The is an issue with exponential backoff (retries) behaviour in the CP subsystem
Explanation:
After sending the `AppendRequest` leader waits for `AppendResponse` for `the backoff_timeout` period, after which it retries the `AppendRequest`
The code assumes that `backoff_timeout` will be increasing exponentially after each `AppendRequest` without a response.
With default 100ms `appendRequestBackoffTimeoutInMillis` it will be: 100, 200, 400, 800, 1600, … ms and this behaviour should slow down retries:
https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/java/com/hazelcast/cp/internal/raft/impl/RaftNodeImpl.java#L756-L757
But there is an issue here.
If the `AppendResponse` which belongs to the previous `AppendRequest` round, arrives, for example, between 800-1600 ms waiting time of the current `AppendRequest`, it resets the `backoff_timeout`.
So we will have the following `backoff_timeouts` numbers:
100, 200, 400, 800, backoff reset, 100, 200, 400, backoff reset, 100, 200, ….
As a result, the back-off almost does not work.
In the MicroRaft there is a fix for this:
there is a way to track for what request we received a response by introducing a new `flowControlSequenceNumber` into AppendRequest/AppendResponse messages to perform matching between them https://github.com/MicroRaft/MicroRaft/blob/master/microraft/src/main/java/io/microraft/impl/state/FollowerState.java#L127-L141
Jira https://hazelcast.atlassian.net/browse/HZ-2600
**index:** 2.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** defect
**text:**
exponential backoff resets by the appendresponse from the previous round appendrequest the is an issue with exponential backoff retries behaviour in the cp subsystem explanation after sending the appendrequest leader waits for appendresponse for the backoff timeout period after which it retries the appendrequest the code assumes that backoff timeout will be increasing exponentially after each appendrequest without a response with default appendrequestbackofftimeoutinmillis it will be … ms and this behaviour should slow down retries but there is an issue here if the appendresponse which belongs to the previous appendrequest round arrives for example between ms waiting time of the current appendrequest it resets the backoff timeout so we will have the following backoff timeouts numbers backoff reset backoff reset … as a result the back off almost does not work in the microraft there is a fix for this there is a way to track for what request we received a response by introducing a new flowcontrolsequencenumber into appendrequest appendresponse messages to perform matching between them jira
**binary_label:** 1

**Row 273,681** · id 20,811,936,867 · IssuesEvent · created 2022-03-18 04:28:51 · repo [SE701-T5/Backend](https://api.github.com/repos/SE701-T5/Backend) · action: closed
**Title:** Fix pull_request_template.md Broken Link
**Labels:** bug documentation
**Body:**
The link to the Contributions Guideline at the bottom of the pull_request_template.md file is broken.
It will be replaced with:
```
For more information, refer to the Contributing Guidelines and Code of Conduct links at the bottom of this page.
```
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** non_defect
**text:**
fix pull request template md broken link the link to the contributions guideline at the bottom of the pull request template md file is broken it will be replaced with for more information refer to the contributing guidelines and code of conduct links at the bottom of this page
**binary_label:** 0

**Row 307,986** · id 26,572,325,295 · IssuesEvent · created 2023-01-21 10:25:33 · repo [heyazoo1007/accountbook](https://api.github.com/repos/heyazoo1007/accountbook) · action: closed
**Title:** [feat] Common exception/response handling, verification email, sign-up, login
**Labels:** ✨feature ✅ Test
**Body:**
## 📌 Feature Issue
<!-- Describe the feature to be implemented. -->
Initial project scaffolding:
* Common exception handling
* An ApiResponse type for uniform API responses
* User sign-up
* Login implementation
## 📝 To-do
<!-- List the tasks to be done. -->
- [x] Implement exception-handling classes (ControllerExceptionHandler, ErrorCode, etc.)
- [x] Implement the ApiResponse class
- [x] Email duplication check
- [x] Send verification email
- [x] User sign-up
- [x] Implement login using JWT tokens
- [x] Service tests
**index:** 1.0
**text_combine:** *(title and body joined with " - ", in the original Korean; verbatim duplicate of the Title and Body above, omitted)*
**label:** non_defect
**text:**
공통된 예외 응답 처리 인증 이메일 회원가입 로그인 📌 feature issue 프로젝트 초반 틀 설정 공통된 예외처리 공통된 api 응답을 위한 apiresponse 사용자 회원가입 로그인 구현 📝 to do 예외처리 클래스 controllerexceptionhandler errorcode 등 구현 apiresponse 클래스 구현 이메일 중복 확인 인증 이메일 전송 사용자 회원가입 jwt 토큰 이용한 로그인 구현 서비스 테스트
**binary_label:** 0

**Row 35,568** · id 7,779,411,036 · IssuesEvent · created 2018-06-05 16:44:15 · repo [HewlettPackard/yoda-demo](https://api.github.com/repos/HewlettPackard/yoda-demo) · action: closed
**Title:** Issue entering new data during weekends.
**Labels:** S3 - Medium T1 - Defect
**Body:**
<p>Lorem ipsum molestie consectetur litora ac leo urna iaculis, mauris neque eget pellentesque consequat suspendisse.</p>
<p>At eu vitae justo pretium ultrices, mi sociosqu ad semper ac proin, eu accumsan tincidunt ac.</p>
<p>Torquent cursus dui viverra ut blandit neque, mi aliquet pharetra vehicula lacinia consectetur phasellus, vivamus aliquam molestie rutrum felis.</p>
<p>Ornare dui nulla consectetur orci arcu aliquet lectus nisl cursus tortor, mi sollicitudin turpis himenaeos sociosqu bibendum molestie per.</p>
> estimate 3
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** defect
**text:**
issue entering new data during weekends lorem ipsum molestie consectetur litora ac leo urna iaculis mauris neque eget pellentesque consequat suspendisse at eu vitae justo pretium ultrices mi sociosqu ad semper ac proin eu accumsan tincidunt ac torquent cursus dui viverra ut blandit neque mi aliquet pharetra vehicula lacinia consectetur phasellus vivamus aliquam molestie rutrum felis ornare dui nulla consectetur orci arcu aliquet lectus nisl cursus tortor mi sollicitudin turpis himenaeos sociosqu bibendum molestie per estimate
**binary_label:** 1

**Row 55,268** · id 3,072,600,730 · IssuesEvent · created 2015-08-19 17:44:56 · repo [RobotiumTech/robotium](https://api.github.com/repos/RobotiumTech/robotium) · action: closed
**Title:** clickOnText can't scroll down ListView in a Fragment in a ViewPager
**Labels:** bug imported invalid Priority-Medium
**Body:**
_From [mrlhwlib...@gmail.com](https://code.google.com/u/107770464206980364909/) on April 14, 2012 09:12:47_
Robotium doesn't work well with a list view that can be scroll down in a ViewPager. What steps will reproduce the problem? 1. Use AnyMemo's APK here https://code.google.com/p/anymemo/downloads/list 2. clickOnText("Misc"); clickOnText("About"); What is the expected output? What do you see instead? The About should be clicked but actually not. What version of the product are you using? On what operating system? 3.1, Android 4.0.3, Android 2.3 Please provide any additional information below.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=247_
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** non_defect
**text:**
clickontext can t scroll down listview in a fragment in a viewpager from on april robotium doesn t work well with a list view that can be scroll down in a viewpager what steps will reproduce the problem use anymemo s apk here clickontext misc clickontext about what is the expected output what do you see instead the about should be clicked but actually not what version of the product are you using on what operating system android android please provide any additional information below original issue
**binary_label:** 0

**Row 78,216** · id 22,155,575,457 · IssuesEvent · created 2022-06-03 22:10:02 · repo [apache/beam](https://api.github.com/repos/apache/beam) · action: opened
**Title:** Support for provided configuration in Intellij Idea
**Labels:** P3 bug build-system
**Body:**
Intellij Idea (2018.2.1) does not pick up provided dependencies.
Imported from Jira [BEAM-5297](https://issues.apache.org/jira/browse/BEAM-5297). Original Jira may contain additional context.
Reported by: dmvk.
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** non_defect
**text:**
support for provided configuration in intellij idea intellij idea does not pick up provided dependencies imported from jira original jira may contain additional context reported by dmvk
**binary_label:** 0

**Row 18,447** · id 3,061,982,597 · IssuesEvent · created 2015-08-16 04:09:58 · repo [eczarny/spectacle](https://api.github.com/repos/eczarny/spectacle) · action: closed
**Title:** If window can't shrink to 1/3, it can't get to the 2/3 size
**Labels:** defect ★★
**Body:**
Cycling through the Left/Right Half sizes should size the window 1/2, 1/3, 2/3, but if the Applications doesn't allow the window to size to 1/3 of the screen, you also won't be able to get to the 2/3 size, which I'm going for. I'm experiencing this behavior in Navicat.
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** defect
**text:**
if window can t shrink to it can t get to the size cycling through the left right half sizes should size the window but if the applications doesn t allow the window to size to of the screen you also won t be able to get to the size which i m going for i m experiencing this behavior in navicat
**binary_label:** 1

**Row 62,417** · id 17,023,919,142 · IssuesEvent · created 2021-07-03 04:33:22 · repo [tomhughes/trac-tickets](https://api.github.com/repos/tomhughes/trac-tickets) · action: closed
**Title:** "%{possible_points} points" Needs plural forms
**Labels:** Component: website Priority: minor Resolution: duplicate Type: defect
**Body:**
**[Submitted to the original trac issue database at 1.11pm, Sunday, 15th March 2015]**
Can you please set the following string to use the plural format?
OSM website
[Wiki]
loaded successfully with %{trace_points} out of a possible %{possible_points} points.
I think it should look like this:
loaded successfully with %{trace_points} out of {{PLURAL|a possible %{possible_points} point|a possible %{possible_points} points}}
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** defect
**text:**
possible points points needs plural forms can you please set the following string to use the plural format osm website loaded successfully with trace points out of a possible possible points points i think it should look like this loaded successfully with trace points out of plural a possible possible points point a possible possible points points
**binary_label:** 1

**Row 26,312** · id 4,676,684,805 · IssuesEvent · created 2016-10-07 12:50:20 · repo [phingofficial/phing-issues-test](https://api.github.com/repos/phingofficial/phing-issues-test) · action: opened
**Title:** User guide - Wrong property use (Trac #12)
**Labels:** defect Incomplete Migration Migrated from Trac
**Body:**
Migrated from https://www.phing.info/trac/ticket/12
```json
{
"status": "closed",
"changetime": "2009-03-22T00:47:53",
"description": "In the user guide PropertyTask example, the properties are inserted into the attributes with the dollar sign after the opening curly bracket, instead of before like it should be:\nhttp://phing.info/docs/guide/current/chapters/appendixes/AppendixB-CoreTasks.html#PropertyTask\n\nI'm new to Phing and this simple mistake made me lose lots of time.",
"reporter": "anonymous",
"cc": "",
"resolution": "fixed",
"_ts": "1237682873000000",
"component": "",
"summary": "User guide - Wrong property use",
"priority": "major",
"keywords": "",
"version": "",
"time": "2006-03-11T22:39:31",
"milestone": "2.2.0",
"owner": "",
"type": "defect"
}
```
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** defect
**text:**
user guide wrong property use trac migrated from json status closed changetime description in the user guide propertytask example the properties are inserted into the attributes with the dollar sign after the opening curly bracket instead of before like it should be n new to phing and this simple mistake made me lose lots of time reporter anonymous cc resolution fixed ts component summary user guide wrong property use priority major keywords version time milestone owner type defect
**binary_label:** 1

**Row 23,824** · id 6,486,545,994 · IssuesEvent · created 2017-08-19 20:45:22 · repo [rye/krye.io](https://api.github.com/repos/rye/krye.io) · action: closed
**Title:** Résumé: Restructure Styles and Distribute SCSS Into Multiple Files
**Labels:** Concerns: Code Style Concerns: Design Priority: Medium Type: Enhancement
**Body:**
This should take place in `resume.scss`. We need to move our styles into a nice structure and create multiple SCSS files for the various components of this.
This will increase the number of requests but also increase the modularity of our assets which will be very beneficial.
Conversely, it would also be acceptable to have our rules just be generalized and made less redundant.
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** non_defect
**text:**
résumé restructure styles and distribute scss into multiple files this should take place in resume scss we need to move our styles into a nice structure and create multiple scss files for the various components of this this will increase the number of requests but also increase the modularity of our assets which will be very beneficial conversely it would also be acceptable to have our rules just be generalized and made less redundant
**binary_label:** 0

**Row 300,140** · id 22,642,890,715 · IssuesEvent · created 2022-07-01 05:17:53 · repo [arcus-azure/arcus.webapi](https://api.github.com/repos/arcus-azure/arcus.webapi) · action: closed
**Title:** Documentation: Authentication samples in docs are no longer up-to-date
**Labels:** enhancement good first issue documentation area:security
**Body:**
The examples that show how the authentication helpers that are defined in Arcus can be used, are not up-to-date.
The samples are still using the `AddMvc` extension method which is replaced since ASP.NET Core 3.x with other methods like `AddControllers`, `AddControllersWithViews` ....
Next to that, I also wonder if the approach with using Filters is still considered as a good practice for security related things ?
(I'm talking about the samples that are shown [here](https://webapi.arcus-azure.net/features/security/auth/jwt), [here](https://webapi.arcus-azure.net/features/security/auth/shared-access-key) and [here](https://webapi.arcus-azure.net/features/security/auth/certificate))
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** non_defect
**text:**
documentation authentication samples in docs are no longer up to date the examples that show how the authentication helpers that are defined in arcus can be used are not up to date the samples are still using the addmvc extension method which is replaced since asp net core x with other methods like addcontrollers addcontrollerswithviews next to that i also wonder if the approach with using filters is still considered as a good practice for security related things i m talking about the samples that are shown and
**binary_label:** 0

**Row 296,499** · id 25,554,176,077 · IssuesEvent · created 2022-11-30 04:14:07 · repo [duckie-team/quack-quack-android](https://api.github.com/repos/duckie-team/quack-quack-android) · action: closed
**Title:** Update the lint test code
**Labels:** lint test
**Body:**
1. `suppress` is used excessively (applied even in places where it is not needed).
2. Most of the errors come from missing imports in the Kotlin dummy code. If the dummy code itself has errors, the test results may be wrong as well.
**index:** 1.0
**text_combine:** *(title and body joined with " - ", in the original Korean; verbatim duplicate of the Title and Body above, omitted)*
**label:** non_defect
**text:**
린트 테스트 코드 업데이트 suppress 가 과하게 사용됐음 필요 없어도 되는 부분까지 사용됨 코틀린 더미 코드에 임폴트가 없어서 에러나는 경우가 대부분임 더미 코드에 에러가 발생하면 테스트 결과 또한 잘못될 가능성이 생김
**binary_label:** 0

**Row 73,405** · id 24,615,117,763 · IssuesEvent · created 2022-10-15 07:49:01 · repo [scipy/scipy](https://api.github.com/repos/scipy/scipy) · action: closed
**Title:** BUG: stats.rayleigh.fit: returns `loc` that is inconsistent with the data
**Labels:** defect scipy.stats
**Body:**
### Describe your issue.
`stats.rayleigh.fit` can return `loc` that is inconsistent with the data whenever `loc` is not fixed.
This is because the `fit` routine is overridden to solve the likelihood equation for the location without considering the constraint that $x - \texttt{loc} > 0$. The solution to the first-order condition can violate this condition, apparently.
Using `root_scalar` instead of `fsolve` might solve the problem. The bracket would allow us to enforce the constraint. ~~If we're lucky and a dumb bracket like `-1e300, np.min(rvs)` always changes sign exactly once, it would probably be faster, too.~~ We're not super lucky, but `root_scalar` does work.
See gh-12968 for history.
### Reproducing Code Example
```python
import numpy as np
from scipy import stats
rng = np.random.default_rng(456)
loc, scale, size = 50, 600, 500
rvs = stats.rayleigh.rvs(loc, scale, size=size, random_state=rng)
loc_fit, scale_fit = stats.rayleigh.fit(rvs)
print(loc_fit, np.min(rvs)) # 318.6486027001913 77.65244273752681
print(stats.rayleigh.nnlf((loc_fit, scale_fit), rvs)) # inf
```
### Error message
```shell
NA
```
### SciPy/NumPy/Python version information
1.10.0.dev0+1943.27521b7 1.23.1 sys.version_info(major=3, minor=10, micro=5, releaselevel='final', serial=0)
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** defect
**text:**
bug stats rayleigh fit returns loc that is inconsistent with the data describe your issue stats rayleigh fit can return loc that is inconsistent with the data whenever loc is not fixed this is because the fit routine is overridden to solve the likelihood equation for the location without considering the constraint that x texttt loc the solution to the first order condition can violate this condition apparently using root scalar instead of fsolve might solve the problem the bracket would allow us to enforce the constraint if we re lucky and a dumb bracket like np min rvs always changes sign exactly once it would probably be faster too we re not super lucky but root scalar does work see gh for history reproducing code example python import numpy as np from scipy import stats rng np random default rng loc scale size rvs stats rayleigh rvs loc scale size size random state rng loc fit scale fit stats rayleigh fit rvs print loc fit np min rvs print stats rayleigh nnlf loc fit scale fit rvs inf error message shell na scipy numpy python version information sys version info major minor micro releaselevel final serial
**binary_label:** 1

**Row 247,712** · id 26,728,781,913 · IssuesEvent · created 2023-01-30 01:05:42 · repo [mTvare6/hello-world.rs](https://api.github.com/repos/mTvare6/hello-world.rs) · action: opened
**Title:** WS-2023-0020 (High) detected in warp-0.2.5.crate
**Labels:** security vulnerability
**Body:**
## WS-2023-0020 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>warp-0.2.5.crate</b></p></summary>
<p>serve the web at warp speeds</p>
<p>Library home page: <a href="https://crates.io/api/v1/crates/warp/0.2.5/download">https://crates.io/api/v1/crates/warp/0.2.5/download</a></p>
<p>
Dependency Hierarchy:
- webdriver-0.44.0.crate (Root Library)
- :x: **warp-0.2.5.crate** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mTvare6/hello-world.rs/commit/a5a175063bd51fcbbce0eaba88d1b9b6ad315911">a5a175063bd51fcbbce0eaba88d1b9b6ad315911</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the crate warp prior to 0.3.3, an improper validation of Windows paths could lead to directory traversal attack.
<p>Publish Date: 2023-01-28
<p>URL: <a href=https://github.com/seanmonstar/warp/pull/997/commits/22ea6dd0057000e1947f5e015a62abed4b5a21be>WS-2023-0020</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://rustsec.org/advisories/RUSTSEC-2022-0082.html">https://rustsec.org/advisories/RUSTSEC-2022-0082.html</a></p>
<p>Release Date: 2023-01-28</p>
<p>Fix Resolution: warp - 0.3.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
**index:** True
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** non_defect
**text:**
ws high detected in warp crate ws high severity vulnerability vulnerable library warp crate serve the web at warp speeds library home page a href dependency hierarchy webdriver crate root library x warp crate vulnerable library found in head commit a href found in base branch master vulnerability details in the crate warp prior to an improper validation of windows paths could lead to directory traversal attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution warp step up your open source security game with mend
**binary_label:** 0

**Row 635** · id 2,577,793,747 · IssuesEvent · created 2015-02-12 19:10:21 · repo [chrsmith/quake2-gwt-port](https://api.github.com/repos/chrsmith/quake2-gwt-port) · action: opened
**Title:** Build fails, unable to convert WAL file (crash in java)
**Labels:** auto-migrated Priority-Medium Type-Defect
**Body:**
```
What steps will reproduce the problem?
1. rm -rf war
2. ant run
What is the expected output? What do you see instead?
Successful build, but build crashes while extracting and converting files.
What version of the product are you using? On what operating system?
revision 39 works, revision 38 does not.
```
-----
Original issue reported on code.google.com by megazzt on 10 Dec 2010 at 3:35
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** defect
**text:**
build fails unable to convert wal file crash in java what steps will reproduce the problem rm rf war ant run what is the expected output what do you see instead successful build but build crashes while extracting and converting files what version of the product are you using on what operating system revision works revision does not original issue reported on code google com by megazzt on dec at
**binary_label:** 1

**Row 9,455** · id 2,615,150,933 · IssuesEvent · created 2015-03-01 06:28:20 · repo [chrsmith/reaver-wps](https://api.github.com/repos/chrsmith/reaver-wps) · action: closed
**Title:** Reaver Skipped Between 73% and 90%
**Labels:** auto-migrated Priority-Triage Type-Defect
**Body:**
```
0. What version of Reaver are you using? (Only defects against the latest
version will be considered.)
1.4
1. What operating system are you using (Linux is the only supported OS)?
Backtrack 5.1
2. Is your wireless card in monitor mode (yes/no)?
Yes
3. What is the signal strength of the Access Point you are trying to crack?
V good....we are in the same room. :o)
4. What is the manufacturer and model # of the device you are trying to
crack?
TalkTalk
5. What is the entire command line string you are supplying to reaver?
reaver -i mon0 -b "My BSSID" -vv
6. Please describe what you think the issue is.
My ignorance ?
I am not sure whether this is some sort of clever optimising thing you have
made or an error.
I tested my router and (very sadly) spent a lot of time watching Reaver work.
I noticed that over time it ran steadily through 0% - 10% - 20% etc to 73% and
then it suddenly jumped to 90% ! 0% to 73% took about 9 hours but the leap
from 73% to 90% was seconds.
Fortunately for me my Pin started with an 8 so I allowed it to continue as I
could see that the numbers being tested were close.
Reaver worked and found my Pin but I was wondering why it missed out between
73% and 89% entirely ?
```
Original issue reported on code.google.com by `keyfo...@veryrealemail.com` on 22 Jan 2012 at 4:56
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** defect
**text:**
reaver skipped between and what version of reaver are you using only defects against the latest version will be considered what operating system are you using linux is the only supported os backtrack is your wireless card in monitor mode yes no yes what is the signal strength of the access point you are trying to crack v good we are in the same room o what is the manufacturer and model of the device you are trying to crack talktalk what is the entire command line string you are supplying to reaver reaver i b my bssid vv please describe what you think the issue is my ignorance i am not sure whether this is some sort of clever optimising thing you have made or an error i tested my router and very sadly spent a lot of time watching reaver work i noticed that over time it ran steadily through etc to and then it suddenly jumped to to took about hours but the leap from to was seconds fortunately for me my pin started with an so i allowed it to continue as i could see that the numbers being tested were close reaver worked and found my pin but i was wondering why it missed out between and entirely original issue reported on code google com by keyfo veryrealemail com on jan at
**binary_label:** 1

**Row 49,283** · id 13,186,582,961 · IssuesEvent · created 2020-08-13 00:38:26 · repo [icecube-trac/tix3](https://api.github.com/repos/icecube-trac/tix3) · action: opened
**Title:** example scripts for credo fail (Trac #1118)
**Labels:** Incomplete Migration Migrated from Trac combo reconstruction defect
**Body:**
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1118">https://code.icecube.wisc.edu/ticket/1118</a>, reported by kjmeagher and owned by jtatar</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "credol3.py and nugen2107l2b.py require particleforge\ntest1.py and test2.py use old I3Units import\n\nno meta-project level documentation either",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "example scripts for credo fail",
"priority": "normal",
"keywords": "",
"time": "2015-08-17T13:34:45",
"milestone": "",
"owner": "jtatar",
"type": "defect"
}
```
</p>
</details>
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** defect
**text:**
example scripts for credo fail trac migrated from json status closed changetime description py and py require particleforge py and py use old import n nno meta project level documentation either reporter kjmeagher cc resolution fixed ts component combo reconstruction summary example scripts for credo fail priority normal keywords time milestone owner jtatar type defect
**binary_label:** 1

**Row 46,699** · id 13,055,961,489 · IssuesEvent · created 2020-07-30 03:14:43 · repo [icecube-trac/tix2](https://api.github.com/repos/icecube-trac/tix2) · action: opened
**Title:** [tableio] named argument in class declaration (Trac #1734)
**Labels:** Incomplete Migration Migrated from Trac cmake defect
**Body:**
Migrated from https://code.icecube.wisc.edu/ticket/1734
```json
{
"status": "closed",
"changetime": "2016-06-10T08:01:03",
"description": "The sphinx build gives the following error:\n{{{\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/tableio/enum3.py\", line 43\n class enum(baseEnum, metaclass=metaEnum):\n ^\nSyntaxError: invalid syntax\n}}}\n",
"reporter": "kjmeagher",
"cc": "",
"resolution": "wontfix",
"_ts": "1465545663829039",
"component": "cmake",
"summary": "[tableio] named argument in class declaration",
"priority": "normal",
"keywords": "",
"time": "2016-06-10T07:25:29",
"milestone": "Long-Term Future",
"owner": "jvansanten",
"type": "defect"
}
```
**index:** 1.0
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** defect
**text:**
named argument in class declaration trac migrated from json status closed changetime description the sphinx build gives the following error n ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube tableio py line n class enum baseenum metaclass metaenum n nsyntaxerror invalid syntax n n reporter kjmeagher cc resolution wontfix ts component cmake summary named argument in class declaration priority normal keywords time milestone long term future owner jvansanten type defect
**binary_label:** 1

**Row 157,585** · id 19,959,072,192 · IssuesEvent · created 2022-01-28 05:24:07 · repo [JeffResc/IP-API-Node.js](https://api.github.com/repos/JeffResc/IP-API-Node.js) · action: closed
**Title:** CVE-2012-6708 (Medium) detected in jquery-1.7.2.min.js
**Labels:** security vulnerability
**Body:**
## CVE-2012-6708 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js</a></p>
<p>Path to dependency file: IP-API-Node.js/node_modules/marked/www/demo.html</p>
<p>Path to vulnerable library: IP-API-Node.js/node_modules/marked/www/demo.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/JeffResc/IP-API-Node.js/commit/99b7653bfce099be086c1b68c2b7b8499c3d63af">99b7653bfce099be086c1b68c2b7b8499c3d63af</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v1.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
**index:** True
**text_combine:** *(title and body joined with " - "; verbatim duplicate of the Title and Body above, omitted)*
**label:** non_defect
**text:**
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file ip api node js node modules marked www demo html path to vulnerable library ip api node js node modules marked www demo html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
| 0
|
96,303
| 16,129,611,665
|
IssuesEvent
|
2021-04-29 01:03:24
|
RG4421/ampere-centos-kernel
|
https://api.github.com/repos/RG4421/ampere-centos-kernel
|
opened
|
CVE-2019-19815 (Medium) detected in linuxv5.2, linuxv5.2
|
security vulnerability
|
## CVE-2019-19815 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxv5.2</b>, <b>linuxv5.2</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Linux kernel 5.0.21, mounting a crafted f2fs filesystem image can cause a NULL pointer dereference in f2fs_recover_fsync_data in fs/f2fs/recovery.c. This is related to F2FS_P_SB in fs/f2fs/f2fs.h.
<p>Publish Date: 2019-12-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19815>CVE-2019-19815</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19816">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19816</a></p>
<p>Release Date: 2019-12-17</p>
<p>Fix Resolution: v5.3-rc1</p>
</p>
</details>
<p></p>
|
True
|
CVE-2019-19815 (Medium) detected in linuxv5.2, linuxv5.2 - ## CVE-2019-19815 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxv5.2</b>, <b>linuxv5.2</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Linux kernel 5.0.21, mounting a crafted f2fs filesystem image can cause a NULL pointer dereference in f2fs_recover_fsync_data in fs/f2fs/recovery.c. This is related to F2FS_P_SB in fs/f2fs/f2fs.h.
<p>Publish Date: 2019-12-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19815>CVE-2019-19815</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19816">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19816</a></p>
<p>Release Date: 2019-12-17</p>
<p>Fix Resolution: v5.3-rc1</p>
</p>
</details>
<p></p>
|
non_defect
|
cve medium detected in cve medium severity vulnerability vulnerable libraries vulnerability details in the linux kernel mounting a crafted filesystem image can cause a null pointer dereference in recover fsync data in fs recovery c this is related to p sb in fs h publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages basebranches vulnerabilityidentifier cve vulnerabilitydetails in the linux kernel mounting a crafted filesystem image can cause a null pointer dereference in recover fsync data in fs recovery c this is related to p sb in fs h vulnerabilityurl
| 0
|
349,815
| 31,832,942,076
|
IssuesEvent
|
2023-09-14 11:50:33
|
CUREd-Plus/cuRed
|
https://api.github.com/repos/CUREd-Plus/cuRed
|
closed
|
Write tests for existing functions
|
tests
|
The initial import of package structure (see #1) will include simple cleaning and tidying of `cleaning_fns_etl.r`, but
no tests. Tests need writing and should be undertaken as part of this task.
We should look to test coverage using the [covr](https://covr.r-lib.org/) package.
|
1.0
|
Write tests for existing functions - The initial import of package structure (see #1) will include simple cleaning and tidying of `cleaning_fns_etl.r`, but
no tests. Tests need writing and should be undertaken as part of this task.
We should look to test coverage using the [covr](https://covr.r-lib.org/) package.
|
non_defect
|
write tests for existing functions the initial import of package structure see will include simple cleaning and tidying of cleaning fns etl r but no tests tests need writing and should be undertaken as part of this task we should look to test coverage using the package
| 0
|
71,787
| 30,921,363,893
|
IssuesEvent
|
2023-08-06 00:15:46
|
ps2gg/ps2.gg
|
https://api.github.com/repos/ps2gg/ps2.gg
|
closed
|
Send discord message when issue is assigned
|
Scope: API Scope: UI Type: New Feature Service: Github
|
### Expected behavior <!-- Describe the desired behavior. -->
The discord bot should send an embed when issues have been assigned. The contents should reflect a message along the lines of "x started developing <feature>"
### Definition of Done <!-- What requirements need to be fulfilled before we can release it -->
- [Universal Definition of Done](https://github.com/ps2gg/ps2.gg/blob/master/docs/standards/Definition-Of-Done.md) is adhered to
|
1.0
|
Send discord message when issue is assigned - ### Expected behavior <!-- Describe the desired behavior. -->
The discord bot should send an embed when issues have been assigned. The contents should reflect a message along the lines of "x started developing <feature>"
### Definition of Done <!-- What requirements need to be fulfilled before we can release it -->
- [Universal Definition of Done](https://github.com/ps2gg/ps2.gg/blob/master/docs/standards/Definition-Of-Done.md) is adhered to
|
non_defect
|
send discord message when issue is assigned expected behavior the discord bot should send an embed when issues have been assigned the contents should reflect a message along the lines of x started developing definition of done is adhered to
| 0
|
37,398
| 8,286,452,012
|
IssuesEvent
|
2018-09-19 04:55:17
|
MicrosoftDocs/live-share
|
https://api.github.com/repos/MicrosoftDocs/live-share
|
closed
|
[Bug] VS Live Share double format on save event
|
area: co-edit area: workspace bug vscode
|
Issue Type: <b>Bug</b>
With "Format on save" active, save an html file, the format will trigger twice, one before save & one after, my theory is that the host receives the event after the save in the guest, so end's up formatting the code again, which ends up modifying the file again, so the file stays in "pending changes to save" state.

Check in the link above how I try to save 3 times; the 1st and 3rd save attempts do save but then go back to the changes-pending state.
[DoubleSaveLiveShareLogs.zip](https://github.com/MicrosoftDocs/live-share/files/2099680/DoubleSaveLiveShareLogs.zip)
Possibly related: https://github.com/MicrosoftDocs/live-share/issues/412
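If this theory holds, the host is re-running format-on-save in response to edits that its own earlier format already produced. A minimal TypeScript sketch of the kind of re-entrancy guard that would break such a loop (all names here are hypothetical; this is not the actual Live Share code):

```typescript
// Guard against re-entrant format-on-save triggered by our own edits.
let formattingInFlight = false;

async function onGuestSave(formatAndSave: () => Promise<void>): Promise<void> {
  if (formattingInFlight) {
    return; // this save event was caused by the format we just applied
  }
  formattingInFlight = true;
  try {
    await formatAndSave(); // run format-on-save exactly once per user save
  } finally {
    formattingInFlight = false;
  }
}
```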
Extension version: 0.3.292
VS Code version: Code 1.24.0 (6a6e02cef0f2122ee1469765b704faf5d0e0d859, 2018-06-06T17:37:01.579Z)
OS version: Linux x64 4.15.0-23-generic
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz (4 x 3348)|
|GPU Status|2d_canvas: enabled<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>rasterization: disabled_software<br>video_decode: unavailable_software<br>video_encode: enabled<br>vpx_decode: unavailable_software<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|1, 1, 2|
|Memory (System)|15.57GB (0.18GB free)|
|Process Argv|/usr/share/code/code --unity-launch|
|Screen Reader|no|
|VM|0%|
</details>
<!-- generated by issue reporter -->
|
1.0
|
[Bug] VS Live Share double format on save event - Issue Type: <b>Bug</b>
With "Format on save" active, save an html file, the format will trigger twice, one before save & one after, my theory is that the host receives the event after the save in the guest, so end's up formatting the code again, which ends up modifying the file again, so the file stays in "pending changes to save" state.

Check in the link above how I try to save 3 times; the 1st and 3rd save attempts do save but then go back to the changes-pending state.
[DoubleSaveLiveShareLogs.zip](https://github.com/MicrosoftDocs/live-share/files/2099680/DoubleSaveLiveShareLogs.zip)
Possibly related: https://github.com/MicrosoftDocs/live-share/issues/412
Extension version: 0.3.292
VS Code version: Code 1.24.0 (6a6e02cef0f2122ee1469765b704faf5d0e0d859, 2018-06-06T17:37:01.579Z)
OS version: Linux x64 4.15.0-23-generic
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz (4 x 3348)|
|GPU Status|2d_canvas: enabled<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>rasterization: disabled_software<br>video_decode: unavailable_software<br>video_encode: enabled<br>vpx_decode: unavailable_software<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|1, 1, 2|
|Memory (System)|15.57GB (0.18GB free)|
|Process Argv|/usr/share/code/code --unity-launch|
|Screen Reader|no|
|VM|0%|
</details>
<!-- generated by issue reporter -->
|
non_defect
|
vs live share double format on save event issue type bug with format on save active save an html file the format will trigger twice one before save one after my theory is that the host receives the event after the save in the guest so end s up formatting the code again which ends up modifying the file again so the file stays in pending changes to save state check in the above link how i try to save times and save attempts do save but then went back to changes pending state possibly related extension version vs code version code os version linux generic system info item value cpus intel r core tm cpu x gpu status canvas enabled flash enabled flash enabled flash baseline enabled gpu compositing enabled multiple raster threads enabled on native gpu memory buffers disabled software rasterization disabled software video decode unavailable software video encode enabled vpx decode unavailable software webgl enabled enabled load avg memory system free process argv usr share code code unity launch screen reader no vm
| 0
|
52,279
| 6,225,996,110
|
IssuesEvent
|
2017-07-10 17:27:18
|
dotnet/coreclr
|
https://api.github.com/repos/dotnet/coreclr
|
closed
|
Test failure: Interop_SizeConst._SizeConstTest_SizeConstTest_/_SizeConstTest_SizeConstTest_cmd
|
arch-arm32 test-run-uwp-coreclr
|
Opened on behalf of @Jiayili1
The test `Interop_SizeConst._SizeConstTest_SizeConstTest_/_SizeConstTest_SizeConstTest_cmd` has failed.
Return code: 1
Raw output file: C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Work\1e6c464e-00c0-46b0-b6f6-4b286657d784\Unzip\Reports\Interop.SizeConst\SizeConstTest\SizeConstTest.output.txt
Raw output:
BEGIN EXECUTION\r
"C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Payload\corerun.exe" SizeConstTest.exe \r
Expected: 100\r
Actual: -532462766\r
END EXECUTION - FAILED\r
FAILED\r
Test Harness Exitcode is : 1\r
To run the test:
> set CORE_ROOT=C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Payload
> C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Work\1e6c464e-00c0-46b0-b6f6-4b286657d784\Unzip\SizeConstTest\SizeConstTest.cmd
\r
Expected: True\r
Actual: False
Stack Trace:
at Interop_SizeConst._SizeConstTest_SizeConstTest_._SizeConstTest_SizeConstTest_cmd()
Build : Master - 20170627.02 (Core Tests)
Failing configurations:
- windows.10.arm64
- arm
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcoreclr~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20170627.02/workItem/Interop.SizeConst.XUnitWrapper/analysis/xunit/Interop_SizeConst._SizeConstTest_SizeConstTest_~2F_SizeConstTest_SizeConstTest_cmd
|
1.0
|
Test failure: Interop_SizeConst._SizeConstTest_SizeConstTest_/_SizeConstTest_SizeConstTest_cmd - Opened on behalf of @Jiayili1
The test `Interop_SizeConst._SizeConstTest_SizeConstTest_/_SizeConstTest_SizeConstTest_cmd` has failed.
Return code: 1
Raw output file: C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Work\1e6c464e-00c0-46b0-b6f6-4b286657d784\Unzip\Reports\Interop.SizeConst\SizeConstTest\SizeConstTest.output.txt
Raw output:
BEGIN EXECUTION\r
"C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Payload\corerun.exe" SizeConstTest.exe \r
Expected: 100\r
Actual: -532462766\r
END EXECUTION - FAILED\r
FAILED\r
Test Harness Exitcode is : 1\r
To run the test:
> set CORE_ROOT=C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Payload
> C:\dotnetbuild\work\424f1bc6-7df7-4bc4-b3fd-842a890e5e8f\Work\1e6c464e-00c0-46b0-b6f6-4b286657d784\Unzip\SizeConstTest\SizeConstTest.cmd
\r
Expected: True\r
Actual: False
Stack Trace:
at Interop_SizeConst._SizeConstTest_SizeConstTest_._SizeConstTest_SizeConstTest_cmd()
Build : Master - 20170627.02 (Core Tests)
Failing configurations:
- windows.10.arm64
- arm
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcoreclr~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20170627.02/workItem/Interop.SizeConst.XUnitWrapper/analysis/xunit/Interop_SizeConst._SizeConstTest_SizeConstTest_~2F_SizeConstTest_SizeConstTest_cmd
|
non_defect
|
test failure interop sizeconst sizeconsttest sizeconsttest sizeconsttest sizeconsttest cmd opened on behalf of the test interop sizeconst sizeconsttest sizeconsttest sizeconsttest sizeconsttest cmd has failed return code raw output file c dotnetbuild work work unzip reports interop sizeconst sizeconsttest sizeconsttest output txt raw output begin execution r c dotnetbuild work payload corerun exe sizeconsttest exe r expected r actual r end execution failed r failed r test harness exitcode is r to run the test set core root c dotnetbuild work payload c dotnetbuild work work unzip sizeconsttest sizeconsttest cmd r expected true r actual false stack trace at interop sizeconst sizeconsttest sizeconsttest sizeconsttest sizeconsttest cmd build master core tests failing configurations windows arm detail
| 0
|
2,846
| 5,012,152,397
|
IssuesEvent
|
2016-12-13 10:23:57
|
GovernIB/rolsac
|
https://api.github.com/repos/GovernIB/rolsac
|
closed
|
Automatic translator error
|
Estat:Estado_Pendiente_Despliege Lloc:WebServices Prioritat:Alta Tipus:Error Versió:1.4.0
|
A ClassNotFound error is raised when the automatic translator is used, on ROLSAC 1.4.
Attached is the application server log.
[log_error_traductor.txt](https://github.com/GovernIB/rolsac/files/580254/log_error_traductor.txt)
|
1.0
|
Automatic translator error - A ClassNotFound error is raised when the automatic translator is used, on ROLSAC 1.4.
Attached is the application server log.
[log_error_traductor.txt](https://github.com/GovernIB/rolsac/files/580254/log_error_traductor.txt)
|
non_defect
|
automatic translator error a classnotfound error is raised when the automatic translator is used on rolsac attached is the application server log
| 0
|
99,260
| 11,137,208,315
|
IssuesEvent
|
2019-12-20 18:41:54
|
CoderLine/alphaTab
|
https://api.github.com/repos/CoderLine/alphaTab
|
closed
|
Unable to access alphatab docs for the develop branch
|
area-documentation priority-high
|
<!--
Thanks for contributing to alphaTab. Before entering a new bug please check following points
- Please make sure that no other bug with the same topic exists already. Rather reopen a closed one than enter a new one.
- Ensure that you are using the lastest version.
-->
# Your environment
* Version used: (master branch)
* Platform used: JavaScript or C#
* Rendering engine used: SVG (default), HTML5 or GDI
* Browser Name and Version: Firefox
* Operating System and version (desktop or mobile): Arch Linux desktop
* Link to your project:
# Expected Results
The docs for the develop branch should be accessible.
# Observed Results
https://docs.alphatab.net/develop returns a 403 Forbidden Access error. https://docs.alphatab.net/master works fine though.
# Steps to Reproduce (for bugs)
Go to https://docs.alphatab.net/develop.
|
1.0
|
Unable to access alphatab docs for the develop branch - <!--
Thanks for contributing to alphaTab. Before entering a new bug please check following points
- Please make sure that no other bug with the same topic exists already. Rather reopen a closed one than enter a new one.
- Ensure that you are using the lastest version.
-->
# Your environment
* Version used: (master branch)
* Platform used: JavaScript or C#
* Rendering engine used: SVG (default), HTML5 or GDI
* Browser Name and Version: Firefox
* Operating System and version (desktop or mobile): Arch Linux desktop
* Link to your project:
# Expected Results
The docs for the develop branch should be accessible.
# Observed Results
https://docs.alphatab.net/develop returns a 403 Forbidden Access error. https://docs.alphatab.net/master works fine though.
# Steps to Reproduce (for bugs)
Go to https://docs.alphatab.net/develop.
|
non_defect
|
unable to access alphatab docs for the develop branch thanks for contributing to alphatab before entering a new bug please check following points please make sure that no other bug with the same topic exists already rather reopen a closed one than enter a new one ensure that you are using the lastest version your environment version used master branch platform used javascript or c rendering engine used svg default or gdi browser name and version firefox operating system and version desktop or mobile arch linux desktop link to your project expected results the docs for the develop branch should be accessible observed results returns a forbidden access error works fine though steps to reproduce for bugs go to
| 0
|
34,550
| 7,453,641,557
|
IssuesEvent
|
2018-03-29 12:43:05
|
kerdokullamae/test_koik_issued
|
https://api.github.com/repos/kerdokullamae/test_koik_issued
|
closed
|
Inheritance/summation of periods
|
P: normal R: duplicate T: defect
|
**Reported by katrin vesterblom on 23 Aug 2013 08:05 UTC**
It seems to me that this otherwise works, but ambiguity creeps in when the boundary dates of some sub-unit are Undefined; the parent unit's boundary dates are then also generated as Undefined. For example, I looked at the boundary dates of the TLA.230 series. These were certainly generated (during the data transfer), since the old AIS does not have them. One can see that some series have boundary dates while others have Undefined.
Apparently it should be decided whether this Undefined value is taken into account or not - perhaps only defined boundary dates should be counted when summing boundary dates?
|
1.0
|
Inheritance/summation of periods - **Reported by katrin vesterblom on 23 Aug 2013 08:05 UTC**
It seems to me that this otherwise works, but ambiguity creeps in when the boundary dates of some sub-unit are Undefined; the parent unit's boundary dates are then also generated as Undefined. For example, I looked at the boundary dates of the TLA.230 series. These were certainly generated (during the data transfer), since the old AIS does not have them. One can see that some series have boundary dates while others have Undefined.
Apparently it should be decided whether this Undefined value is taken into account or not - perhaps only defined boundary dates should be counted when summing boundary dates?
|
defect
|
inheritance summation of periods reported by katrin vesterblom on aug utc it seems to me that this otherwise works but ambiguity creeps in when the boundary dates of some sub unit are undefined the parent unit s boundary dates are then also generated as undefined for example i looked at the boundary dates of the tla series these were certainly generated during the data transfer since the old ais does not have them one can see that some series have boundary dates while others have undefined apparently it should be decided whether this undefined value is taken into account or not perhaps only defined boundary dates should be counted when summing boundary dates
| 1
|
37,635
| 18,680,603,736
|
IssuesEvent
|
2021-11-01 04:46:51
|
mdn/content
|
https://api.github.com/repos/mdn/content
|
opened
|
Content suggestion: performance glossary term: largest contentful paint/First Input Delay
|
Opportunity assessment performance
|
## What is the new suggestion?
Glossary terms:
- LCP -> largest contentful paint.
- FID -> First Input Delay
## Why is it important or useful?
TTFB and FCP are defined. LCP and FID are needed
How many pages are likely to be needed?
2
How much time do you think this work should take? 5 hours
Will the work enable learners or professionals to achieve their goals better? yes
Does it address critical needs in the web industry? yes
Is the work an operational necessity, i.e. is not having it a security risk? no
Does the content help make the web more ethical? yes
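Both metrics can be observed directly in the browser with the standard `PerformanceObserver` API; a short TypeScript sketch (the logging is illustrative):

```typescript
// LCP: the last 'largest-contentful-paint' entry before user input wins.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log("LCP candidate:", entry.startTime, "ms");
  }
}).observe({ type: "largest-contentful-paint", buffered: true });

// FID: delay between the first input and its event handler starting.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    console.log("FID:", entry.processingStart - entry.startTime, "ms");
  }
}).observe({ type: "first-input", buffered: true });
```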
|
True
|
Content suggestion: performance glossary term: largest contentful paint/First Input Delay - ## What is the new suggestion?
Glossary terms:
- LCP -> largest contentful paint.
- FID -> First Input Delay
## Why is it important or useful?
TTFB and FCP are defined. LCP and FID are needed
How many pages are likely to be needed?
2
How much time do you think this work should take? 5 hours
Will the work enable learners or professionals to achieve their goals better? yes
Does it address critical needs in the web industry? yes
Is the work an operational necessity, i.e. is not having it a security risk? no
Does the content help make the web more ethical? yes
|
non_defect
|
content suggestion performance glossary term largest contentful paint first input delay what is the new suggestion glossary terms lcp largest contentful paint fid first input delay why is it important or useful ttfb and fcp are defined lcp and fid are needed how many pages are likely to be needed how much time do you think this work should take hours will the work enable learners or professionals to achieve their goals better yes does it address critical needs in the web industry yes is the work an operational necessity i e is not having it a security risk no does the content help make the web more ethical yes
| 0
|
27,636
| 5,068,078,706
|
IssuesEvent
|
2016-12-24 11:17:41
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
opened
|
FlashHelper::render() cannot render flash messages from SessionComponent::setFlash()
|
Defect
|
This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 2.9.4
### What you did
Install CakePHP without using bake project. Call `$this->Session->setFlash('Test');` in some controller.
### What happened
I got a notice instead of a flash message.
```
Notice (1024): Element Not Found: Elements\default.ctp [CORE\Cake\View\View.php, line 425]
```
### What you expected to happen
I get the flash message.
### Why this issue happens
Related to #9061. Maybe the same issue. But I think `Flash/default.ctp` was not related to this issue. The reason the issue occurs is that `FlashHelper::render()` cannot render flash messages from `SessionComponent::setFlash()`, because the `default` key must be handled specially like [this](https://github.com/cakephp/cakephp/blob/9eafde13d220209f4e783ddde2dd82f6bb8499db/lib/Cake/View/Helper/SessionHelper.php#L145-L149). `SessionHelper` does that, but `FlashHelper` doesn't.
Therefore, currently, any existing applications/plugins must replace all `SessionComponent::setFlash()` calls with `FlashComponent::set()`. I would like to fix this issue.
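The special case being described can be summarized language-agnostically; a TypeScript sketch of the fallback (names are hypothetical, this is not CakePHP's API):

```typescript
// Messages stored under the 'default' key should render with the built-in
// template instead of triggering a lookup for a missing Elements/default.ctp.
interface FlashMessage { key: string; element: string | null; text: string }

function resolveElement(msg: FlashMessage, elementExists: (e: string) => boolean): string {
  const candidate = msg.element ?? `Flash/${msg.key}`;
  if (msg.key === "default" || !elementExists(candidate)) {
    return "Flash/built-in-default"; // fall back instead of raising a notice
  }
  return candidate;
}
```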
|
1.0
|
FlashHelper::render() cannot render flash messages from SessionComponent::setFlash() - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 2.9.4
### What you did
Install CakePHP without using bake project. Call `$this->Session->setFlash('Test');` in some controller.
### What happened
I got a notice instead of a flash message.
```
Notice (1024): Element Not Found: Elements\default.ctp [CORE\Cake\View\View.php, line 425]
```
### What you expected to happen
I get the flash message.
### Why this issue happens
Related to #9061. Maybe the same issue. But I think `Flash/default.ctp` was not related to this issue. The reason the issue occurs is that `FlashHelper::render()` cannot render flash messages from `SessionComponent::setFlash()`, because the `default` key must be handled specially like [this](https://github.com/cakephp/cakephp/blob/9eafde13d220209f4e783ddde2dd82f6bb8499db/lib/Cake/View/Helper/SessionHelper.php#L145-L149). `SessionHelper` does that, but `FlashHelper` doesn't.
Therefore, currently, any existing applications/plugins must replace all `SessionComponent::setFlash()` calls with `FlashComponent::set()`. I would like to fix this issue.
|
defect
|
flashhelper render cannot render flash messages from sessioncomponent setflash this is a multiple allowed bug enhancement feature discussion rfc cakephp version what you did install cakephp without using bake project call this session setflash test in some controller what happened i got a notice instead of a flash message notice element not found elements default ctp what you expected to happen i get the flash message why this issue happens related to maybe the same issue but i think flash default ctp was not related this issue the reason why the the issue occurs is that flashhelper redner cannot render flash messages from sessioncomponent setflash because the default key must be handled specially like sessionhelper does that but flashhelper doesn t therefore currently any existing applications plugins must replace all sessioncomponent setflash calls with flashcomponent set i would like to fix this issue
| 1
|
58,652
| 16,673,549,156
|
IssuesEvent
|
2021-06-07 13:47:19
|
gwaldron/osgearth
|
https://api.github.com/repos/gwaldron/osgearth
|
closed
|
Crash with async layer
|
defect
|
Repro:
```xml
<Map>
<xi:include href="readymap_imagery.xml"/>
<OGRFeatures name="world-data">
<url>../data/world.shp</url>
</OGRFeatures>
<FeatureImage async="true">
<features>world-data</features>
<styles>
<style type="text/css">
default {
fill: #ff7700;
stroke: #ffff00;
stroke-width: 5km;
}
</style>
</styles>
</FeatureImage>
</Map>
```
|
1.0
|
Crash with async layer - Repro:
```xml
<Map>
<xi:include href="readymap_imagery.xml"/>
<OGRFeatures name="world-data">
<url>../data/world.shp</url>
</OGRFeatures>
<FeatureImage async="true">
<features>world-data</features>
<styles>
<style type="text/css">
default {
fill: #ff7700;
stroke: #ffff00;
stroke-width: 5km;
}
</style>
</styles>
</FeatureImage>
</Map>
```
|
defect
|
crash with async layer repro xml data world shp world data default fill stroke stroke width
| 1
|
9,865
| 2,616,004,642
|
IssuesEvent
|
2015-03-02 00:49:15
|
jasonhall/bwapi
|
https://api.github.com/repos/jasonhall/bwapi
|
closed
|
saving fog/buildability/walkability maps randomly crashes?
|
auto-migrated Maintainability Priority-Medium Type-Defect
|
```
Saving any fog/buildability/walkability maps in anything other than Python
or Longinus seem to crash.
```
Original issue reported on code.google.com by `AHeinerm` on 20 Sep 2008 at 6:10
|
1.0
|
saving fog/buildability/walkability maps randomly crashes? - ```
Saving any fog/buildability/walkability maps in anything other than Python
or Longinus seem to crash.
```
Original issue reported on code.google.com by `AHeinerm` on 20 Sep 2008 at 6:10
|
defect
|
saving fog buildability walkability maps randomly crashes saving any fog buildability walkability maps in anything other than python or longinus seem to crash original issue reported on code google com by aheinerm on sep at
| 1
|
691,215
| 23,688,097,724
|
IssuesEvent
|
2022-08-29 08:19:51
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.twitch.tv - see bug description
|
browser-firefox priority-important engine-gecko
|
<!-- @browser: Firefox 106.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/109830 -->
**URL**: https://www.twitch.tv/enzak
**Browser / Version**: Firefox 106.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Something else
**Description**: white screen on stream. Other streams on website work.
**Steps to Reproduce**:
Some streams on Twitch work, but on certain ones there is just a white screen with only audio. On Firefox stable, I have not yet encountered such a problem. I tested the exact address on Edge and it works fine.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/8/3935f9ab-e297-4d04-bfa8-048a9e0b36ef.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220826214835</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/8/0b30fb11-e603-48a0-8080-d98c1b0376d0)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.twitch.tv - see bug description - <!-- @browser: Firefox 106.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/109830 -->
**URL**: https://www.twitch.tv/enzak
**Browser / Version**: Firefox 106.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Something else
**Description**: white screen on stream. Other streams on website work.
**Steps to Reproduce**:
Some streams on Twitch work, but on certain ones there is just a white screen with only audio. On Firefox stable, I have not yet encountered such a problem. I tested the exact address on Edge and it works fine.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/8/3935f9ab-e297-4d04-bfa8-048a9e0b36ef.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220826214835</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/8/0b30fb11-e603-48a0-8080-d98c1b0376d0)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
see bug description url browser version firefox operating system windows tested another browser yes edge problem type something else description white screen on stream other streams on website work steps to reproduce some streams on twitch work but on certain ones just a white screen with just audio on firefox stable i have no yet encountered such a problem i tested the exact address on edge and works fine view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
41,998
| 22,164,534,406
|
IssuesEvent
|
2022-06-05 02:06:48
|
intellij-rust/intellij-rust
|
https://api.github.com/repos/intellij-rust/intellij-rust
|
closed
|
Rust plugin froze for 42 sec on Win right after project was opened
|
performance subsystem::wsl
|
## Environment
* **IntelliJ Rust plugin version:** 0.4.172.4642-222-nightly
* **Rust toolchain version:** 1.62.0-nightly (60e50fc1c 2022-04-04) x86_64-pc-windows-msvc
* **IDE name and version:** CLion 2022.2 EAP (CL-222.2733)
* **Operating system:** Windows 10 10.0
* **Macro expansion engine:** new
* **Name resolution engine:** new
* **Additional experimental features:** org.rust.cargo.emulate.terminal
## Problem description
Freeze for 42 seconds
Sampled time: 15900ms, sampling rate: 100ms, GC time: 576ms (1%), Class loading: 0%, cpu load: 0%
The stack is from the thread that was blocking EDT
com.intellij.diagnostic.Freeze
at java.base@17.0.3/sun.nio.fs.WindowsNativeDispatcher.GetFileAttributesEx0(Native Method)
at java.base@17.0.3/sun.nio.fs.WindowsNativeDispatcher.GetFileAttributesEx(WindowsNativeDispatcher.java:429)
at java.base@17.0.3/sun.nio.fs.WindowsFileAttributes.get(WindowsFileAttributes.java:443)
at java.base@17.0.3/sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:51)
at java.base@17.0.3/sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at java.base@17.0.3/sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:201)
at java.base@17.0.3/sun.nio.fs.AbstractFileSystemProvider.isDirectory(AbstractFileSystemProvider.java:122)
at java.base@17.0.3/java.nio.file.Files.isDirectory(Files.java:2318)
at com.intellij.util.io.PathKt.isDirectory(path.kt:172)
at org.rust.cargo.toolchain.flavors.RsToolchainFlavor.isValidToolchainPath(RsToolchainFlavor.kt:36)
at org.rust.cargo.toolchain.flavors.RsToolchainFlavor$Companion.getFlavor(RsToolchainFlavor.kt:52)
at org.rust.cargo.toolchain.RsToolchainBase.looksLikeValidToolchain(RsToolchainBase.kt:32)
at org.rust.ide.inspections.RsLocalInspectionTool.isApplicableTo(RsLocalInspectionTool.kt:63)
at org.rust.ide.inspections.RsLocalInspectionTool.buildVisitor(RsLocalInspectionTool.kt:32)
[report.txt](https://github.com/intellij-rust/intellij-rust/files/8769758/report.txt)
[threadDump-20220524-092530.txt](https://github.com/intellij-rust/intellij-rust/files/8769763/threadDump-20220524-092530.txt)
|
True
|
Rust plugin froze for 42 sec on Win right after project was opened - ## Environment
* **IntelliJ Rust plugin version:** 0.4.172.4642-222-nightly
* **Rust toolchain version:** 1.62.0-nightly (60e50fc1c 2022-04-04) x86_64-pc-windows-msvc
* **IDE name and version:** CLion 2022.2 EAP (CL-222.2733)
* **Operating system:** Windows 10 10.0
* **Macro expansion engine:** new
* **Name resolution engine:** new
* **Additional experimental features:** org.rust.cargo.emulate.terminal
## Problem description
Freeze for 42 seconds
Sampled time: 15900ms, sampling rate: 100ms, GC time: 576ms (1%), Class loading: 0%, cpu load: 0%
The stack is from the thread that was blocking EDT
com.intellij.diagnostic.Freeze
at java.base@17.0.3/sun.nio.fs.WindowsNativeDispatcher.GetFileAttributesEx0(Native Method)
at java.base@17.0.3/sun.nio.fs.WindowsNativeDispatcher.GetFileAttributesEx(WindowsNativeDispatcher.java:429)
at java.base@17.0.3/sun.nio.fs.WindowsFileAttributes.get(WindowsFileAttributes.java:443)
at java.base@17.0.3/sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:51)
at java.base@17.0.3/sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at java.base@17.0.3/sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:201)
at java.base@17.0.3/sun.nio.fs.AbstractFileSystemProvider.isDirectory(AbstractFileSystemProvider.java:122)
at java.base@17.0.3/java.nio.file.Files.isDirectory(Files.java:2318)
at com.intellij.util.io.PathKt.isDirectory(path.kt:172)
at org.rust.cargo.toolchain.flavors.RsToolchainFlavor.isValidToolchainPath(RsToolchainFlavor.kt:36)
at org.rust.cargo.toolchain.flavors.RsToolchainFlavor$Companion.getFlavor(RsToolchainFlavor.kt:52)
at org.rust.cargo.toolchain.RsToolchainBase.looksLikeValidToolchain(RsToolchainBase.kt:32)
at org.rust.ide.inspections.RsLocalInspectionTool.isApplicableTo(RsLocalInspectionTool.kt:63)
at org.rust.ide.inspections.RsLocalInspectionTool.buildVisitor(RsLocalInspectionTool.kt:32)
[report.txt](https://github.com/intellij-rust/intellij-rust/files/8769758/report.txt)
[threadDump-20220524-092530.txt](https://github.com/intellij-rust/intellij-rust/files/8769763/threadDump-20220524-092530.txt)
|
non_defect
|
rust plugin froze for sec on win right after project was opened environment intellij rust plugin version nightly rust toolchain version nightly pc windows msvc ide name and version clion eap cl operating system windows macro expansion engine new name resolution engine new additional experimental features org rust cargo emulate terminal problem description freeze for seconds sampled time sampling rate gc time class loading cpu load the stack is from the thread that was blocking edt com intellij diagnostic freeze at java base sun nio fs windowsnativedispatcher native method at java base sun nio fs windowsnativedispatcher getfileattributesex windowsnativedispatcher java at java base sun nio fs windowsfileattributes get windowsfileattributes java at java base sun nio fs windowsfileattributeviews basic readattributes windowsfileattributeviews java at java base sun nio fs windowsfileattributeviews basic readattributes windowsfileattributeviews java at java base sun nio fs windowsfilesystemprovider readattributes windowsfilesystemprovider java at java base sun nio fs abstractfilesystemprovider isdirectory abstractfilesystemprovider java at java base java nio file files isdirectory files java at com intellij util io pathkt isdirectory path kt at org rust cargo toolchain flavors rstoolchainflavor isvalidtoolchainpath rstoolchainflavor kt at org rust cargo toolchain flavors rstoolchainflavor companion getflavor rstoolchainflavor kt at org rust cargo toolchain rstoolchainbase lookslikevalidtoolchain rstoolchainbase kt at org rust ide inspections rslocalinspectiontool isapplicableto rslocalinspectiontool kt at org rust ide inspections rslocalinspectiontool buildvisitor rslocalinspectiontool kt
| 0
|
20,571
| 3,385,064,714
|
IssuesEvent
|
2015-11-27 09:20:30
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Regression in AbstractParam throwing StackOverflowError when calling UDTRecord.toString()
|
C: Functionality P: Medium T: Defect
|
I get a stack overflow when attempting to insert a record that is a UDT. If I go back to 3.5.0, everything runs fine again. It seems to be related to UDTRecordImpl.toString calling DSL.inline(this), which ultimately depends on itself through the AbstractParam.name method.
After some investigation, here is what I have found:
It's a regression from ticket 3707.
This commit originally had the code to handle the situation: 1a6e4c01e5d9720a5f714ad3960ea3229e128363
Here is the original code from AbstractParam class:
```java
private static String name(Object value, String paramName) {
return paramName != null
? paramName
// [#3707] Protect value.toString call for certain jOOQ types.
: value instanceof UDTRecord
? ((UDTRecord<?>) value).getUDT().getName()
: value instanceof ArrayRecord
? ((ArrayRecord<?>) value).getName()
: String.valueOf(value);
}
```
But this commit screwed everything: e8b3e0953eec12fc519b3e15ce9a91da49dbcaba
```java
private static String name(Object value, String paramName) {
return paramName != null
? paramName
/* [pro] xx
xx xxxxxxx xxxxxxx xxxxxxxxxxxxxx xxxx xxx xxxxxxx xxxx xxxxxx
x xxxxx xxxxxxxxxx xxxxxxxxx
x xxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxx
x xxxxx xxxxxxxxxx xxxxxxxxxxx
x xxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxx
xx [/pro] */
: String.valueOf(value);
}
```
Your "professional edition screening tool" seems to have commented too much code. Also unit tests seems to have disappeared after version 3.5.3. Where are they gone ? Has the community edition become a sub-par edition ?
Here is a part of the stack trace. If you need more information, I will be pleased to provide you with it.
```
at java.lang.String.valueOf(String.java:2982)
at org.jooq.impl.AbstractParam.name(AbstractParam.java:104)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:78)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:74)
at org.jooq.impl.UDTConstant.<init>(UDTConstant.java:62)
at org.jooq.impl.DSL.val(DSL.java:12808)
at org.jooq.impl.DSL.val(DSL.java:12759)
at org.jooq.impl.DSL.inline(DSL.java:12617)
at org.jooq.impl.UDTRecordImpl.toString(UDTRecordImpl.java:141)
at java.lang.String.valueOf(String.java:2982)
at org.jooq.impl.AbstractParam.name(AbstractParam.java:104)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:78)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:74)
at org.jooq.impl.UDTConstant.<init>(UDTConstant.java:62)
at org.jooq.impl.DSL.val(DSL.java:12808)
at org.jooq.impl.DSL.val(DSL.java:12759)
at org.jooq.impl.DSL.inline(DSL.java:12617)
at org.jooq.impl.UDTRecordImpl.toString(UDTRecordImpl.java:141)
at java.lang.String.valueOf(String.java:2982)
at org.jooq.impl.AbstractParam.name(AbstractParam.java:104)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:78)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:74)
at org.jooq.impl.UDTConstant.<init>(UDTConstant.java:62)
at org.jooq.impl.DSL.val(DSL.java:12808)
at org.jooq.impl.DSL.val(DSL.java:12759)
at org.jooq.impl.DSL.inline(DSL.java:12617)
at org.jooq.impl.UDTRecordImpl.toString(UDTRecordImpl.java:141)
at java.lang.String.valueOf(String.java:2982)
at org.jooq.impl.AbstractParam.name(AbstractParam.java:104)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:78)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:74)
at org.jooq.impl.UDTConstant.<init>(UDTConstant.java:62)
at org.jooq.impl.DSL.val(DSL.java:12808)
at org.jooq.impl.DSL.val(DSL.java:12759)
at org.jooq.impl.DSL.inline(DSL.java:12617)
at org.jooq.impl.UDTRecordImpl.toString(UDTRecordImpl.java:141)
at java.lang.String.valueOf(String.java:2982)
at org.jooq.impl.AbstractParam.name(AbstractParam.java:104)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:78)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:74)
at org.jooq.impl.UDTConstant.<init>(UDTConstant.java:62)
at org.jooq.impl.DSL.val(DSL.java:12808)
at org.jooq.impl.DSL.val(DSL.java:12759)
at org.jooq.impl.DSL.inline(DSL.java:12617)
at org.jooq.impl.UDTRecordImpl.toString(UDTRecordImpl.java:141)
at java.lang.String.valueOf(String.java:2982)
at org.jooq.impl.AbstractParam.name(AbstractParam.java:104)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:78)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:74)
at org.jooq.impl.UDTConstant.<init>(UDTConstant.java:62)
at org.jooq.impl.DSL.val(DSL.java:12808)
at org.jooq.impl.DSL.val(DSL.java:12759)
at org.jooq.impl.DSL.inline(DSL.java:12617)
at org.jooq.impl.UDTRecordImpl.toString(UDTRecordImpl.java:141)
```
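The cycle in the trace is easy to reproduce in miniature. A TypeScript sketch of the same shape of bug, with the guard the commented-out block used to provide (names are hypothetical, not jOOQ's API):

```typescript
// toString() builds a Param, and the Param's name falls back to
// String(value), which calls toString() again: unbounded mutual recursion.
class Param {
  constructor(readonly value: unknown, readonly paramName?: string) {}
  name(): string {
    if (this.paramName !== undefined) return this.paramName;
    // Guard analogous to the removed special case: derive the name from the
    // record's type instead of stringifying the record itself.
    if (this.value instanceof UdtRecord) return this.value.udtName;
    return String(this.value); // recursing here is what overflows the stack
  }
}

class UdtRecord {
  constructor(readonly udtName: string) {}
  toString(): string {
    return new Param(this).name(); // without the guard: toString -> name -> toString ...
  }
}
```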
|
1.0
|
Regression in AbstractParam throwing StackOverflowError when calling UDTRecord.toString() - I get a stack overflow when attempting to insert a record that is a UDT. If I go back to 3.5.0, everything runs fine again. It seems to be related to UDTRecordImpl.toString calling DSL.inline(this), which ultimately depends on itself through the AbstractParam.name method.
After some investigation, here is what I have found:
It's a regression from ticket 3707.
This commit originally had the code to handle the situation: 1a6e4c01e5d9720a5f714ad3960ea3229e128363
Here is the original code from AbstractParam class:
```java
private static String name(Object value, String paramName) {
return paramName != null
? paramName
// [#3707] Protect value.toString call for certain jOOQ types.
: value instanceof UDTRecord
? ((UDTRecord<?>) value).getUDT().getName()
: value instanceof ArrayRecord
? ((ArrayRecord<?>) value).getName()
: String.valueOf(value);
}
```
But this commit screwed everything: e8b3e0953eec12fc519b3e15ce9a91da49dbcaba
```java
private static String name(Object value, String paramName) {
return paramName != null
? paramName
/* [pro] xx
xx xxxxxxx xxxxxxx xxxxxxxxxxxxxx xxxx xxx xxxxxxx xxxx xxxxxx
x xxxxx xxxxxxxxxx xxxxxxxxx
x xxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxx
x xxxxx xxxxxxxxxx xxxxxxxxxxx
x xxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxx
xx [/pro] */
: String.valueOf(value);
}
```
Your "professional edition screening tool" seems to have commented too much code. Also unit tests seems to have disappeared after version 3.5.3. Where are they gone ? Has the community edition become a sub-par edition ?
Here is a part of the stack trace. If you need more information, I will be pleased to provide you with it.
```
at java.lang.String.valueOf(String.java:2982)
at org.jooq.impl.AbstractParam.name(AbstractParam.java:104)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:78)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:74)
at org.jooq.impl.UDTConstant.<init>(UDTConstant.java:62)
at org.jooq.impl.DSL.val(DSL.java:12808)
at org.jooq.impl.DSL.val(DSL.java:12759)
at org.jooq.impl.DSL.inline(DSL.java:12617)
at org.jooq.impl.UDTRecordImpl.toString(UDTRecordImpl.java:141)
at java.lang.String.valueOf(String.java:2982)
at org.jooq.impl.AbstractParam.name(AbstractParam.java:104)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:78)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:74)
at org.jooq.impl.UDTConstant.<init>(UDTConstant.java:62)
at org.jooq.impl.DSL.val(DSL.java:12808)
at org.jooq.impl.DSL.val(DSL.java:12759)
at org.jooq.impl.DSL.inline(DSL.java:12617)
at org.jooq.impl.UDTRecordImpl.toString(UDTRecordImpl.java:141)
at java.lang.String.valueOf(String.java:2982)
at org.jooq.impl.AbstractParam.name(AbstractParam.java:104)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:78)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:74)
at org.jooq.impl.UDTConstant.<init>(UDTConstant.java:62)
at org.jooq.impl.DSL.val(DSL.java:12808)
at org.jooq.impl.DSL.val(DSL.java:12759)
at org.jooq.impl.DSL.inline(DSL.java:12617)
at org.jooq.impl.UDTRecordImpl.toString(UDTRecordImpl.java:141)
at java.lang.String.valueOf(String.java:2982)
at org.jooq.impl.AbstractParam.name(AbstractParam.java:104)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:78)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:74)
at org.jooq.impl.UDTConstant.<init>(UDTConstant.java:62)
at org.jooq.impl.DSL.val(DSL.java:12808)
at org.jooq.impl.DSL.val(DSL.java:12759)
at org.jooq.impl.DSL.inline(DSL.java:12617)
at org.jooq.impl.UDTRecordImpl.toString(UDTRecordImpl.java:141)
at java.lang.String.valueOf(String.java:2982)
at org.jooq.impl.AbstractParam.name(AbstractParam.java:104)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:78)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:74)
at org.jooq.impl.UDTConstant.<init>(UDTConstant.java:62)
at org.jooq.impl.DSL.val(DSL.java:12808)
at org.jooq.impl.DSL.val(DSL.java:12759)
at org.jooq.impl.DSL.inline(DSL.java:12617)
at org.jooq.impl.UDTRecordImpl.toString(UDTRecordImpl.java:141)
at java.lang.String.valueOf(String.java:2982)
at org.jooq.impl.AbstractParam.name(AbstractParam.java:104)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:78)
at org.jooq.impl.AbstractParam.<init>(AbstractParam.java:74)
at org.jooq.impl.UDTConstant.<init>(UDTConstant.java:62)
at org.jooq.impl.DSL.val(DSL.java:12808)
at org.jooq.impl.DSL.val(DSL.java:12759)
at org.jooq.impl.DSL.inline(DSL.java:12617)
at org.jooq.impl.UDTRecordImpl.toString(UDTRecordImpl.java:141)
```
|
defect
|
regression in abstractparam throwing stackoverflowerror when calling udtrecord tostring i get a stack overflow when attempting to insert a record that is a udt if i go back to everything runs fine again seems to be related to udtrecordimpl tostring calling dsl inline this that finally depends on itself thru abstractparam name method after some investigations here what i have found it s a regession from ticket this commit originally had the code to handle the situation here is the original code from abstractparam class java private static string name object value string paramname return paramname null paramname protect value tostring call for certain jooq types value instanceof udtrecord udtrecord value getudt getname value instanceof arrayrecord arrayrecord value getname string valueof value but this commit screwed everything java private static string name object value string paramname return paramname null paramname xx xx xxxxxxx xxxxxxx xxxxxxxxxxxxxx xxxx xxx xxxxxxx xxxx xxxxxx x xxxxx xxxxxxxxxx xxxxxxxxx x xxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxx x xxxxx xxxxxxxxxx xxxxxxxxxxx x xxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxx xx string valueof value your professional edition screening tool seems to have commented too much code also unit tests seems to have disappeared after version where are they gone has the community edition become a sub par edition here is a part of the stack trace if you need more information i will be pleased to provide you with it at java lang string valueof string java at org jooq impl abstractparam name abstractparam java at org jooq impl abstractparam abstractparam java at org jooq impl abstractparam abstractparam java at org jooq impl udtconstant udtconstant java at org jooq impl dsl val dsl java at org jooq impl dsl val dsl java at org jooq impl dsl inline dsl java at org jooq impl udtrecordimpl tostring udtrecordimpl java at java lang string valueof string java at org jooq impl abstractparam name abstractparam java at org jooq impl abstractparam abstractparam java at org jooq impl abstractparam abstractparam java at org jooq impl udtconstant udtconstant java at org jooq impl dsl val dsl java at org jooq impl dsl val dsl java at org jooq impl dsl inline dsl java at org jooq impl udtrecordimpl tostring udtrecordimpl java at java lang string valueof string java at org jooq impl abstractparam name abstractparam java at org jooq impl abstractparam abstractparam java at org jooq impl abstractparam abstractparam java at org jooq impl udtconstant udtconstant java at org jooq impl dsl val dsl java at org jooq impl dsl val dsl java at org jooq impl dsl inline dsl java at org jooq impl udtrecordimpl tostring udtrecordimpl java at java lang string valueof string java at org jooq impl abstractparam name abstractparam java at org jooq impl abstractparam abstractparam java at org jooq impl abstractparam abstractparam java at org jooq impl udtconstant udtconstant java at org jooq impl dsl val dsl java at org jooq impl dsl val dsl java at org jooq impl dsl inline dsl java at org jooq impl udtrecordimpl tostring udtrecordimpl java at java lang string valueof string java at org jooq impl abstractparam name abstractparam java at org jooq impl abstractparam abstractparam java at org jooq impl abstractparam abstractparam java at org jooq impl udtconstant udtconstant java at org jooq impl dsl val dsl java at org jooq impl dsl val dsl java at org jooq impl dsl inline dsl java at org jooq impl udtrecordimpl tostring udtrecordimpl java at java lang string valueof string java at org jooq impl 
abstractparam name abstractparam java at org jooq impl abstractparam abstractparam java at org jooq impl abstractparam abstractparam java at org jooq impl udtconstant udtconstant java at org jooq impl dsl val dsl java at org jooq impl dsl val dsl java at org jooq impl dsl inline dsl java at org jooq impl udtrecordimpl tostring udtrecordimpl java
| 1
|
8,530
| 2,611,516,490
|
IssuesEvent
|
2015-02-27 05:51:30
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Video settings migration 0.0.18->0.0.19
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
My video section in settings.ini from 0.0.18:
[video]
resolution=1920x1080
fullscreen=true
quality=5
stereo=0
And the new settings.ini, when i run alpha for the first time:
[video]
resolution=1920x1080
fullscreen=true
quality=5
stereo=0
fullscreenResolution=1400x1050
windowedWidth=1600
windowedHeight=900
What is the expected output? What do you see instead?
Fullscreen runs in bad resolution (1400x1050) after migration to new version. I
expect the same fullscreen size as i had set up in old version (1920x1080).
What version of the product are you using? On what operating system?
Arch linux 64bit.
0.0.18 stable and 0.0.19 from hg 8687:4cc2b2cd4184
Please provide any additional information below.
It can be simply set again in configuration window. It just would be better to
have this value not broken on version migration.
```
Original issue reported on code.google.com by `ben...@unit22.org` on 12 Mar 2013 at 4:59
|
1.0
|
Video settings migration 0.0.18->0.0.19 - ```
What steps will reproduce the problem?
My video section in settings.ini from 0.0.18:
[video]
resolution=1920x1080
fullscreen=true
quality=5
stereo=0
And the new settings.ini, when i run alpha for the first time:
[video]
resolution=1920x1080
fullscreen=true
quality=5
stereo=0
fullscreenResolution=1400x1050
windowedWidth=1600
windowedHeight=900
What is the expected output? What do you see instead?
Fullscreen runs in bad resolution (1400x1050) after migration to new version. I
expect the same fullscreen size as i had set up in old version (1920x1080).
What version of the product are you using? On what operating system?
Arch linux 64bit.
0.0.18 stable and 0.0.19 from hg 8687:4cc2b2cd4184
Please provide any additional information below.
It can be simply set again in configuration window. It just would be better to
have this value not broken on version migration.
```
Original issue reported on code.google.com by `ben...@unit22.org` on 12 Mar 2013 at 4:59
|
defect
|
video settings migration what steps will reproduce the problem my video section in settings ini from resolution fullscreen true quality stereo and the new settings ini when i run alpha for the first time resolution fullscreen true quality stereo fullscreenresolution windowedwidth windowedheight what is the expected output what do you see instead fullscreen runs in bad resolution after migration to new version i expect the same fullscreen size as i had set up in old version what version of the product are you using on what operating system arch linux stable and from hg please provide any additional information below it can be simply set again in configuration window it just would be better to have this value not broken on version migration original issue reported on code google com by ben org on mar at
| 1
|
26,949
| 4,839,659,726
|
IssuesEvent
|
2016-11-09 10:15:48
|
google/google-authenticator-libpam
|
https://api.github.com/repos/google/google-authenticator-libpam
|
opened
|
Plans to support SHA256?
|
blackberry bug iphone libpam Priority-Medium Type-Defect
|
_From @ThomasHabets on October 10, 2014 8:7_
Original [issue 393](https://code.google.com/p/google-authenticator/issues/detail?id=393) created by synikal on 2014-06-19T07:29:02.000Z:
Hi all,
Just wondering if there were plans to support SHA256? I'd really like to use existing tokens with this module and I _think_ that's the prerequisite.
_Copied from original issue: google/google-authenticator#392_
|
1.0
|
Plans to support SHA256? - _From @ThomasHabets on October 10, 2014 8:7_
Original [issue 393](https://code.google.com/p/google-authenticator/issues/detail?id=393) created by synikal on 2014-06-19T07:29:02.000Z:
Hi all,
Just wondering if there were plans to support SHA256? I'd really like to use existing tokens with this module and I _think_ that's the prerequisite.
_Copied from original issue: google/google-authenticator#392_
|
defect
|
plans to support from thomashabets on october original created by synikal on hi all just wondering if there were plans to support i d really like to use existing tokens with this module and i think thats the prerequisite copied from original issue google google authenticator
| 1
|
311,032
| 26,762,482,560
|
IssuesEvent
|
2023-01-31 08:13:25
|
saleor/saleor-dashboard
|
https://api.github.com/repos/saleor/saleor-dashboard
|
closed
|
Cypress test fail: should be able to navigate through shop as a staff member using page permission. TC: SALEOR_3408
|
tests
|
**Known bug for versions:**
v37: false
**Additional Info:**
Spec: As a staff user I want to navigate through shop using different permissions
|
1.0
|
Cypress test fail: should be able to navigate through shop as a staff member using page permission. TC: SALEOR_3408 - **Known bug for versions:**
v37: false
**Additional Info:**
Spec: As a staff user I want to navigate through shop using different permissions
|
non_defect
|
cypress test fail should be able to navigate through shop as a staff member using page permission tc saleor known bug for versions false additional info spec as a staff user i want to navigate through shop using different permissions
| 0
|
35,343
| 7,704,297,901
|
IssuesEvent
|
2018-05-21 11:43:01
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
Adding data structure config at runtime fails when same config exists as static config
|
Module: Config Team: Core Type: Defect
|
When adding a dynamic data structure configuration, we fail fast when the same structure is already configured statically, even when both configs are equal.
The expected outcome would be to silently ignore the submitted dynamic config when it is equal to an existing static config, or fail with a `ConfigurationException` when a conflicting static config already exists. The same behavior (ignore when equal, fail on conflict) is already in place when checking a submitted config against existing dynamic configs.
Kudos @jerrinot for raising this issue
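A minimal sketch of the scenario and the expected behaviour (assuming the Hazelcast 3.9+ dynamic configuration API; the map name "orders" is made up):
```java
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class DynamicConfigSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        // A config that may be equal to one already declared statically:
        MapConfig mapConfig = new MapConfig("orders").setBackupCount(2);
        // Expected per the issue: silently ignored when an equal static
        // config exists; ConfigurationException only on a conflicting one.
        hz.getConfig().addMapConfig(mapConfig);
    }
}
```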
|
1.0
|
Adding data structure config at runtime fails when same config exists as static config - When adding a dynamic data structure configuration, we fail fast when the same structure is already configured statically, even when both configs are equal.
The expected outcome would be to silently ignore the submitted dynamic config when it is equal to an existing static config, or fail with a `ConfigurationException` when a conflicting static config already exists. The same behavior (ignore when equal, fail on conflict) is already in place when checking a submitted config against existing dynamic configs.
Kudos @jerrinot for raising this issue
|
defect
|
adding data structure config at runtime fails when same config exists as static config when adding a dynamic data structure configuration then we fail fast when the same structure is already configured statically even when both configs are equal the expected outcome would be to silently ignore the submitted dynamic config when it is equal to an existing static config or fail with a configurationexception when a conflicting static config already exists the same behavior ignore when equal fail on conflict is already in place when checking a submitted config against existing dynamic configs kudos jerrinot for raising this issue
| 1
|
1,371
| 2,603,842,152
|
IssuesEvent
|
2015-02-24 18:14:57
|
chrsmith/nishazi6
|
https://api.github.com/repos/chrsmith/nishazi6
|
opened
|
Shenyang: bumps on the glans
|
auto-migrated Priority-Medium Type-Defect
|
```
Shenyang: bumps on the glans. Shenyang Military Region Political Department Hospital, sexually transmitted diseases. TEL: 024-3102 3308.
Founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted diseases.
Located at No. 32 Erwei Road, Shenhe District, Shenyang. A hospital established together with New China and sharing its glory,
with a long history, excellent equipment, authoritative techniques and a gathering of experts; a comprehensive hospital
integrating prevention, health care, medical treatment, scientific research and rehabilitation. It is among the first batch of
state-run grade-A military hospitals and the first batch of nationally designated units for standardized medical care, and is a
teaching hospital of the Fourth Military Medical University, Northeastern University and other well-known institutions of
higher learning. It was rated an advanced unit in health work by the Health Department of the Air Force Logistics Department
of the Chinese People's Liberation Army, and has twice earned a collective second-class merit.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:25
|
1.0
|
Shenyang: bumps on the glans - ```
Shenyang: bumps on the glans. Shenyang Military Region Political Department Hospital, sexually transmitted diseases. TEL: 024-3102 3308.
Founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted diseases.
Located at No. 32 Erwei Road, Shenhe District, Shenyang. A hospital established together with New China and sharing its glory,
with a long history, excellent equipment, authoritative techniques and a gathering of experts; a comprehensive hospital
integrating prevention, health care, medical treatment, scientific research and rehabilitation. It is among the first batch of
state-run grade-A military hospitals and the first batch of nationally designated units for standardized medical care, and is a
teaching hospital of the Fourth Military Medical University, Northeastern University and other well-known institutions of
higher learning. It was rated an advanced unit in health work by the Health Department of the Air Force Logistics Department
of the Chinese People's Liberation Army, and has twice earned a collective second-class merit.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:25
|
defect
|
shenyang bumps on the glans shenyang bumps on the glans shenyang military region political department hospital sexually transmitted diseases tel founded in with years devoted to the research and treatment of sexually transmitted diseases located at no erwei road shenhe district shenyang a hospital established together with new china and sharing its glory with a long history excellent equipment authoritative techniques and a gathering of experts a comprehensive hospital integrating prevention health care medical treatment scientific research and rehabilitation it is among the first batch of state run grade a military hospitals and the first batch of nationally designated units for standardized medical care and is a teaching hospital of the fourth military medical university northeastern university and other well known institutions of higher learning it was rated an advanced unit in health work by the health department of the air force logistics department of the chinese people s liberation army and has twice earned a collective second class merit original issue reported on code google com by gmail com on jun at
| 1
|
59,789
| 17,023,246,581
|
IssuesEvent
|
2021-07-03 01:02:22
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
search is slow with short words or commonly used words
|
Component: namefinder Priority: major Resolution: wontfix Type: defect
|
**[Submitted to the original trac issue database at 1.38pm, Thursday, 8th May 2008]**
Searching "la prairie" takes forever, searching "prairie" is a matter of seconds.
I was told that any search that involves a common word like "la" will be slow as there are a lot of places with that in their name to consider.
Shouldn't osm just discard it like google does, especially since it's short? Or if this can't be worked around this way, maybe add additional search fields to specify the country+region? that should surely make performance (and result relevance) better?
In any case, the search performance when including short words like "la" is horrible, and there is no way to tell namefinder to stop the search and refine it.
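To illustrate the "discard it like Google does" suggestion, here is a hypothetical pre-filter; this is not namefinder's actual code, and the stopword set is invented:
```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

class QueryStopwords {
    // Invented example set; a real one would be per-language.
    private static final Set<String> STOPWORDS = Set.of("la", "le", "de", "the");

    // "la prairie" -> ["prairie"]: search only the significant terms.
    static List<String> significantTerms(String query) {
        return Arrays.stream(query.toLowerCase().split("\\s+"))
                     .filter(t -> !STOPWORDS.contains(t))
                     .collect(Collectors.toList());
    }
}
```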
|
1.0
|
search is slow with short words or commonly used words - **[Submitted to the original trac issue database at 1.38pm, Thursday, 8th May 2008]**
Searching "la prairie" takes forever, searching "prairie" is a matter of seconds.
I was told that any search that involves a common word like "la" will be slow as there are a lot of places with that in their name to consider.
Shouldn't osm just discard it like google does, especially since it's short? Or if this can't be worked around this way, maybe add additional search fields to specify the country+region? that should surely make performance (and result relevance) better?
In any case, the search performance when including short words like "la" is horrible, and there is no way to tell namefinder to stop the search and refine it.
|
defect
|
search is slow with short words or commonly used words searching la prairie takes forever searching prairie is a matter of seconds i was told that any search that involves a common word like la will be slow as there are a lot of places with that in their name to consider shouldn t osm just discard it like google does especially since it s short or if this can t be worked around this way maybe add additional search fields to specify the country region that should surely make performance and result relevance better in any case the search performance when including short words like la is horrible and there is no way to tell namefinder to stop the search and refine it
| 1
|
20,867
| 3,643,581,159
|
IssuesEvent
|
2016-02-15 03:00:59
|
deeheber/code-blog
|
https://api.github.com/repos/deeheber/code-blog
|
closed
|
Fix Category Filters
|
design logic MVP
|
- [x] Make sure all categories show in the dropdwn (mobile) and sidebar (desktop)
- [x] Verify the correct data displays based off of which category is selected
- [x] Make sure the correct URL displays in the browser menu
- [x] Make sure the active link is highlighted (sidebar only)
|
1.0
|
Fix Category Filters - - [x] Make sure all categories show in the dropdwn (mobile) and sidebar (desktop)
- [x] Verify the correct data displays based off of which category is selected
- [x] Make sure the correct URL displays in the browser menu
- [x] Make sure the active link is highlighted (sidebar only)
|
non_defect
|
fix category filters make sure all categories show in the dropdwn mobile and sidebar desktop verify the correct data displays based off of which category is selected make sure the correct url displays in the browser menu make sure the active link is highlighted sidebar only
| 0
|
52,507
| 6,259,685,590
|
IssuesEvent
|
2017-07-14 18:39:31
|
mathjax/MathJax
|
https://api.github.com/repos/mathjax/MathJax
|
closed
|
[CommonHTML] AMScd error
|
Accepted Merged Test Available
|
Hello,
In a previous version of MathJax (which I do not remember), the following AMScd diagram was rendered correctly:
```
\begin{CD}
A @>a>b> B \\
@VlVrV @AlArA \\
C @<a<b< D
\end{CD}
```
but it is not the case in the current version.
Could you please check?
Thank you in advance.
|
1.0
|
[CommonHTML] AMScd error - Hello,
In a previous version of MathJax (which I do not remember), the following AMScd diagram was rendered correctly:
```
\begin{CD}
A @>a>b> B \\
@VlVrV @AlArA \\
C @<a<b< D
\end{CD}
```
but it is not the case in the current version.
Could you please check?
Thank you in advance.
|
non_defect
|
amscd error hello in a previous version of mathjax which i do not remember the following amscd diagram was rendered correctly begin cd a a b b vlvrv alara c a b d end cd but it is not the case in the current version could you please check thank you in advance
| 0
|
55,865
| 14,713,841,064
|
IssuesEvent
|
2021-01-05 10:59:28
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
p-treeTable with VirtualScroll only header is resizing
|
defect
|
If you have a PrimeNG PRO Support subscription please post your issue at;
https://pro.primefaces.org
where our team will respond within 4 business hours.
If you do not have a PrimeNG PRO Support subscription, fill-in the report below. Please note that
your issue will be added to the waiting list of community issues and will be reviewed on a first-come first-serve basis, as a result, the support team is unable to guarantee a specific schedule on when it will be reviewed. Thank you for your understanding.
Current Queue Time for Review
Without PRO Support: ~8-12 weeks.
With PRO Support: 1 hour
**I'm submitting a ...** (check one with "x")
```
[ x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
Please demonstrate your case at StackBlitz by using the issue template below. Issues without a test case are much less likely to be reviewed in detail and assisted.
https://stackblitz.com/edit/primeng-treetablescroll-demo-hatiny?file=src/app/app.component.html
**Current behavior**
Columns resize in tree table with virtualScroll=[true] not working
**Expected behavior**
columns resize should work
**Minimal reproduction of the problem with instructions**
Scroll down to the _Virtual Scroll with 100000 Nodes_ section
Then, try to resize any column; the vertical blue line stays
**What is the motivation / use case for changing the behavior?**
<!-- Describe the motivation or the concrete use case -->
**Please tell us about your environment:**
<!-- Operating system, IDE, package manager, HTTP server, ... -->
* **Angular version:** 5.X
<!-- Check whether this is still an issue in the most recent Angular version -->
* **PrimeNG version:** 5.X
<!-- Check whether this is still an issue in the most recent Angular version -->
* **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
<!-- All browsers where this could be reproduced -->
* **Language:** [all | TypeScript X.X | ES6/7 | ES5]
* **Node (for AoT issues):** `node --version` =
|
1.0
|
p-treeTable with VirtualScroll only header is resizing - If you have a PrimeNG PRO Support subscription please post your issue at;
https://pro.primefaces.org
where our team will respond within 4 business hours.
If you do not have a PrimeNG PRO Support subscription, fill-in the report below. Please note that
your issue will be added to the waiting list of community issues and will be reviewed on a first-come first-serve basis, as a result, the support team is unable to guarantee a specific schedule on when it will be reviewed. Thank you for your understanding.
Current Queue Time for Review
Without PRO Support: ~8-12 weeks.
With PRO Support: 1 hour
**I'm submitting a ...** (check one with "x")
```
[ x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
Please demonstrate your case at StackBlitz by using the issue template below. Issues without a test case are much less likely to be reviewed in detail and assisted.
https://stackblitz.com/edit/primeng-treetablescroll-demo-hatiny?file=src/app/app.component.html
**Current behavior**
Columns resize in tree table with virtualScroll=[true] not working
**Expected behavior**
columns resize should work
**Minimal reproduction of the problem with instructions**
Scroll down to the _Virtual Scroll with 100000 Nodes_ section
Then, try to resize any column; the vertical blue line stays
**What is the motivation / use case for changing the behavior?**
<!-- Describe the motivation or the concrete use case -->
**Please tell us about your environment:**
<!-- Operating system, IDE, package manager, HTTP server, ... -->
* **Angular version:** 5.X
<!-- Check whether this is still an issue in the most recent Angular version -->
* **PrimeNG version:** 5.X
<!-- Check whether this is still an issue in the most recent Angular version -->
* **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
<!-- All browsers where this could be reproduced -->
* **Language:** [all | TypeScript X.X | ES6/7 | ES5]
* **Node (for AoT issues):** `node --version` =
|
defect
|
p treetable with virtualscroll only header is resizing if you have a primeng pro support subscription please post your issue at where our team will respond within business hours if you do not have a primeng pro support subscription fill in the report below please note that your issue will be added to the waiting list of community issues and will be reviewed on a first come first serve basis as a result the support team is unable to guarantee a specific schedule on when it will be reviewed thank you for your understanding current queue time for review without pro support weeks with pro support hour i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports please demonstrate your case at stackblitz by using the issue template below issues without a test case have much less possibility to be reviewd in detail and assisted current behavior columns resize in tree table with virtualscroll not working expected behavior columns resize should work minimal reproduction of the problem with instructions scroll down to the virtual scroll with nodes section then try to resize any columns vertical blue line stays what is the motivation use case for changing the behavior please tell us about your environment angular version x primeng version x browser language node for aot issues node version
| 1
|
33,203
| 7,051,727,199
|
IssuesEvent
|
2018-01-03 13:09:13
|
maloep/romcollectionbrowser
|
https://api.github.com/repos/maloep/romcollectionbrowser
|
closed
|
Problems running Emulators on Raspberry Pi
|
auto-migrated Priority-Medium Type-Defect
|
```
I am running xbmc and romcollectionbrowser in my Raspberry Pi (using the Xbian
distro, although it seems to happen in other distros for the Raspberry Pi).
This may be caused due to the lack of X11 on that system. "Solo mode" doesn't
minimize or close xbmc, so the result is the same.
What steps will reproduce the problem?
1. Turn on the computer, enter Rom collection browser in xbmc.
2. Select any game (I tried with a few SNES games). "Launch game <game name>"
appears on the bottom left corner.
3. Nothing else happens. I can still browse and "launch" games in
Romcollectionbrowser. Sometimes I can hear the music of the game on the
background, but I cannot see it.
What is the expected output? What do you see instead?
The game should play at fullscreen. Instead, I find myself in the plugin,
browsing games.
What version of the product are you using? On what operating system?
XBMC version: 12 Alpha 7
Romcollectionbrowser version: 1.0.8
Operating system: Xbian Alpha 2
```
Original issue reported on code.google.com by `marcpal...@gmail.com` on 25 Nov 2012 at 12:04
|
1.0
|
Problems running Emulators on Raspberry Pi - ```
I am running xbmc and romcollectionbrowser in my Raspberry Pi (using the Xbian
distro, although it seems to happen in other distros for the Raspberry Pi).
This may be caused due to the lack of X11 on that system. "Solo mode" doesn't
minimize or close xbmc, so the result is the same.
What steps will reproduce the problem?
1. Turn on the computer, enter Rom collection browser in xbmc.
2. Select any game (I tried with a few SNES games). "Launch game <game name>"
appears on the bottom left corner.
3. Nothing else happens. I can still browse and "launch" games in
Romcollectionbrowser. Sometimes I can hear the music of the game on the
background, but I cannot see it.
What is the expected output? What do you see instead?
The game should play at fullscreen. Instead, I find myself in the plugin,
browsing games.
What version of the product are you using? On what operating system?
XBMC version: 12 Alpha 7
Romcollectionbrowser version: 1.0.8
Operating system: Xbian Alpha 2
```
Original issue reported on code.google.com by `marcpal...@gmail.com` on 25 Nov 2012 at 12:04
|
defect
|
problems running emulators on raspberry pi i am running xbmc and romcollectionbrowser in my raspberry pi using the xbian distro although it seems to happen in other distros for the raspberry pi this may be caused due to the lack of on that system solo mode doesn t minimize or close xbmc so the result is the same what steps will reproduce the problem turn on the computer enter rom collection browser in xbmc select any game i tried with a few snes games launch game appears on the bottom left corner nothing else happens i can still browse and launch games in romcollectionbrowser sometimes i can hear the music of the game on the background but i cannot see it what is the expected output what do you see instead the game should play at fullscreen instead i find myself in the plugin browsing games what version of the product are you using on what operating system xbmc version alpha romcollectionbrowser version operating system xbian alpha original issue reported on code google com by marcpal gmail com on nov at
| 1
|
57,578
| 15,866,306,230
|
IssuesEvent
|
2021-04-08 15:36:32
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
Oracle: SQLException "Invalid Name Pattern" on Bind of procedure with IN parameter with type table of varchar
|
T: Defect
|
### Expected behavior
Hoping the problem is not just sitting in front of the laptop:
I expected the call to the following procedure to succeed:
```
create or replace PACKAGE VEK_Generator IS
...
PROCEDURE freigeben(
...
, p_nachbearbeitenListe IN OUT NOCOPY ver_ase_gen.t_plsNacharbeitungsListe
, ...
);
END VEK_Generator;
```
The type in the package ver_ase_gen is defined as follows:
```
create or replace PACKAGE ver_ase_gen IS
...
TYPE t_plsNacharbeitungsListe IS TABLE OF plausibilitaet_meldungen.plm_fehlertext%TYPE INDEX BY VARCHAR2(9);
...
END ver_ase_gen;
```
The column `plm_fehlertext` of table `plausibilitaet_meldungen` has type `VARCHAR2(240 BYTE)`.
The schema of those packages is ICIS.
The generated JOOQ classes look fine enough to me:
```
/**
* This class is generated by jOOQ.
*/
@SuppressWarnings({ "all", "unchecked", "rawtypes" })
public class Freigeben extends AbstractRoutine<java.lang.Void> {
...
public static final Parameter<TPlsnacharbeitungslisteRecord> P_NACHBEARBEITENLISTE = Internal.createParameter("P_NACHBEARBEITENLISTE", SQLDataType.VARCHAR(240).asArrayDataType(de.cosmosdirekt.jooq.icis.packages.ver_ase_gen.udt.records.TPlsnacharbeitungslisteRecord.class), false, false);
...
}
/**
* This class is generated by jOOQ.
*/
@SuppressWarnings({ "all", "unchecked", "rawtypes" })
public class TPlsnacharbeitungslisteRecord extends ArrayRecordImpl<String> {
...
/**
* Create a new <code>ICIS.VER_ASE_GEN.T_PLSNACHARBEITUNGSLISTE</code> record
*/
public TPlsnacharbeitungslisteRecord() {
super(Icis.ICIS, VerAseGen.VER_ASE_GEN, "T_PLSNACHARBEITUNGSLISTE", SQLDataType.VARCHAR(240));
}
...
}
```
### Actual behavior
Calling the procedure via
```
Freigeben freigebenRoutine = new Freigeben();
...
freigebenRoutine.setPNachbearbeitenliste(new TPlsnacharbeitungslisteRecord());
...
freigebenRoutine.execute(dslContext.configuration());
```
isn't successful and yields the following exception
[Exception_on_execute_call.txt](https://github.com/jOOQ/jOOQ/files/6279712/Exception_on_execute_call.txt)
(The part `Caused by: java.sql.SQLException: Ungültiges Namensmuster: ICIS.VER_ASE_GEN.T_PLSNACHARBEITUNGSLISTE` seems relevant to me)
### Steps to reproduce the problem
- If the problem relates to code generation, please post your code generation configuration
--> Not sure if the problem relates to code generation, nevertheless here's the (relevant) config:
```
<configuration>
<generator>
<database>
<includeInvisibleColumns>false</includeInvisibleColumns>
<includes>
<!--Tabellen-->
| ICIS.VERTRAGSVERSIONEN
...
| ICIS.PLAUSIBILITAET_MELDUNGEN
<!--Packages-->
| ICIS.VEK_GENERATOR.*
...
| ICIS.VER_ASE_GEN.*
</includes>
<schemata>
<schema>
<inputSchema>ICIS</inputSchema>
</schema>
</schemata>
</database>
</generator>
</configuration>
```
Hope this is sufficient for a first look into it; if you need more detailed documentation, please let me know ...
Thx a lot for your help!
Best,
Philipp
### Versions
- jOOQ: 3.14.8 trial edition
- Java: 11
- Database (include vendor): Oracle
- OS: Win 10 (on my developer laptop, RHEL in production)
- JDBC Driver (include name if inofficial driver): ojdbc8-19.9.0.0.jar
|
1.0
|
Oracle: SQLException "Invalid Name Pattern" on Bind of procedure with IN parameter with type table of varchar - ### Expected behavior
Hoping the problem is not just sitting in front of the laptop:
I expected the call to the following procedure to succeed:
```
create or replace PACKAGE VEK_Generator IS
...
PROCEDURE freigeben(
...
, p_nachbearbeitenListe IN OUT NOCOPY ver_ase_gen.t_plsNacharbeitungsListe
, ...
);
END VEK_Generator;
```
The type in the package ver_ase_gen is defined as follows:
```
create or replace PACKAGE ver_ase_gen IS
...
TYPE t_plsNacharbeitungsListe IS TABLE OF plausibilitaet_meldungen.plm_fehlertext%TYPE INDEX BY VARCHAR2(9);
...
END ver_ase_gen;
```
The column `plm_fehlertext` of table `plausibilitaet_meldungen` has type `VARCHAR2(240 BYTE)`.
The schema of those packages is ICIS.
The generated JOOQ classes look fine enough to me:
```
/**
* This class is generated by jOOQ.
*/
@SuppressWarnings({ "all", "unchecked", "rawtypes" })
public class Freigeben extends AbstractRoutine<java.lang.Void> {
...
public static final Parameter<TPlsnacharbeitungslisteRecord> P_NACHBEARBEITENLISTE = Internal.createParameter("P_NACHBEARBEITENLISTE", SQLDataType.VARCHAR(240).asArrayDataType(de.cosmosdirekt.jooq.icis.packages.ver_ase_gen.udt.records.TPlsnacharbeitungslisteRecord.class), false, false);
...
}
/**
* This class is generated by jOOQ.
*/
@SuppressWarnings({ "all", "unchecked", "rawtypes" })
public class TPlsnacharbeitungslisteRecord extends ArrayRecordImpl<String> {
...
/**
* Create a new <code>ICIS.VER_ASE_GEN.T_PLSNACHARBEITUNGSLISTE</code> record
*/
public TPlsnacharbeitungslisteRecord() {
super(Icis.ICIS, VerAseGen.VER_ASE_GEN, "T_PLSNACHARBEITUNGSLISTE", SQLDataType.VARCHAR(240));
}
...
}
```
### Actual behavior
Calling the procedure via
```
Freigeben freigebenRoutine = new Freigeben();
...
freigebenRoutine.setPNachbearbeitenliste(new TPlsnacharbeitungslisteRecord());
...
freigebenRoutine.execute(dslContext.configuration());
```
isn't successful and yields the following exception
[Exception_on_execute_call.txt](https://github.com/jOOQ/jOOQ/files/6279712/Exception_on_execute_call.txt)
(The part `Caused by: java.sql.SQLException: Ungültiges Namensmuster: ICIS.VER_ASE_GEN.T_PLSNACHARBEITUNGSLISTE` seems relevant to me)
### Steps to reproduce the problem
- If the problem relates to code generation, please post your code generation configuration
--> Not sure if the problem relates to code generation, nevertheless here's the (relevant) config:
```
<configuration>
<generator>
<database>
<includeInvisibleColumns>false</includeInvisibleColumns>
<includes>
<!--Tabellen-->
| ICIS.VERTRAGSVERSIONEN
...
| ICIS.PLAUSIBILITAET_MELDUNGEN
<!--Packages-->
| ICIS.VEK_GENERATOR.*
...
| ICIS.VER_ASE_GEN.*
</includes>
<schemata>
<schema>
<inputSchema>ICIS</inputSchema>
</schema>
</schemata>
</database>
</generator>
</configuration>
```
Hope this is sufficient for a first look into it; if you need more detailed documentation, please let me know ...
Thx a lot for your help!
Best,
Philipp
### Versions
- jOOQ: 3.14.8 trial edition
- Java: 11
- Database (include vendor): Oracle
- OS: Win 10 (on my developer laptop, RHEL in production)
- JDBC Driver (include name if inofficial driver): ojdbc8-19.9.0.0.jar
|
defect
|
oracle sqlexception invalid name pattern on bind of procedure with in parameter with type table of varchar expected behavior hoping the problem does not sit before its laptop i expected the call to the following procedure would succeed create or replace package vek generator is procedure freigeben p nachbearbeitenliste in out nocopy ver ase gen t plsnacharbeitungsliste end vek generator the type in the package ver ase gen is defined as follows create or replace package ver ase gen is type t plsnacharbeitungsliste is table of plausibilitaet meldungen plm fehlertext type index by end ver ase gen the column plm fehlertext of table plausibilitaet meldungen has type byte the schema of those packages is icis the generated jooq classes look fine enough to me this class is generated by jooq suppresswarnings all unchecked rawtypes public class freigeben extends abstractroutine public static final parameter p nachbearbeitenliste internal createparameter p nachbearbeitenliste sqldatatype varchar asarraydatatype de cosmosdirekt jooq icis packages ver ase gen udt records tplsnacharbeitungslisterecord class false false this class is generated by jooq suppresswarnings all unchecked rawtypes public class tplsnacharbeitungslisterecord extends arrayrecordimpl create a new icis ver ase gen t plsnacharbeitungsliste record public tplsnacharbeitungslisterecord super icis icis verasegen ver ase gen t plsnacharbeitungsliste sqldatatype varchar actual behavior calling the procedure via freigeben freigebenroutine new freigeben freigebenroutine setpnachbearbeitenliste new tplsnacharbeitungslisterecord freigebenroutine execute dslcontext configuration isn t successful and yields the follwing exception the part caused by java sql sqlexception ungültiges namensmuster icis ver ase gen t plsnacharbeitungsliste seems relevant to me steps to reproduce the problem if the problem relates to code generation please post your code generation configuration not sure if the problem relates to code generation nevertheless here s the relevant config false icis vertragsversionen icis plausibilitaet meldungen icis vek generator icis ver ase gen icis hope this is sufficient for a first look into it if you need more sophisticated documentation please let me know thx a lot for your help best philipp versions jooq trial edition java database include vendor oracle os win on my developer laptop rhel in production jdbc driver include name if inofficial driver jar
| 1
|
874
| 4,525,182,712
|
IssuesEvent
|
2016-09-07 03:12:59
|
NTiDECOM/ejepb-web
|
https://api.github.com/repos/NTiDECOM/ejepb-web
|
closed
|
Materialize Sass
|
architecture setup ui
|
Integrate [materialize-sass](https://github.com/mkhairi/materialize-sass) into the project
**Is possible to find more information about the framework and docs on [materializecss.com](http://materializecss.com)**
|
1.0
|
Materialize Sass - Integrate [materialize-sass](https://github.com/mkhairi/materialize-sass) into the project
**Is possible to find more information about the framework and docs on [materializecss.com](http://materializecss.com)**
|
non_defect
|
materialize sass integrate to the project is possible to find more information about the framework and docs on
| 0
|
20,227
| 13,767,098,091
|
IssuesEvent
|
2020-10-07 15:19:37
|
BCDevOps/developer-experience
|
https://api.github.com/repos/BCDevOps/developer-experience
|
closed
|
Registry PVC hard prune failed due to file permissions
|
Infrastructure
|
## Problem Description
Registry hard pruning (which puts the registry pods into read-only mode) runs once every three months. The latest hard prune run failed on the morning of October 1st, because the pruner reported that a file it needed could not be accessed (permission denied). The PVC in question resides on gluster-file storage.
Definition of Done:
- [ ] Determine root cause of the issue, opening vendor cases as needed.
- [ ] If deemed necessary, reschedule a manual re-run of the hard prune cron outside of business hours and actively monitor.
|
1.0
|
Registry PVC hard prune failed due to file permissions - ## Problem Description
Registry hard pruning (which puts the registry pods into read-only mode) runs once every three months. The latest hard prune run failed on the morning of October 1st, because the pruner reported that a file it needed could not be accessed (permission denied). The PVC in question resides on gluster-file storage.
Definition of Done:
- [ ] Determine root cause of the issue, opening vendor cases as needed.
- [ ] If deemed necessary, reschedule a manual re-run of the hard prune cron outside of business hours and actively monitor.
|
non_defect
|
registry pvc hard prune failed due to file permissions problem description registry hard pruning which puts the registry pods into read only mode runs once every three months the latest hard prune run failed the morning of october due to the pruner claiming a file it needed to access was unabled to be access permission denied the pvc in question resides on gluster file storage definition of done determine root cause of the issue opening vendor cases as needed if deemed necessary reschedule a manual re run of the hard prune cron outside of business hours and actively monitor
| 0
|
102,538
| 22,035,596,385
|
IssuesEvent
|
2022-05-28 14:23:22
|
etczrn/flutter_projects
|
https://api.github.com/repos/etczrn/flutter_projects
|
opened
|
Project Setup
|
freeCodeCamp.org
|
## Set project up from Terminal
Use this:
```
flutter create --org xxx.domain appname
```
example
```
flutter create --org io.createnewprojectwithflutter mynotes
```
|
1.0
|
Project Setup - ## Set project up from Terminal
Use this:
```
flutter create --org xxx.domain appname
```
example
```
flutter create --org io.createnewprojectwithflutter mynotes
```
|
non_defect
|
project setup set project up from terminal use this flutter create org xxx domain appname example flutter create org io createnewprojectwithflutter mynotes
| 0
|
66,550
| 20,270,606,480
|
IssuesEvent
|
2022-02-15 15:52:34
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
BUG: Tippett’s and Pearson’s methods for combine_pvalues are not monotonic
|
defect scipy.stats query
|
### Describe your issue.
Something is wrong with the methods `pearson` and `tippett` for `scipy.stats.combine_pvalues`. They should be monotonically increasing with all components of the input, yet they are monotonically decreasing. At first glance, the output seems to be 1 − _p_ when it should be _p._ It’s also straightforward to compute that Tippett’s method for an input of (½,½) should yield _p_ = ¾ but it does not.
@rlucas7 IIUC, you should know best about this.
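For reference, assuming the usual definition of Tippett's method via the minimum order statistic, the ¾ figure follows directly:
```
\[
p_{\mathrm{comb}} = 1 - \bigl(1 - \min_i p_i\bigr)^k,
\qquad
k = 2,\; p = (\tfrac{1}{2}, \tfrac{1}{2})
\;\Longrightarrow\;
p_{\mathrm{comb}} = 1 - \bigl(\tfrac{1}{2}\bigr)^2 = \tfrac{3}{4}.
\]
```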
### Reproducing Code Example
```python
import numpy as np
from scipy.stats import combine_pvalues
n = 5
ps = np.linspace(0.1,0.9,5)
for method in ("pearson","tippett"):
combined_ps = [
combine_pvalues( np.full(n,p), method )[1]
for p in ps
]
assert np.all( np.diff(combined_ps) >= 0 )
assert combine_pvalues([0.5,0.5],method="tippett")[1] == 0.75
```
### Error message
All assertions fail.
### SciPy/NumPy/Python version information
1.6.3 1.19.5 sys.version_info(major=3, minor=9, micro=7, releaselevel='final', serial=0)
|
1.0
|
BUG: Tippett’s and Pearson’s methods for combine_pvalues are not monotonic - ### Describe your issue.
Something is wrong with the methods `pearson` and `tippett` for `scipy.stats.combine_pvalues`. They should be monotonically increasing with all components of the input, yet they are monotonically decreasing. At first glance, the output seems to be 1 − _p_ when it should be _p._ It’s also straightforward to compute that Tippett’s method for an input of (½,½) should yield _p_ = ¾ but it does not.
@rlucas7 IIUC, you should know best about this.
### Reproducing Code Example
```python
import numpy as np
from scipy.stats import combine_pvalues
n = 5
ps = np.linspace(0.1,0.9,5)
for method in ("pearson","tippett"):
combined_ps = [
combine_pvalues( np.full(n,p), method )[1]
for p in ps
]
assert np.all( np.diff(combined_ps) >= 0 )
assert combine_pvalues([0.5,0.5],method="tippett")[1] == 0.75
```
### Error message
All assertions fail.
### SciPy/NumPy/Python version information
1.6.3 1.19.5 sys.version_info(major=3, minor=9, micro=7, releaselevel='final', serial=0)
|
defect
|
bug tippett’s and pearson’s method for combine pvalues are not monotonous describe your issue something is wrong with the methods pearson and tippett for scipy stats combine pvalues they should be monotonically increasing with all components of the input yet they are monotonically decreasing at first glance the output seems to be − p when it should be p it’s also straightforward to compute that tippett’s method for an input of ½ ½ should yield p ¾ but it does not iiuc you should know best about this reproducing code example python import numpy as np from scipy stats import combine pvalues n ps np linspace for method in pearson tippett combined ps combine pvalues np full n p method for p in ps assert np all np diff combined ps assert combine pvalues method tippett error message all assertions fail scipy numpy python version information sys version info major minor micro releaselevel final serial
| 1
|
67,162
| 16,826,823,664
|
IssuesEvent
|
2021-06-17 19:48:49
|
TheLegendOfMataNui/game-issues
|
https://api.github.com/repos/TheLegendOfMataNui/game-issues
|
closed
|
SPCV's entry crash from CLF2's side
|
Build: "Beta" 10-23-01 Crash Level-1 Onua Resolved
|
After a restart I spawn normally in SPCV. After leaving and re-entering, it keeps crashing, and so on. Like #68, I guess?
Beta, after saving a village.
And about the labels on the right side: I just can't see this option. [Maybe because I don't have any permissions](https://github.com/KhronosGroup/Vulkan-Docs/issues/15), idk.
|
1.0
|
SPCV's entry crash from CLF2's side - After a restart I spawn normally in SPCV. After leaving and re-entering, it keeps crashing, and so on. Like #68, I guess?
Beta, after saving a village.
And about the labels on the right side: I just can't see this option. [Maybe because I don't have any permissions](https://github.com/KhronosGroup/Vulkan-Docs/issues/15), idk.
|
non_defect
|
spcv s entry crash from s side after restart i spawn normally in spcv after leaving and re entering it keeps crashing and so on like i guess beta after saving a village and about labels on the right side i just can t see this option idk
| 0
|
10,458
| 15,167,438,643
|
IssuesEvent
|
2021-02-12 17:47:37
|
FutureNorthants/VirtualWorker
|
https://api.github.com/repos/FutureNorthants/VirtualWorker
|
opened
|
Auto update CXM service area from UnitaryServices Lex Bot
|
missing requirement
|
Currently, if service intents change, CXM services have to be remapped manually; add a function to map these automatically, so that intent changes take effect without requiring code changes for ad hoc adjustments.
|
1.0
|
Auto update CXM service area from UnitaryServices Lex Bot - Currently, if service intents change, CXM services have to be remapped manually; add a function to map these automatically, so that intent changes take effect without requiring code changes for ad hoc adjustments.
|
non_defect
|
auto update cxm service area from unitaryservices lex bot currently if service intents change cxm services have to be remapped manually to add in function to map these automatically so intent changes update without requiring coding changes for adhoc adjustments
| 0
|
5,997
| 2,610,219,183
|
IssuesEvent
|
2015-02-26 19:09:40
|
chrsmith/somefinders
|
https://api.github.com/repos/chrsmith/somefinders
|
opened
|
congratulations in thieves' slang
|
auto-migrated Priority-Medium Type-Defect
|
```
'''Габриель Рябов'''
Good day, I just can't find
"congratulations in thieves' slang" anywhere; it
was posted here once already
'''воин Сысоев'''
Download it here http://bit.ly/1asXE4c
'''Варлен Чернов'''
It asks you to enter a mobile number! Isn't that dangerous?
'''Гермоген Меркушев'''
Nah, it's all fine, nothing was charged to me
'''Адам Алексеев'''
Nah, it's all fine, nothing was charged to me
File information: congratulations in
thieves' slang
Uploaded: this month
Times downloaded: 1169
Rating: 377
Average download speed: 725
Similar files: 35
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 6:15
|
1.0
|
congratulations in thieves' slang - ```
'''Габриель Рябов'''
Good day, I just can't find
"congratulations in thieves' slang" anywhere; it
was posted here once already
'''воин Сысоев'''
Download it here http://bit.ly/1asXE4c
'''Варлен Чернов'''
It asks you to enter a mobile number! Isn't that dangerous?
'''Гермоген Меркушев'''
Nah, it's all fine, nothing was charged to me
'''Адам Алексеев'''
Nah, it's all fine, nothing was charged to me
File information: congratulations in
thieves' slang
Uploaded: this month
Times downloaded: 1169
Rating: 377
Average download speed: 725
Similar files: 35
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 6:15
|
defect
|
congratulations in thieves slang габриель рябов good day i just can t find congratulations in thieves slang anywhere it was posted here once already воин сысоев download it here варлен чернов it asks you to enter a mobile number isn t that dangerous гермоген меркушев nah it s all fine nothing was charged to me адам алексеев nah it s all fine nothing was charged to me file information congratulations in thieves slang uploaded this month times downloaded rating average download speed similar files original issue reported on code google com by kondense gmail com on dec at
| 1
|
63,027
| 26,230,589,151
|
IssuesEvent
|
2023-01-04 23:34:30
|
networkupstools/nut
|
https://api.github.com/repos/networkupstools/nut
|
closed
|
Move killpower flag file to /run
|
service/daemon start/stop upsmon
|
Hi,
Shouldn't the killpower flag file be moved to /run (or /run/nut, or...) to avoid abusing /etc?
That would require an upgrade path and maybe coordination with others to fix initscripts.
|
1.0
|
Move killpower flag file to /run - Hi,
Shouldn't the killpower flag file be moved to /run (or /run/nut, or...) to avoid abusing /etc?
That would require an upgrade path and maybe coordination with others to fix initscripts.
|
non_defect
|
move killpower flag file to run hi shouldn t the killpower flag file moved to run or run nut or to avoid abusing etc that would requires an upgrade path an maybe coordination with other to fix initscripts
| 0
|
11,330
| 2,649,174,469
|
IssuesEvent
|
2015-03-14 17:12:40
|
Paradoxianer/ProjectConceptor_base
|
https://api.github.com/repos/Paradoxianer/ProjectConceptor_base
|
closed
|
nodes can belong to more than one group
|
auto-migrated duplicate Priority-Medium Type-Defect
|
_From @GoogleCodeExporter on March 14, 2015 10:34_
```
What steps will reproduce the problem?
1. Group two nodes
2. Group the group
3. Moving them around will move the first two twice as much
This should also cause trouble when nodes belong to different groups at
the same level
also on deletion.
```
Original issue reported on code.google.com by `two4...@gmail.com` on 28 Dec 2013 at 5:52
_Copied from original issue: Paradoxianer/projectconceptor#1_
|
1.0
|
nodes can belong to more than one group - _From @GoogleCodeExporter on March 14, 2015 10:34_
```
What steps will reproduce the problem?
1. Group two nodes
2. Group the group
3. Moving them around will move the first two twice as much
This should also cause trouble when nodes belong to different groups at
the same level
also on deletion.
```
Original issue reported on code.google.com by `two4...@gmail.com` on 28 Dec 2013 at 5:52
_Copied from original issue: Paradoxianer/projectconceptor#1_
|
defect
|
nodes can belong to more than one group from googlecodeexporter on march what steps will reproduce the problem group two nodes groupe the group move them around will move the first two twice at much this should also cause trouble when nodes are belonging to differnt groups at the same level also on deletion original issue reported on code google com by gmail com on dec at copied from original issue paradoxianer projectconceptor
| 1
|
42,584
| 11,152,400,197
|
IssuesEvent
|
2019-12-24 08:31:45
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
Improve client listener registration on client reconnect
|
Team: Client Type: Defect
|
Listener registration is an async process. While registration is in progress, a map.put can run in parallel, and it is possible for a client to miss the events it generates.
For instance, the client did a map.put while the connection between client and server was healthy, yet it missed the event for that put. This is unexpected.
a related test failure: https://github.com/hazelcast/hazelcast/issues/16328
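A minimal sketch of the race being described (client-side 3.x API assumed; the map and key names are made up):
```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.listener.EntryAddedListener;

public class ListenerRaceSketch {
    public static void main(String[] args) {
        HazelcastInstance client = HazelcastClient.newHazelcastClient();
        IMap<String, String> map = client.getMap("events");
        map.addEntryListener(
                (EntryAddedListener<String, String>) e ->
                        System.out.println("added: " + e.getKey()),
                true);
        // If the registration above is still propagating to the members,
        // this put can complete without the client ever seeing its event:
        map.put("k1", "v1");
    }
}
```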
|
1.0
|
Improve client listener registration on client reconnect - Listener registration is an async process. While registration is in progress, a map.put can run in parallel, and it is possible for a client to miss the events it generates.
For instance, the client did a map.put while the connection between client and server was healthy, yet it missed the event for that put. This is unexpected.
a related test failure: https://github.com/hazelcast/hazelcast/issues/16328
|
defect
|
improve client listener registration on client reconnect listener registration is an async process when registration is in progress map put can also be done in parallel and it is possible that a client can miss the events it generates for instance client did a map put and connection between client and server is healthy but it missed the event of map put this is unexpected a related test failure
| 1
|
71,560
| 23,693,352,720
|
IssuesEvent
|
2022-08-29 12:49:02
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
Calendar: Uncaught TypeError: $this._base_updateDatepicker is not a function
|
:lady_beetle: defect
|
### Describe the bug
Within `2-jquery.ui.pfextensions.js` line 156 `_base_updateDatepicker` gets invoked, but this function does not exist.
```javascript
$this._base_updateDatepicker(inst);
```
### Reproducer
_No response_
### Expected behavior
_No response_
### PrimeFaces edition
_No response_
### PrimeFaces version
12.0.0-SNAPSHOT
### Theme
_No response_
### JSF implementation
Mojarra
### JSF version
2.3.14.SP04
### Java version
1.8.0_333
### Browser(s)
_No response_
|
1.0
|
Calendar: Uncaught TypeError: $this._base_updateDatepicker is not a function - ### Describe the bug
Within `2-jquery.ui.pfextensions.js` line 156 `_base_updateDatepicker` gets invoked, but this function does not exist.
```javascript
$this._base_updateDatepicker(inst);
```
### Reproducer
_No response_
### Expected behavior
_No response_
### PrimeFaces edition
_No response_
### PrimeFaces version
12.0.0-SNAPSHOT
### Theme
_No response_
### JSF implementation
Mojarra
### JSF version
2.3.14.SP04
### Java version
1.8.0_333
### Browser(s)
_No response_
|
defect
|
calendar uncaught typeerror this base updatedatepicker is not a function describe the bug within jquery ui pfextensions js line base updatedatepicker gets invoked but this function does not exist javascript this base updatedatepicker inst reproducer no response expected behavior no response primefaces edition no response primefaces version snapshot theme no response jsf implementation mojarra jsf version java version browser s no response
| 1
|
20,548
| 3,374,389,234
|
IssuesEvent
|
2015-11-24 12:48:39
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
Nearcache Invalidation on cache element expiry
|
Team: Core Type: Defect
|
Hello
I am very new to Hazelcast and have a question in this regard. We are migrating from Ehcache to Hazelcast, and in the process I bumped into an issue(?) with Hazelcast. I looked at the Near Cache docs but couldn't find anything about its design.
I have a simple map in Hazelcast; elements are added to it at runtime, and a TTL is set on individual elements when they are added to the map.
So I configured a Near Cache for this map as shown below
```
NearCacheConfig nearCacheConfig = new NearCacheConfig();
nearCacheConfig.setMaxSize({somemaxsize})
.setMaxIdleSeconds({somemaxidleseconds});
mapConfig.setNearCacheConfig(nearCacheConfig);
```
But it seems like the Near Cache keeps stale entries forever (until the max idle time expires, I think). So even if an element expires in the actual map, the Near Cache still has that entry sitting there. But if an entry is updated in the map, the update does get propagated to the Near Cache. Is this how it is supposed to work? I was thinking expired elements would also be propagated to the Near Cache so that the element gets removed. Is there any way to resolve this, given that the TTL of entries in the Near Cache is dynamic?
Hazelcast version used is 3.4.2
Many Thanks
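Not an authoritative answer, but one thing worth checking is invalidation-on-change. A sketch against the 3.x API used in the snippet above (mapConfig as in that snippet); whether per-entry TTL expiry also triggers an invalidation in 3.4.x is exactly the open question here:
```java
NearCacheConfig nearCacheConfig = new NearCacheConfig()
        .setInvalidateOnChange(true)   // propagate remote updates/removals
        .setMaxIdleSeconds(60);        // local idle eviction as a backstop
mapConfig.setNearCacheConfig(nearCacheConfig);
```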
|
1.0
|
Nearcache Invalidation on cache element expiry - Hello
I am very new to Hazelcast and have a question in this regard. We are migrating from Ehcache to Hazelcast, and in the process I bumped into an issue(?) with Hazelcast. I looked at the Near Cache docs but couldn't find anything about its design.
I have a simple map in Hazelcast; elements are added to it at runtime, and a TTL is set on individual elements when they are added to the map.
So I configured a Near Cache for this map as shown below
```
NearCacheConfig nearCacheConfig = new NearCacheConfig();
nearCacheConfig.setMaxSize({somemaxsize})
.setMaxIdleSeconds({somemaxidleseconds});
mapConfig.setNearCacheConfig(nearCacheConfig);
```
But it seems like the Near Cache keeps stale entries forever (until the max idle time expires, I think). So even if an element expires in the actual map, the Near Cache still has that entry sitting there. But if an entry is updated in the map, the update does get propagated to the Near Cache. Is this how it is supposed to work? I was thinking expired elements would also be propagated to the Near Cache so that the element gets removed. Is there any way to resolve this, given that the TTL of entries in the Near Cache is dynamic?
Hazelcast version used is 3.4.2
Many Thanks
|
defect
|
nearcache invalidation on cache element expiry hello iam very new to hazelcast and have a question in this regard we are migrating from ehcache to hazelcast and in the process i bumped into an issue with hazelcast i looked at nearcache docs but couldnt find anything about its design i have a simple map in hazelcast and elements are added to it during runtime and ttl is set on individual elements when they are added to map so i configured nearcache for this map as shown below nearcacheconfig nearcacheconfig new nearcacheconfig nearcacheconfig setmaxsize somemaxsize setmaxidleseconds somemaxidleseconds mapconfig setnearcacheconfig nearcacheconfig but it seems like nearcache always have stale entries forever till max idle time expires i think so even if an element gets expired in actual map nearcache has that entry sitting there but if an entry is updated in map it gets propagated to nearcache so is this how its supposed to work because i was thinking expired elements will also be propagated to nearcache and that element will be removed is there anyway to resolve this issue where ttl of entries in nearcache is dynamic hazelcast version used is many thanks
| 1
|
17,392
| 3,003,912,617
|
IssuesEvent
|
2015-07-25 11:32:05
|
bk79/i2c-gps-nav
|
https://api.github.com/repos/bk79/i2c-gps-nav
|
closed
|
Brand of GPS
|
auto-migrated Priority-Medium Type-Defect
|
```
I have a test environment set up, using a 4800 baud GPS connected to one Arduino. I have
the two Arduinos' SDA, SCL and ground connected, and made sure they were talking
with each other using the IDE master/slave examples.
Problem: I am receiving tons of I2C errors, and the Wii GUI is not reporting any
information about the GPS, such as SATs found etc.
I modified the following sketch to see if the MWC was sending the request, but
it appears the MWC is not receiving an answer back, generating a massive number of
I2C bus errors.
#include <Wire.h>
void setup()
{
Serial.begin(115200); // start serial for output
Wire.begin(0x20); // join i2c bus with address #2
Wire.onRequest(requestEvent); // Set up event handlers
Wire.onReceive(receiveEvent);
}
void loop()
{
delay(100);
}
// function that executes whenever data is requested by master
// this function is registered as an event, see setup()
void requestEvent()
{
Serial.println("Request "); // respond with message of 6 bytes
// as expected by master
}
void receiveEvent(int bytesReceived)
{
int a = Wire.read();
Serial.print("Received "); // respond with message of 6 bytes
// as expected by master
Serial.print(a);
}
```
Original issue reported on code.google.com by `reverendrichie` on 14 Mar 2012 at 1:38
|
1.0
|
Brand of GPS - ```
I have a test environment set up, using a 4800 baud GPS connected to one Arduino. I have
the two Arduinos' SDA, SCL and ground connected, and made sure they were talking
with each other using the IDE master/slave examples.
Problem: I am receiving tons of I2C errors, and the Wii GUI is not reporting any
information about the GPS, such as SATs found etc.
I modified the following sketch to see if the MWC was sending the request, but
it appears the MWC is not receiving an answer back, generating a massive number of
I2C bus errors.
#include <Wire.h>
void setup()
{
Serial.begin(115200); // start serial for output
Wire.begin(0x20); // join i2c bus with address #2
Wire.onRequest(requestEvent); // Set up event handlers
Wire.onReceive(receiveEvent);
}
void loop()
{
delay(100);
}
// function that executes whenever data is requested by master
// this function is registered as an event, see setup()
void requestEvent()
{
Serial.println("Request "); // respond with message of 6 bytes
// as expected by master
}
void receiveEvent(int bytesReceived)
{
int a = Wire.read();
Serial.print("Received "); // respond with message of 6 bytes
// as expected by master
Serial.print(a);
}
```
Original issue reported on code.google.com by `reverendrichie` on 14 Mar 2012 at 1:38
|
defect
|
brand of gps i have a test environment setup using a baud gps to one arduino i have the two arduinos sda slc and ground connected and made sure they were talking with each other using the ide master slave examples problem i am receiving tons of errors and the wii gui is not reporting any information about the gps such as sat found etc i modified the following sketch to see of the mwc was sending the request but it appears the mwc is not receiving an answer back generating massive number of bus errors include void setup serial begin start serial for output wire begin join bus with address wire onrequest requestevent set up event handlers wire onreceive receiveevent void loop delay function that executes whenever data is requested by master this function is registered as an event see setup void requestevent serial println request respond with message of bytes as expected by master void receiveevent int bytesreceived int a wire read serial print received respond with message of bytes as expected by master serial print a original issue reported on code google com by reverendrichie on mar at
| 1
|
129,332
| 12,404,807,749
|
IssuesEvent
|
2020-05-21 16:09:28
|
houseofcat/RabbitMQ.Core
|
https://api.github.com/repos/houseofcat/RabbitMQ.Core
|
closed
|
[Question] Message appears stuck in a Queue. It replayed unexpectedly during outage (or long breakpoint).
|
documentation question
|
Hi, I've just tested the SimpleTest project and the method `await RunParallelExecutionEngineAsync().ConfigureAwait(false);`, and I hit a problem: the message is duplicated, and it stays in the queue until I stop and restart the application; it is then consumed and handled again, and at that point it's gone

|
1.0
|
[Question] Message appears stuck in a Queue. It replayed unexpectedly during outage (or long breakpoint). - Hi, I've just tested the SimpleTest project and the method `await RunParallelExecutionEngineAsync().ConfigureAwait(false);`, and I hit a problem: the message is duplicated, and it stays in the queue until I stop and restart the application; it is then consumed and handled again, and at that point it's gone

|
non_defect
|
message appears stuck in a queue it replayed unexpectedly during outage or long breakpoint hi i ve just tested project simpletest and method await runparallelexecutionengineasync configureawait false and i got an error the message is duplicated and the message is still in the queue forever until i stop the application and restart again then it consume and handle again at that time it s gone
| 0
|
34,991
| 7,518,822,779
|
IssuesEvent
|
2018-04-12 09:35:48
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
cluster split/heal tests lose data / structures from the cluster
|
Team: Core Type: Critical Type: Defect
|
I see random failures in the split/heal tests, https://hazelcast-l337.ci.cloudbees.com/view/split/
After some reruns the tests do pass.
Looking at
https://hazelcast-l337.ci.cloudbees.com/view/split/job/split-atomic-long/
I am looking into the logs, but I do not see why it should have lost the data.
After some split/heal cycles we see
`fail HzMember2HZAA split_validate_qA hzcmd.atomic.ilong.Size threadId=0 global.AssertionException: atomicLongsplit_atomicA303 size 0 != expected 1`
which means the atomic long `atomicLongsplit_atomicA303` was lost from the cluster.
I set longer settings for
-Dhazelcast.max.no.heartbeat.seconds=120
-Dhazelcast.max.no.master.confirmation.seconds=250
but the next run https://hazelcast-l337.ci.cloudbees.com/view/split/job/split-atomic-long/27/console
also lost an atomic long.
I don't know why it should happen that we lose structures out of the split cluster.
essentially the tests all have the same format:
start a 5 node cluster,
load some data/structs into the cluster
split the cluster into a 2 and 3 node cluster
heal the cluster into a 5 node cluster
check for the data / structs
after a few split/heal cycles we lose the data/structs.
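For concreteness, the validation step presumably boils down to something like this sketch (the structure name is taken from the failure log above; everything else is assumed):
```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;

public class SplitHealCheckSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        // Name taken from the failure log above:
        IAtomicLong counter = hz.getAtomicLong("atomicLongsplit_atomicA303");
        if (counter.get() != 1) {
            throw new AssertionError(counter.getName() + " lost after split/heal");
        }
    }
}
```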
|
1.0
|
cluster split/heal tests lose data / structures from the cluster -
I see random failures in the split/heal tests, https://hazelcast-l337.ci.cloudbees.com/view/split/
After some reruns the tests do pass.
Looking at
https://hazelcast-l337.ci.cloudbees.com/view/split/job/split-atomic-long/
I am looking into the logs, but I do not see why it should have lost the data.
After some split/heal cycles we see
`fail HzMember2HZAA split_validate_qA hzcmd.atomic.ilong.Size threadId=0 global.AssertionException: atomicLongsplit_atomicA303 size 0 != expected 1`
which means the atomic long `atomicLongsplit_atomicA303` was lost out of the cluster.
i set longer setting for
-Dhazelcast.max.no.heartbeat.seconds=120
-Dhazelcast.max.no.master.confirmation.seconds=250
but the next run https://hazelcast-l337.ci.cloudbees.com/view/split/job/split-atomic-long/27/console
also lost an atomic long.
I don't know why it should happen that we lost structures out of the split cluster ?
essentially the tests all have the same format:
start a 5 node cluster,
load some data/structs into the cluster
split the cluster into a 2 and 3 node cluster
heal the cluster into a 5 node cluster
check for the data / structs
after a few split/heal cycles we lose the data/structs.
|
defect
|
cluster split heal test s loose data structures from the cluster i see random fail in the split heal test s after some re run the test do pass looking at i am looking in to the logs but i do not see why it should have lost the data after some split heal cycle we see fail split validate qa hzcmd atomic ilong size threadid global assertionexception atomiclongsplit size expected which means the atomic long atomiclongsplit was lost out of the cluster i set longer setting for dhazelcast max no heartbeat seconds dhazelcast max no master confirmation seconds but the next run also lost an atomic long i don t know why it should happen that we lost structures out of the split cluster essentially the tests all have the same format start a node cluster load some data structs into the cluster split the cluster into a and node cluster heal the cluster into a node cluster check for the data structs after a few split heal cycles we lose the data structs
| 1
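The split-heal test shape described in the record above can be sketched in a few lines of Hazelcast 3.x-era Java. The member count is reduced and the actual network split/heal steps are elided because they are environment-specific; the structure name is taken from the failure message, everything else is a hypothetical stand-in.
```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;

public class SplitHealSketch {
    public static void main(String[] args) {
        // Start a small cluster (5 members in the original tests; 2 here for brevity).
        HazelcastInstance m1 = Hazelcast.newHazelcastInstance();
        HazelcastInstance m2 = Hazelcast.newHazelcastInstance();

        // Load phase: create the structure under test.
        IAtomicLong counter = m1.getAtomicLong("atomicLongsplit_atomicA303");
        counter.set(1);

        // ... split the cluster into 2- and 3-member partitions, then heal it ...

        // Validate phase: the structure must still be present with its value.
        long size = m2.getAtomicLong("atomicLongsplit_atomicA303").get();
        if (size != 1) {
            throw new AssertionError("size " + size + " != expected 1");
        }
        Hazelcast.shutdownAll();
    }
}
```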
|
116,848
| 15,022,115,342
|
IssuesEvent
|
2021-02-01 16:34:35
|
practice-uffs/programa
|
https://api.github.com/repos/practice-uffs/programa
|
opened
|
PRACTICE logo vectorization
|
equipe:con-design interno:produção
|
We need to vectorize the PRACTICE logo, given the need to adapt some of its elements and thus make it easier to use in different formats.
|
1.0
|
PRACTICE logo vectorization - We need to vectorize the PRACTICE logo, given the need to adapt some of its elements and thus make it easier to use in different formats.
|
non_defect
|
practice logo vectorization we need to vectorize the practice logo given the need to adapt some of its elements and thus make it easier to use in different formats
| 0
|
236,356
| 18,094,446,840
|
IssuesEvent
|
2021-09-22 07:27:38
|
Concordium/concordium.github.io
|
https://api.github.com/repos/Concordium/concordium.github.io
|
opened
|
Description of how to update a Ledger
|
documentation [Prio] High [Size] Small [Type] Task
|
**Task description**
At the moment we only have a description of how to set up a Ledger from scratch. We need some extra documentation on how to update a Ledger that is already set up.
*Add label for component and priority.*
|
1.0
|
Description of how to update a Ledger - **Task description**
At the moment we only have a description of how to set up a Ledger from scratch. We need some extra documentation on how to update a Ledger that is already set up.
*Add label for component and priority.*
|
non_defect
|
description of how to update a ledger task description at the moment we only have a description of how to set up a ledger from scratch we need some extra documentation on how to update a ledger that is already set up add label for component and priority
| 0
|
146,336
| 13,178,245,245
|
IssuesEvent
|
2020-08-12 08:49:16
|
AzureAD/microsoft-identity-web
|
https://api.github.com/repos/AzureAD/microsoft-identity-web
|
opened
|
Running locally vs deployed on Azure
|
documentation
|
### Documentation related to component
Web Api
### Please check all that apply
- [ ] typo
- [x] documentation doesn't exist
- [ ] documentation needs clarification
- [ ] error(s) in the example
- [x] needs an example
### Description of the issue
# TL;DR
In an asp.net core 3.1 setup, a console app using `ConfidentialClientApplicationBuilder` and `AcquireTokenForClient` calls a web api set up with `AddMicrosoftWebApiAuthentication`.
Run locally, the web api serves the content, but when it is deployed to Azure it returns HTTP 403, with and without *Authentication / Authorization* configured.
Question: how do I debug what is wrong with the setup?
Maybe separately, what is the relationship between the "AzureAd" section of `appsettings.json` and the *Authentication / Authorization* blade of the azure portal?
# Details
I am working on Scenario '2-Call-OwnApi' from [A .NET Core daemon console application using Microsoft identity platform (formerly Azure AD v2.0)](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2)
[![enter image description here][1]][1]
The web api and the console client app are asp.net core 3.1.
Web Api `appsettings.json`:
```
"AzureAd": {
"Instance": "https://login.microsoftonline.com/",
"ClientId": "client-guid",
"Domain": "XXX.onmicrosoft.com",
"TenantId": "ad-tenant-guid"
},
```
*(in real files the real guids are set)*
Client conf:
```
{
"Instance": "https://login.microsoftonline.com/{0}",
"Tenant": "ad-tenant-guid",
"ClientId": "clientclientid",
"ClientSecret": "secret",
"CertificateName": "[Or instead of client secret: Enter here the name of a certificate (from the user cert store) as registered with your application]",
"TodoListBaseAddress": "http://localhost:12345",
"TodoListScope": "api://guid/.default"
}
```
A dedicated AD tenant is used for this that has 2 app registrations. One exposes the api, one is granted access to it.
**It works when both apps are run locally**
The API exposes multiple roles and in the web api code I can see expected claims.
After publishing the app to Azure (`https://~myapp.azurewebsites.net`) and changing the base address the client app can obtain the token but the response to api call is HTTP 403 (empty body).
I have experimented with the *Authentication / Authorization* section in the Azure portal being
* off
[![enter image description here][3]][3]
* configured
[![enter image description here][4]][4]
[1]: https://i.stack.imgur.com/DO5hD.png
[2]: https://i.stack.imgur.com/pPlXM.png
[3]: https://i.stack.imgur.com/ldhAz.png
[4]: https://i.stack.imgur.com/zWw1H.png
|
1.0
|
Running locally vs deployed on Azure - ### Documentation related to component
Web Api
### Please check all that apply
- [ ] typo
- [x] documentation doesn't exist
- [ ] documentation needs clarification
- [ ] error(s) in the example
- [x] needs an example
### Description of the issue
# TL;DR
In an asp.net core 3.1 setup, a console app using `ConfidentialClientApplicationBuilder` and `AcquireTokenForClient` calls a web api set up with `AddMicrosoftWebApiAuthentication`.
Run locally, the web api serves the content, but when it is deployed to Azure it returns HTTP 403, with and without *Authentication / Authorization* configured.
Question: how do I debug what is wrong with the setup?
Maybe separately, what is the relationship between the "AzureAd" section of `appsettings.json` and the *Authentication / Authorization* blade of the azure portal?
# Details
I am working on Scenario '2-Call-OwnApi' from [A .NET Core daemon console application using Microsoft identity platform (formerly Azure AD v2.0)](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2)
[![enter image description here][1]][1]
The web api and the console client app are asp.net core 3.1.
Web Api `appsettings.json`:
```
"AzureAd": {
"Instance": "https://login.microsoftonline.com/",
"ClientId": "client-guid",
"Domain": "XXX.onmicrosoft.com",
"TenantId": "ad-tenant-guid"
},
```
*(in real files the real guids are set)*
Client conf:
```
{
"Instance": "https://login.microsoftonline.com/{0}",
"Tenant": "ad-tenant-guid",
"ClientId": "clientclientid",
"ClientSecret": "secret",
"CertificateName": "[Or instead of client secret: Enter here the name of a certificate (from the user cert store) as registered with your application]",
"TodoListBaseAddress": "http://localhost:12345",
"TodoListScope": "api://guid/.default"
}
```
A dedicated AD tenant is used for this that has 2 app registrations. One exposes the api, one is granted access to it.
**It works when both apps are run locally**
The API exposes multiple roles and in the web api code I can see expected claims.
After publishing the app to Azure (`https://~myapp.azurewebsites.net`) and changing the base address the client app can obtain the token but the response to api call is HTTP 403 (empty body).
I have experimented with the *Authentication / Authorization* section in the Azure portal being
* off
[![enter image description here][3]][3]
* configured
[![enter image description here][4]][4]
[1]: https://i.stack.imgur.com/DO5hD.png
[2]: https://i.stack.imgur.com/pPlXM.png
[3]: https://i.stack.imgur.com/ldhAz.png
[4]: https://i.stack.imgur.com/zWw1H.png
|
non_defect
|
running locally vs deployed on azure documentation related to component web api please check all that apply typo documentation doesn t exist documentation needs clarification error s in the example needs an example description of the issue tl dr in a asp net core setup a console app using confidentialclientapplicationbuilder and acquiretokenforclient calls a web api set up with addmicrosoftwebapiauthentication setup the api runs locally the web api serves the content but when the web api is deployed to azure it returns http with and without the authentication authorization configured question how to debug what is wrong with the setup maybe separately what is the relationship between the azuread section of appsettings json and the authentication authorization blade of the azure portal details i am working on scenario call ownapi from the web api and the console client app are asp net core web api appsettings json azuread instance clientid client guid domain xxx onmicrosoft com tenantid ad tenant guid in real files the real guids are set client conf instance tenant ad tenant guid clientid clientclientid clientsecret secret certificatename todolistbaseaddress todolistscope api guid default a dedicated ad tenant is used for this that has app registrations one exposes the api one is granted access to it it works when both apps are run locally the api exposes multiple roles and in the web api code i can see expected claims after publishing the app to azure and changing the base address the client app can obtain the token but the response to api call is http empty body i have experimented with the authentication authorization section in the azure portal being off configured
| 0
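The daemon flow in the record above (a confidential client acquiring an app-only token to call the web api) looks roughly like this in Java with msal4j. The sample in the report is C#, so this is only a sketch of the same client-credentials pattern; the tenant, client id, secret, and scope are placeholders copied from the shape of the config, not real values.
```java
import com.microsoft.aad.msal4j.ClientCredentialFactory;
import com.microsoft.aad.msal4j.ClientCredentialParameters;
import com.microsoft.aad.msal4j.ConfidentialClientApplication;
import com.microsoft.aad.msal4j.IAuthenticationResult;

import java.util.Set;

public class DaemonTokenSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders mirroring the appsettings shape in the report.
        ConfidentialClientApplication app = ConfidentialClientApplication.builder(
                        "clientclientid",
                        ClientCredentialFactory.createFromSecret("secret"))
                .authority("https://login.microsoftonline.com/ad-tenant-guid")
                .build();

        // ".default" requests all application permissions granted to the registration.
        ClientCredentialParameters params = ClientCredentialParameters.builder(
                Set.of("api://guid/.default")).build();

        IAuthenticationResult result = app.acquireToken(params).get();
        System.out.println(result.accessToken());
    }
}
```
A 403 from an API that accepts the token's signature usually means the roles or scopes inside the token don't match what the API checks, so decoding and inspecting the token is a useful first debugging step.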
|
44,127
| 5,741,084,383
|
IssuesEvent
|
2017-04-24 03:29:12
|
geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE
|
https://api.github.com/repos/geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE
|
reopened
|
GHQR+8ZReUyFwopeaMOowwVMQ7qqE7Q4OfZTTWUMwQ7kIjQWwqxGF3tmbauIPVt3UXpE2KZjb/fEel1IpVLe3iiOjNYnVanj68GXqcbZl9WqugZq6KzaP2qGrUU3miH6HbcJeP94ZhOH6KzfcabWfQkqs2y10079mRH9Js81q2I=
|
design
|
G7RVJ2Gm/Eq3kJubDtKgx22oJd1vW1jVCYrpmn/roy1DHzWUUkyfZmnAQpMzp/leEqzIImpK2uAFQaZp/ZttEeBPRYqsR7sRXTKkrk/wITigM3aZWBtW4njAnzbSSiyoJLcQmJm6SlS3oIXohURveq8YvvmqhWMPTozfhIPic2bJrBCAewUWUwkX5kljrdW/+u05GleQ0cwuCL503X8sZdP1NIhWtGAfJr0xBZerMthGvQwssqtHCckKDra651B8+u05GleQ0cwuCL503X8sZfrtORpXkNHMLgi+dN1/LGVBK6g+LwOlTZWikDo0t3vbQO5iiMMGO+X0peitqsGN6vV0Wag3+ZeXALxqu9O+dcL67TkaV5DRzC4IvnTdfyxllwVqnmb3wR6+Cif27HE1zrjYE5/Xjb+yKQBUtk9a+ctn9UfinmWKSCiSnY2RyLL/eKZM5oLuweMEogcF6v8MqFhRXNe0WfBYLxMFwnczVYZkvyO08mLBCfMmll0BW4BGXvwQOXRpj3aYV6oUl9JxhfrtORpXkNHMLgi+dN1/LGXe7eaWYauLSZOEwGaLIbMYsloi7iAO7pdGPloKnvjZbr5agNSYIYoo3LHOlOsbfuf67TkaV5DRzC4IvnTdfyxlqBBUbrBracWmZzrJWDbDo0JPQqN0XDmRbGGufWZeSs767TkaV5DRzC4IvnTdfyxlHwzlFhbjdSXGvWwvzh6ekaJedx+7HJzgxMqYecgkMXkzDtXXB3EAnUnwSJXVufThLLiKizBgu/eA//0+VU7x/JCY8GkSMEm9y+dcBkkW1N9vF23iQP7TI1MRbBUttIDgNzAIraDsxLVYbrjqhM+Gi/xpM6KbWyrxuxErtt+v0SOciIuVTld3parLdysRxO2PT+yxGxMFrzq0etkybMbV3zJxv4kAFELTW2Of23hoDej7wY1s5aHn1wjTc2J6KdBPCz7t+rd+dhu+Nl3Nx4XD38JcXLIfP0ANN9qdt12evEev0PvSZ5DLfxrbG66Xvtzp+u05GleQ0cwuCL503X8sZTyN3xtn5wJ9OyMSRfHykXKuJKOBx39uazz4ynJAxLpiaGb2aCjnj8HvwKv9CINk3XXChVoaxLaBuVBfPenjAe05QhlQ8Wi40zo+eaEo8p8na2UqNceXnDEqhuWNU2fEP1BIFDoxTMCFebzLDwd8u+Noyue8/Gl26+kOnI8a60GwVVqLnWZjkzMpCUeoYmrRrb2cJmmE0MhHMIA2JCBHyTiTYa0csg2jLgg7pwCSAJwDvVzpENLsseET1calqFvlbenNvJCDc2fY1QmKJXE+H3j67TkaV5DRzC4IvnTdfyxlla/wxU+vin5sZmENv0VvS2bjQSW3RNYeWvVEGMxephb67TkaV5DRzC4IvnTdfyxlR88BGN4AepIiDWlUPH0DwSOBSgPLFbPLS6FBES/4+Mf67TkaV5DRzC4IvnTdfyxlP+xIYRnyZTG05C2/++DJZNKN/TPMD3akIiQ7M0qRuCb/BushRa0VC8Pxb1JCbyuNH72bLvhb+mP77MftKbOxtvrtORpXkNHMLgi+dN1/LGUmW8rXWAUlIwB8JC5lUXchJwU63IARKrm5Tbr6FkXIKxmGtaQMe6D70btCIy1ygEb67TkaV5DRzC4IvnTdfyxlIU8BfTetWKfI4q1awwf20p5rp/f/vi6Ucx9nQ8z/u8zBf4ULTLRuHYyZjHcMXCDOPfVi6eAWXz5r/Am2akqf6T5rkwIgOFA7dShH9ImPhT767TkaV5DRzC4IvnTdfyxl+u05GleQ0cwuCL503X8sZfK7RXakYrbHXHf4ir62tSGCCEVwJ29smgYMRrCm8/b+NkcfiJn99I2a4ex4gffEmfrtORpXkNHMLgi+dN1/LGX67TkaV5DRzC4IvnTdfyxlHTSTXlaXib3LdLl3dDMP1ZTByyCtsIVYE5z0oFGbBRvKqkCIFPxtxaKUkfLkHvZxPNMM9KCkwARyCjELrca5sVjFw6DwMWxyHH61H8tpVVCs0plMHPdhpt7aAJ9pyo+OuMByXxAf1ZnFBdrJ9jwV3x0/6CmOWDYhYpGRMnxmAOA=
|
1.0
|
GHQR+8ZReUyFwopeaMOowwVMQ7qqE7Q4OfZTTWUMwQ7kIjQWwqxGF3tmbauIPVt3UXpE2KZjb/fEel1IpVLe3iiOjNYnVanj68GXqcbZl9WqugZq6KzaP2qGrUU3miH6HbcJeP94ZhOH6KzfcabWfQkqs2y10079mRH9Js81q2I= - G7RVJ2Gm/Eq3kJubDtKgx22oJd1vW1jVCYrpmn/roy1DHzWUUkyfZmnAQpMzp/leEqzIImpK2uAFQaZp/ZttEeBPRYqsR7sRXTKkrk/wITigM3aZWBtW4njAnzbSSiyoJLcQmJm6SlS3oIXohURveq8YvvmqhWMPTozfhIPic2bJrBCAewUWUwkX5kljrdW/+u05GleQ0cwuCL503X8sZdP1NIhWtGAfJr0xBZerMthGvQwssqtHCckKDra651B8+u05GleQ0cwuCL503X8sZfrtORpXkNHMLgi+dN1/LGVBK6g+LwOlTZWikDo0t3vbQO5iiMMGO+X0peitqsGN6vV0Wag3+ZeXALxqu9O+dcL67TkaV5DRzC4IvnTdfyxllwVqnmb3wR6+Cif27HE1zrjYE5/Xjb+yKQBUtk9a+ctn9UfinmWKSCiSnY2RyLL/eKZM5oLuweMEogcF6v8MqFhRXNe0WfBYLxMFwnczVYZkvyO08mLBCfMmll0BW4BGXvwQOXRpj3aYV6oUl9JxhfrtORpXkNHMLgi+dN1/LGXe7eaWYauLSZOEwGaLIbMYsloi7iAO7pdGPloKnvjZbr5agNSYIYoo3LHOlOsbfuf67TkaV5DRzC4IvnTdfyxlqBBUbrBracWmZzrJWDbDo0JPQqN0XDmRbGGufWZeSs767TkaV5DRzC4IvnTdfyxlHwzlFhbjdSXGvWwvzh6ekaJedx+7HJzgxMqYecgkMXkzDtXXB3EAnUnwSJXVufThLLiKizBgu/eA//0+VU7x/JCY8GkSMEm9y+dcBkkW1N9vF23iQP7TI1MRbBUttIDgNzAIraDsxLVYbrjqhM+Gi/xpM6KbWyrxuxErtt+v0SOciIuVTld3parLdysRxO2PT+yxGxMFrzq0etkybMbV3zJxv4kAFELTW2Of23hoDej7wY1s5aHn1wjTc2J6KdBPCz7t+rd+dhu+Nl3Nx4XD38JcXLIfP0ANN9qdt12evEev0PvSZ5DLfxrbG66Xvtzp+u05GleQ0cwuCL503X8sZTyN3xtn5wJ9OyMSRfHykXKuJKOBx39uazz4ynJAxLpiaGb2aCjnj8HvwKv9CINk3XXChVoaxLaBuVBfPenjAe05QhlQ8Wi40zo+eaEo8p8na2UqNceXnDEqhuWNU2fEP1BIFDoxTMCFebzLDwd8u+Noyue8/Gl26+kOnI8a60GwVVqLnWZjkzMpCUeoYmrRrb2cJmmE0MhHMIA2JCBHyTiTYa0csg2jLgg7pwCSAJwDvVzpENLsseET1calqFvlbenNvJCDc2fY1QmKJXE+H3j67TkaV5DRzC4IvnTdfyxlla/wxU+vin5sZmENv0VvS2bjQSW3RNYeWvVEGMxephb67TkaV5DRzC4IvnTdfyxlR88BGN4AepIiDWlUPH0DwSOBSgPLFbPLS6FBES/4+Mf67TkaV5DRzC4IvnTdfyxlP+xIYRnyZTG05C2/++DJZNKN/TPMD3akIiQ7M0qRuCb/BushRa0VC8Pxb1JCbyuNH72bLvhb+mP77MftKbOxtvrtORpXkNHMLgi+dN1/LGUmW8rXWAUlIwB8JC5lUXchJwU63IARKrm5Tbr6FkXIKxmGtaQMe6D70btCIy1ygEb67TkaV5DRzC4IvnTdfyxlIU8BfTetWKfI4q1awwf20p5rp/f/vi6Ucx9nQ8z/u8zBf4ULTLRuHYyZjHcMXCDOPfVi6eAWXz5r/Am2akqf6T5rkwIgOFA7dShH9ImPhT767TkaV5DRzC4IvnTdfyxl+u05GleQ0cwuCL503X8sZfK7RXakYrbHXHf4ir62tSGCCEVwJ29smgYMRrCm8/b+NkcfiJn99I2a4ex4gffEmfrtORpXkNHMLgi+dN1/LGX67TkaV5DRzC4IvnTdfyxlHTSTXlaXib3LdLl3dDMP1ZTByyCtsIVYE5z0oFGbBRvKqkCIFPxtxaKUkfLkHvZxPNMM9KCkwARyCjELrca5sVjFw6DwMWxyHH61H8tpVVCs0plMHPdhpt7aAJ9pyo+OuMByXxAf1ZnFBdrJ9jwV3x0/6CmOWDYhYpGRMnxmAOA=
|
non_defect
|
ghqr xjb ea gi rd dhu wxu djznkn f b
| 0
|
4,226
| 2,610,089,570
|
IssuesEvent
|
2015-02-26 18:27:07
|
chrsmith/dsdsdaadf
|
https://api.github.com/repos/chrsmith/dsdsdaadf
|
opened
|
How to best remove acne in Shenzhen
|
auto-migrated Priority-Medium Type-Defect
|
```
How to best remove acne in Shenzhen [Shenzhen Hanfang Keyan national hotline 400-869-1818,
24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional acne-removal chain,
built around a Korean secret formula, Hanfang Keyan, a state-licensed therapeutic brand
and a fine acne-removal product. The chain pairs the Korean secret formula with a
professional "no-rebound" healthy acne-removal technique and an advanced "deluxe
color-light" instrument, pioneering signed-contract guaranteed treatment of pimples and
acne in China, and has successfully cleared the acne on many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:36
|
1.0
|
How to best remove acne in Shenzhen - ```
How to best remove acne in Shenzhen [Shenzhen Hanfang Keyan national hotline 400-869-1818,
24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional acne-removal chain,
built around a Korean secret formula, Hanfang Keyan, a state-licensed therapeutic brand
and a fine acne-removal product. The chain pairs the Korean secret formula with a
professional "no-rebound" healthy acne-removal technique and an advanced "deluxe
color-light" instrument, pioneering signed-contract guaranteed treatment of pimples and
acne in China, and has successfully cleared the acne on many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:36
|
defect
|
how to best remove acne in shenzhen how to best remove acne in shenzhen shenzhen hanfang keyan national hotline hour qq shenzhen hanfang keyan is a professional acne removal chain built around a korean secret formula hanfang keyan a state licensed therapeutic brand and a fine acne removal product the chain pairs the korean secret formula with a professional no rebound healthy acne removal technique and an advanced deluxe color light instrument pioneering signed contract guaranteed treatment of pimples and acne in china and has successfully cleared the acne on many customers faces original issue reported on code google com by szft com on may at
| 1
|
72,987
| 9,634,851,009
|
IssuesEvent
|
2019-05-15 22:32:13
|
typescript-eslint/typescript-eslint
|
https://api.github.com/repos/typescript-eslint/typescript-eslint
|
closed
|
ban-ts-ignore is not in "plugin:@typescript-eslint/recommended" despite the docs saying so.
|
bug documentation has pr package: eslint-plugin
|
It should be recommended according to https://github.com/typescript-eslint/typescript-eslint/tree/master/packages/eslint-plugin#supported-rules. But simply extending `plugin:@typescript-eslint/recommended` does not enable the rule.
Adding the rule explicitly in `eslintrc.js` works though.
**Repro**
Repro to follow.
**Expected Result**
**Actual Result**
**Additional Info**
**Versions**
| package | version |
| ---------------------------------- | ------- |
| `@typescript-eslint/eslint-plugin` | `1.4.2` |
| `@typescript-eslint/parser` | `1.4.2` |
| `TypeScript` | [`3.3.3333`](https://github.com/Microsoft/TypeScript/issues/30032) 😄 |
| `ESLint` | `5.15.1` |
| `node` | `11.11.0` |
| `yarn` | `1.13.0` |
|
1.0
|
ban-ts-ignore is not in "plugin:@typescript-eslint/recommended" despite the docs saying so. - It should be recommended according to https://github.com/typescript-eslint/typescript-eslint/tree/master/packages/eslint-plugin#supported-rules. But simply extending `plugin:@typescript-eslint/recommended` does not enable the rule.
Adding the rule explicitly in `eslintrc.js` works though.
**Repro**
Repro to follow.
**Expected Result**
**Actual Result**
**Additional Info**
**Versions**
| package | version |
| ---------------------------------- | ------- |
| `@typescript-eslint/eslint-plugin` | `1.4.2` |
| `@typescript-eslint/parser` | `1.4.2` |
| `TypeScript` | [`3.3.3333`](https://github.com/Microsoft/TypeScript/issues/30032) 😄 |
| `ESLint` | `5.15.1` |
| `node` | `11.11.0` |
| `yarn` | `1.13.0` |
|
non_defect
|
ban ts ignore is not in plugin typescript eslint recommended despite the docs saying so it should be recommended according to but simply extending plugin typescript eslint recommended does not enable the rule adding the rule explicitly in eslintrc js works though repro repro to follow expected result actual result additional info versions package version typescript eslint eslint plugin typescript eslint parser typescript 😄 eslint node yarn
| 0
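For reference, the explicit-rule workaround mentioned in the record above looks like this in an `.eslintrc.json` configuration file; the rule name is the 1.x-era `ban-ts-ignore` (later renamed), and extending the recommended config is kept to show the two side by side.
```json
{
  "parser": "@typescript-eslint/parser",
  "plugins": ["@typescript-eslint"],
  "extends": ["plugin:@typescript-eslint/recommended"],
  "rules": {
    "@typescript-eslint/ban-ts-ignore": "error"
  }
}
```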
|
429,095
| 12,420,865,966
|
IssuesEvent
|
2020-05-23 14:10:26
|
JiejayLan/seniorDesign
|
https://api.github.com/repos/JiejayLan/seniorDesign
|
closed
|
Final Presentation
|
High Priority
|
Due: 2020/5/21(11:30 AM)
[Past Presentation](https://docs.google.com/presentation/d/1TtG_gyLSrvMxASylZedkYjTrMdcF0sTCWUuRQFIciwg/edit#slide=id.g6c8a9c3559_0_5)
[Final Version](https://open-source-searching-platform.web.app/)
### Structure
- Objective and background(kai)
- User Flow Graph (kai)
- Architectural Design(kai)
- Explain how frontend, backend and machine learning works together
- Features
+ Search by keywords across three platforms/ Filter by language and platform(Jie)
- Explain four APIs
+ Commit prediction(Yi)
- Dataset training ML model
- Difficulties and future plan(Jie)
- Demo(Choose keyword and repo in advance, 3 examples) kai
|
1.0
|
Final Presentation - Due: 2020/5/21(11:30 AM)
[Past Presentation](https://docs.google.com/presentation/d/1TtG_gyLSrvMxASylZedkYjTrMdcF0sTCWUuRQFIciwg/edit#slide=id.g6c8a9c3559_0_5)
[Final Version](https://open-source-searching-platform.web.app/)
### Structure
- Objective and background(kai)
- User Flow Graph (kai)
- Architectural Design(kai)
- Explain how frontend, backend and machine learning works together
- Features
+ Search by keywords across three platforms/ Filter by language and platform(Jie)
- Explain four APIs
+ Commit prediction(Yi)
- Dataset training ML model
- Difficulties and future plan(Jie)
- Demo(Choose keyword and repo in advance, 3 examples) kai
|
non_defect
|
final presentaion due am structure objective and background kai user flow graph kai architectural design kai explain how frontend backend and machine learning works together features search by keywords across three platforms filter by language and platform jie explain four apis commit prediction yi dataset training ml model difficulties and future plan jie demo choose keyword and repo in advance examples kai
| 0
|
69,450
| 22,356,008,486
|
IssuesEvent
|
2022-06-15 15:42:05
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
opened
|
Content Audit view breaks for published VAMC System Banner Alerts
|
Defect Needs refining ⭐️ Sitewide CMS
|
## Describe the defect
If you attempt to view published **VAMC System Banner Alert with Situation Updates** nodes using the Content Audit view no results will be displayed. This could be a problem with the view or, more likely, an issue with the editorial workflow for this particular content type not being the same as the others.
## To Reproduce
Steps to reproduce the behavior:
1. Go to /admin/content/audit
2. Select **VAMC System Banner Alert with Situation Updates** for the Content type filter
3. Select **Published** for the Moderation state (Any selection here will actually break the results)
4. Click the Filter button
5. No results are returned
## Expected behavior
The **VAMC System Banner Alert with Situation Updates** content type should work with this view when a Moderation state is selected - like all other content types.
## Screenshots

### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [x] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
|
1.0
|
Content Audit view breaks for published VAMC System Banner Alerts - ## Describe the defect
If you attempt to view published **VAMC System Banner Alert with Situation Updates** nodes using the Content Audit view no results will be displayed. This could be a problem with the view or, more likely, an issue with the editorial workflow for this particular content type not being the same as the others.
## To Reproduce
Steps to reproduce the behavior:
1. Go to /admin/content/audit
2. Select **VAMC System Banner Alert with Situation Updates** for the Content type filter
3. Select **Published** for the Moderation state (Any selection here will actually break the results)
4. Click the Filter button
5. No results are returned
## Expected behavior
The **VAMC System Banner Alert with Situation Updates** content type should work with this view when a Moderation state is selected - like all other content types.
## Screenshots

### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [x] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
|
defect
|
content audit view breaks for published vamc system banner alerts describe the defect if you attempt to view published vamc system banner alert with situation updates nodes using the content audit view no results will be displayed this could be a problem with the view or more likely an issue with the editorial workflow for this particular content type not being the same as the others to reproduce steps to reproduce the behavior go to admin content audit select vamc system banner alert with situation updates for the content type filter select published for the moderation state any selection here will actually break the results click the filter button no results are returned expected behavior the vamc system banner alert with situation updates content type should work with this view when a moderation state is selected like all other content types screenshots cms team please check the team s that will do this work program platform cms team sitewide crew ⭐️ sitewide cms ⭐️ public websites ⭐️ facilities ⭐️ user support
| 1
|
11,238
| 2,641,951,475
|
IssuesEvent
|
2015-03-11 20:42:16
|
chrsmith/html5rocks
|
https://api.github.com/repos/chrsmith/html5rocks
|
closed
|
Add globe to studio
|
Milestone-4 Priority-Medium Studio Type-Defect
|
Original [issue 154](https://code.google.com/p/html5rocks/issues/detail?id=154) created by chrsmith on 2010-08-15T06:28:08.000Z:
<b>What steps will reproduce the problem?</b>
<b>1.</b>
<b>2.</b>
<b>3.</b>
<b>What is the expected output? What do you see instead?</b>
<b>Please use labels and text to provide additional information.</b>
|
1.0
|
Add globe to studio - Original [issue 154](https://code.google.com/p/html5rocks/issues/detail?id=154) created by chrsmith on 2010-08-15T06:28:08.000Z:
<b>What steps will reproduce the problem?</b>
<b>1.</b>
<b>2.</b>
<b>3.</b>
<b>What is the expected output? What do you see instead?</b>
<b>Please use labels and text to provide additional information.</b>
|
defect
|
add globe to studio original created by chrsmith on what steps will reproduce the problem what is the expected output what do you see instead please use labels and text to provide additional information
| 1
|
6,621
| 2,610,257,965
|
IssuesEvent
|
2015-02-26 19:22:17
|
chrsmith/dsdsdaadf
|
https://api.github.com/repos/chrsmith/dsdsdaadf
|
opened
|
How to remove acne with lasers in Shenzhen
|
auto-migrated Priority-Medium Type-Defect
|
```
How to remove acne with lasers in Shenzhen [Shenzhen Hanfang Keyan national hotline
400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional
acne-removal chain, built around a Korean secret formula, Hanfang Keyan, a state-licensed
therapeutic brand and a fine acne-removal product. The chain pairs the Korean secret
formula with a professional "no-rebound" healthy acne-removal technique and an advanced
"deluxe color-light" instrument, pioneering signed-contract guaranteed treatment of
pimples and acne in China, and has successfully cleared the acne on many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:46
|
1.0
|
How to remove acne with lasers in Shenzhen - ```
How to remove acne with lasers in Shenzhen [Shenzhen Hanfang Keyan national hotline
400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional
acne-removal chain, built around a Korean secret formula, Hanfang Keyan, a state-licensed
therapeutic brand and a fine acne-removal product. The chain pairs the Korean secret
formula with a professional "no-rebound" healthy acne-removal technique and an advanced
"deluxe color-light" instrument, pioneering signed-contract guaranteed treatment of
pimples and acne in China, and has successfully cleared the acne on many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:46
|
defect
|
how to remove acne with lasers in shenzhen how to remove acne with lasers in shenzhen shenzhen hanfang keyan national hotline hour qq shenzhen hanfang keyan is a professional acne removal chain built around a korean secret formula hanfang keyan a state licensed therapeutic brand and a fine acne removal product the chain pairs the korean secret formula with a professional no rebound healthy acne removal technique and an advanced deluxe color light instrument pioneering signed contract guaranteed treatment of pimples and acne in china and has successfully cleared the acne on many customers faces original issue reported on code google com by szft com on may at
| 1
|
40,039
| 9,809,955,324
|
IssuesEvent
|
2019-06-12 19:15:59
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
closed
|
SamplerPostprocessorTransfer becomes slow for large number of samples
|
T: defect
|
## Bug Description
<!--A clear and concise description of the problem.-->
For a large number of samples, SamplerPostprocessorTransfer becomes very slow at collecting results from the sub-app. This is caused by calling the initialize() function for every sample when transferring data back, which should be avoided.
## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
I tried to run 100000 samples with 20 mpi processes and a significant slow-down after all the sub apps finish was observed.
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
It will prevent users from running Monte Carlo simulation with large number of samples.
|
1.0
|
SamplerPostprocessorTransfer becomes slow for large number of samples - ## Bug Description
<!--A clear and concise description of the problem.-->
For a large number of samples, SamplerPostprocessorTransfer becomes very slow at collecting results from the sub-app. This is caused by calling the initialize() function for every sample when transferring data back, which should be avoided.
## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
I tried to run 100000 samples with 20 mpi processes and a significant slow-down after all the sub apps finish was observed.
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
It will prevent users from running Monte Carlo simulation with large number of samples.
|
defect
|
samplerpostprocessortransfer becomes slow for large number of samples bug description for a large number of samples samplerpostprocessortransfer becomes very slow to collect results from sub app it is caused by calling initialize function for every sample to transfer back data this should be avoided steps to reproduce i tried to run samples with mpi processes and a significant slow down after all the sub apps finish was observed impact it will prevent users from running monte carlo simulation with large number of samples
| 1
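The fix direction in the record above is an instance of a general pattern: hoist per-transfer setup out of the per-sample loop. A small self-contained Java sketch of the pattern follows; the class and method names are hypothetical stand-ins, not MOOSE's API.
```java
import java.util.List;

public class HoistInitSketch {
    // Hypothetical stand-in for the transfer object in the report.
    static class Transfer {
        void initialize() { /* expensive setup, same result every call */ }
        void collect(int sample) { /* gather one sample's result */ }
    }

    public static void main(String[] args) {
        Transfer transfer = new Transfer();
        List<Integer> samples = List.of(1, 2, 3);

        // Slow pattern from the report: initialize() runs once per sample, O(n) setups.
        for (int s : samples) {
            transfer.initialize();
            transfer.collect(s);
        }

        // Fix direction: run the setup once, then collect every sample.
        transfer.initialize();
        for (int s : samples) {
            transfer.collect(s);
        }
    }
}
```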
|
669,806
| 22,641,648,517
|
IssuesEvent
|
2022-07-01 03:07:30
|
heading1/WYLSBingsu
|
https://api.github.com/repos/heading1/WYLSBingsu
|
reopened
|
[BE] Implement API to fetch logged-in user info
|
⚙️ Backend 🔨 Feature ↔️ mid-priority
|
## 🔨 Feature description
Implement an API to fetch the logged-in user's info
## 📑 Completion criteria
When the GET succeeds without problems
## 💭 Related backlog
[[BE] Login screen]-[API]-[]
0.5h
|
1.0
|
[BE] Implement API to fetch logged-in user info - ## 🔨 Feature description
Implement an API to fetch the logged-in user's info
## 📑 Completion criteria
When the GET succeeds without problems
## 💭 Related backlog
[[BE] Login screen]-[API]-[]
0.5h
|
non_defect
|
implement api to fetch logged in user info 🔨 feature description implement an api to fetch the logged in user s info 📑 completion criteria when the get succeeds without problems 💭 related backlog login screen
| 0
|
65,892
| 12,693,994,424
|
IssuesEvent
|
2020-06-22 05:19:12
|
esp8266/Arduino
|
https://api.github.com/repos/esp8266/Arduino
|
opened
|
Large stack usage in core and MDNS libraries
|
component: MDNS component: core type: code cleanup
|
I've been playing locally with GCC10 and the `-Wstack-usage=` option which emits a compile-time warning when stacks are larger than the size given. There's only 4K total stack to play with so I set the warning limit to 300 bytes.
This isn't an exhaustive list (need to do a local CI and aggregate warnings), but I'm seeing very high use in the flash_hal and MDNS:
Flash_write() has a 512 byte buffer allocated on the stack in the case of an unaligned write, which seems pretty massive and ripe for reduction:
````
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\flash_hal.cpp: In function 'int32_t flash_hal_write(uint32_t, uint32_t, const uint8_t*)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\flash_hal.cpp:102:9: warning: stack usage is 576 bytes [-Wstack-usage=]
102 | int32_t flash_hal_write(uint32_t addr, uint32_t size, const uint8_t *src) {
| ^~~~~~~~~~~~~~~
````
MDNS (old and new) have stack usages up to almost 700 bytes which might explain some issues seen at runtime.
````
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'uint8_t esp8266::MDNSImplementation::MDNSResponder::_ZNK7esp826618MDNSImplementation13MDNSResponder17_replyMaskForHostERKNS1_16stcMDNS_RRHeaderEPb$part$0(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_RRHeader&, bool*) const':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:1991:9: warning: stack usage is 304 bytes [-Wstack-usage=]
1991 | uint8_t MDNSResponder::_replyMaskForHost(const MDNSResponder::stcMDNS_RRHeader& p_RRHeader,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'uint8_t esp8266::MDNSImplementation::MDNSResponder::_ZNK7esp826618MDNSImplementation13MDNSResponder20_replyMaskForServiceERKNS1_16stcMDNS_RRHeaderERKNS1_14stcMDNSServiceEPb$part$0(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_RRHeader&, const esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, bool*) const':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:2069:9: warning: stack usage is 576 bytes [-Wstack-usage=]
2069 | uint8_t MDNSResponder::_replyMaskForService(const MDNSResponder::stcMDNS_RRHeader& p_RRHeader,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_parseQuery(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_MsgHeader&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:179:6: warning: stack usage is 608 bytes [-Wstack-usage=]
179 | bool MDNSResponder::_parseQuery(const MDNSResponder::stcMDNS_MsgHeader& p_MsgHeader)
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_ZN7esp826618MDNSImplementation13MDNSResponder15_processAnswersEPKNS1_16stcMDNS_RRAnswerE$part$0(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_RRAnswer*)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:749:6: warning: stack usage is 320 bytes [-Wstack-usage=]
749 | bool MDNSResponder::_processAnswers(const MDNSResponder::stcMDNS_RRAnswer* p_pAnswers)
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_parseResponse(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_MsgHeader&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:628:6: warning: stack usage is 320 bytes [-Wstack-usage=]
628 | bool MDNSResponder::_parseResponse(const MDNSResponder::stcMDNS_MsgHeader& p_MsgHeader)
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\ESP8266mDNS_Legacy.cpp: In member function 'void Legacy_MDNSResponder::MDNSResponder::_parsePacket()':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\ESP8266mDNS_Legacy.cpp:567:6: warning: stack usage is 688 bytes [-Wstack-usage=]
567 | void MDNSResponder::_parsePacket()
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSHostDomain(const char*, bool, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1367:6: warning: stack usage is 320 bytes [-Wstack-usage=]
1367 | bool MDNSResponder::_writeMDNSHostDomain(const char* p_pcHostname,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSServiceDomain(const esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, bool, bool, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1410:6: warning: stack usage is 320 bytes [-Wstack-usage=]
1410 | bool MDNSResponder::_writeMDNSServiceDomain(const MDNSResponder::stcMDNSService& p_Service,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSAnswer_PTR_IP4(IPAddress, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1517:6: warning: stack usage is 560 bytes [-Wstack-usage=]
1517 | bool MDNSResponder::_writeMDNSAnswer_PTR_IP4(IPAddress p_IPAddress,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSAnswer_PTR_TYPE(esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1550:6: warning: stack usage is 544 bytes [-Wstack-usage=]
1550 | bool MDNSResponder::_writeMDNSAnswer_PTR_TYPE(MDNSResponder::stcMDNSService& p_rService,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSAnswer_SRV(esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1725:6: warning: stack usage is 320 bytes [-Wstack-usage=]
1725 | bool MDNSResponder::_writeMDNSAnswer_SRV(MDNSResponder::stcMDNSService& p_rService,
| ^~~~~~~~~~~~~
````
Crypto.cpp has some large stacks (which might be unavoidable given the algorithm, but we may want to consider moving them to the heap or refactoring the code):
````
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp: In function 'void* {anonymous}::createBearsslHmac(const br_hash_class*, const void*, size_t, const void*, size_t, void*, size_t)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp:67:7: warning: stack usage is 432 bytes [-Wstack-usage=]
67 | void *createBearsslHmac(const br_hash_class *hashType, const void *data, const size_t dataLength, const void *hashKey, const size_t hashKeyLength, void *resultArray, const size_t outputLength)
| ^~~~~~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp: In function 'void* {anonymous}::createBearsslHmacCT(const br_hash_class*, const void*, size_t, const void*, size_t, void*, size_t)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp:110:7: warning: stack usage is 464 bytes [-Wstack-usage=]
110 | void *createBearsslHmacCT(const br_hash_class *hashType, const void *data, const size_t dataLength, const void *hashKey, const size_t hashKeyLength, void *resultArray, const size_t outputLength)
| ^~~~~~~~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp: In function 'void experimental::crypto::chacha20Poly1305Kernel(int, void*, size_t, const void*, const void*, size_t, const void*, void*, const void*, size_t)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp:507:6: warning: stack usage is 464 bytes [-Wstack-usage=]
507 | void chacha20Poly1305Kernel(const int encrypt, void *data, const size_t dataLength, const void *key, const void *keySalt, const size_t keySaltLength,
| ^~~~~~~~~~~~~~~~~~~~~~
````
|
1.0
|
Large stack usage in core and MDNS libraries - I've been playing locally with GCC10 and the `-Wstack-usage=` option which emits a compile-time warning when stacks are larger than the size given. There's only 4K total stack to play with so I set the warning limit to 300 bytes.
This isn't an exhaustive list (need to do a local CI and aggregate warnings), but I'm seeing very high use in the flash_hal and MDNS:
Flash_write() has a 512 byte buffer allocated on the stack in the case of an unaligned write, which seems pretty massive and ripe for reduction:
````
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\flash_hal.cpp: In function 'int32_t flash_hal_write(uint32_t, uint32_t, const uint8_t*)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\flash_hal.cpp:102:9: warning: stack usage is 576 bytes [-Wstack-usage=]
102 | int32_t flash_hal_write(uint32_t addr, uint32_t size, const uint8_t *src) {
| ^~~~~~~~~~~~~~~
````
MDNS (old and new) have stack usages up to almost 700 bytes which might explain some issues seen at runtime.
````
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'uint8_t esp8266::MDNSImplementation::MDNSResponder::_ZNK7esp826618MDNSImplementation13MDNSResponder17_replyMaskForHostERKNS1_16stcMDNS_RRHeaderEPb$part$0(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_RRHeader&, bool*) const':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:1991:9: warning: stack usage is 304 bytes [-Wstack-usage=]
1991 | uint8_t MDNSResponder::_replyMaskForHost(const MDNSResponder::stcMDNS_RRHeader& p_RRHeader,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'uint8_t esp8266::MDNSImplementation::MDNSResponder::_ZNK7esp826618MDNSImplementation13MDNSResponder20_replyMaskForServiceERKNS1_16stcMDNS_RRHeaderERKNS1_14stcMDNSServiceEPb$part$0(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_RRHeader&, const esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, bool*) const':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:2069:9: warning: stack usage is 576 bytes [-Wstack-usage=]
2069 | uint8_t MDNSResponder::_replyMaskForService(const MDNSResponder::stcMDNS_RRHeader& p_RRHeader,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_parseQuery(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_MsgHeader&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:179:6: warning: stack usage is 608 bytes [-Wstack-usage=]
179 | bool MDNSResponder::_parseQuery(const MDNSResponder::stcMDNS_MsgHeader& p_MsgHeader)
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_ZN7esp826618MDNSImplementation13MDNSResponder15_processAnswersEPKNS1_16stcMDNS_RRAnswerE$part$0(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_RRAnswer*)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:749:6: warning: stack usage is 320 bytes [-Wstack-usage=]
749 | bool MDNSResponder::_processAnswers(const MDNSResponder::stcMDNS_RRAnswer* p_pAnswers)
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_parseResponse(const esp8266::MDNSImplementation::MDNSResponder::stcMDNS_MsgHeader&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Control.cpp:628:6: warning: stack usage is 320 bytes [-Wstack-usage=]
628 | bool MDNSResponder::_parseResponse(const MDNSResponder::stcMDNS_MsgHeader& p_MsgHeader)
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\ESP8266mDNS_Legacy.cpp: In member function 'void Legacy_MDNSResponder::MDNSResponder::_parsePacket()':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\ESP8266mDNS_Legacy.cpp:567:6: warning: stack usage is 688 bytes [-Wstack-usage=]
567 | void MDNSResponder::_parsePacket()
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSHostDomain(const char*, bool, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1367:6: warning: stack usage is 320 bytes [-Wstack-usage=]
1367 | bool MDNSResponder::_writeMDNSHostDomain(const char* p_pcHostname,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSServiceDomain(const esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, bool, bool, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1410:6: warning: stack usage is 320 bytes [-Wstack-usage=]
1410 | bool MDNSResponder::_writeMDNSServiceDomain(const MDNSResponder::stcMDNSService& p_Service,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSAnswer_PTR_IP4(IPAddress, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1517:6: warning: stack usage is 560 bytes [-Wstack-usage=]
1517 | bool MDNSResponder::_writeMDNSAnswer_PTR_IP4(IPAddress p_IPAddress,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSAnswer_PTR_TYPE(esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1550:6: warning: stack usage is 544 bytes [-Wstack-usage=]
1550 | bool MDNSResponder::_writeMDNSAnswer_PTR_TYPE(MDNSResponder::stcMDNSService& p_rService,
| ^~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp: In member function 'bool esp8266::MDNSImplementation::MDNSResponder::_writeMDNSAnswer_SRV(esp8266::MDNSImplementation::MDNSResponder::stcMDNSService&, esp8266::MDNSImplementation::MDNSResponder::stcMDNSSendParameter&)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\libraries\ESP8266mDNS\src\LEAmDNS_Transfer.cpp:1725:6: warning: stack usage is 320 bytes [-Wstack-usage=]
1725 | bool MDNSResponder::_writeMDNSAnswer_SRV(MDNSResponder::stcMDNSService& p_rService,
| ^~~~~~~~~~~~~
````
Crypto.cpp has some large stacks (which might be unavoidable given the algorithm, but we may want to consider moving them to the heap or refactoring the code):
````
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp: In function 'void* {anonymous}::createBearsslHmac(const br_hash_class*, const void*, size_t, const void*, size_t, void*, size_t)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp:67:7: warning: stack usage is 432 bytes [-Wstack-usage=]
67 | void *createBearsslHmac(const br_hash_class *hashType, const void *data, const size_t dataLength, const void *hashKey, const size_t hashKeyLength, void *resultArray, const size_t outputLength)
| ^~~~~~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp: In function 'void* {anonymous}::createBearsslHmacCT(const br_hash_class*, const void*, size_t, const void*, size_t, void*, size_t)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp:110:7: warning: stack usage is 464 bytes [-Wstack-usage=]
110 | void *createBearsslHmacCT(const br_hash_class *hashType, const void *data, const size_t dataLength, const void *hashKey, const size_t hashKeyLength, void *resultArray, const size_t outputLength)
| ^~~~~~~~~~~~~~~~~~~
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp: In function 'void experimental::crypto::chacha20Poly1305Kernel(int, void*, size_t, const void*, const void*, size_t, const void*, void*, const void*, size_t)':
C:\Users\earle\Documents\Arduino\hardware\esp8266com\esp8266\cores\esp8266\Crypto.cpp:507:6: warning: stack usage is 464 bytes [-Wstack-usage=]
507 | void chacha20Poly1305Kernel(const int encrypt, void *data, const size_t dataLength, const void *key, const void *keySalt, const size_t keySaltLength,
| ^~~~~~~~~~~~~~~~~~~~~~
````
|
non_defect
|
large stack usage in core and mdns libraries i ve been playing locally with and the wstack usage option which emits a compile time warning when stacks are larger than the size given there s only total stack to play with so i set the warning limit to bytes this isn t an exhaustive list need to do a local ci and aggregate warnings but i m seeing very high use in the flash hal and mdns flash write has a byte buffer allocated on the stack in the case of an unaligned write which seems pretty massive and ripe for reduction c users earle documents arduino hardware cores flash hal cpp in function t flash hal write t t const t c users earle documents arduino hardware cores flash hal cpp warning stack usage is bytes t flash hal write t addr t size const t src mdns old and new have stack usages up to almost bytes which might explain some issues seen at runtime c users earle documents arduino hardware libraries src leamdns control cpp in member function t mdnsimplementation mdnsresponder rrheaderepb part const mdnsimplementation mdnsresponder stcmdns rrheader bool const c users earle documents arduino hardware libraries src leamdns control cpp warning stack usage is bytes t mdnsresponder replymaskforhost const mdnsresponder stcmdns rrheader p rrheader c users earle documents arduino hardware libraries src leamdns control cpp in member function t mdnsimplementation mdnsresponder part const mdnsimplementation mdnsresponder stcmdns rrheader const mdnsimplementation mdnsresponder stcmdnsservice bool const c users earle documents arduino hardware libraries src leamdns control cpp warning stack usage is bytes t mdnsresponder replymaskforservice const mdnsresponder stcmdns rrheader p rrheader c users earle documents arduino hardware libraries src leamdns control cpp in member function bool mdnsimplementation mdnsresponder parsequery const mdnsimplementation mdnsresponder stcmdns msgheader c users earle documents arduino hardware libraries src leamdns control cpp warning stack usage is bytes bool mdnsresponder parsequery const mdnsresponder stcmdns msgheader p msgheader c users earle documents arduino hardware libraries src leamdns control cpp in member function bool mdnsimplementation mdnsresponder rranswere part const mdnsimplementation mdnsresponder stcmdns rranswer c users earle documents arduino hardware libraries src leamdns control cpp warning stack usage is bytes bool mdnsresponder processanswers const mdnsresponder stcmdns rranswer p panswers c users earle documents arduino hardware libraries src leamdns control cpp in member function bool mdnsimplementation mdnsresponder parseresponse const mdnsimplementation mdnsresponder stcmdns msgheader c users earle documents arduino hardware libraries src leamdns control cpp warning stack usage is bytes bool mdnsresponder parseresponse const mdnsresponder stcmdns msgheader p msgheader c users earle documents arduino hardware libraries src legacy cpp in member function void legacy mdnsresponder mdnsresponder parsepacket c users earle documents arduino hardware libraries src legacy cpp warning stack usage is bytes void mdnsresponder parsepacket c users earle documents arduino hardware libraries src leamdns transfer cpp in member function bool mdnsimplementation mdnsresponder writemdnshostdomain const char bool mdnsimplementation mdnsresponder stcmdnssendparameter c users earle documents arduino hardware libraries src leamdns transfer cpp warning stack usage is bytes bool mdnsresponder writemdnshostdomain const char p pchostname c users earle documents arduino 
hardware libraries src leamdns transfer cpp in member function bool mdnsimplementation mdnsresponder writemdnsservicedomain const mdnsimplementation mdnsresponder stcmdnsservice bool bool mdnsimplementation mdnsresponder stcmdnssendparameter c users earle documents arduino hardware libraries src leamdns transfer cpp warning stack usage is bytes bool mdnsresponder writemdnsservicedomain const mdnsresponder stcmdnsservice p service c users earle documents arduino hardware libraries src leamdns transfer cpp in member function bool mdnsimplementation mdnsresponder writemdnsanswer ptr ipaddress mdnsimplementation mdnsresponder stcmdnssendparameter c users earle documents arduino hardware libraries src leamdns transfer cpp warning stack usage is bytes bool mdnsresponder writemdnsanswer ptr ipaddress p ipaddress c users earle documents arduino hardware libraries src leamdns transfer cpp in member function bool mdnsimplementation mdnsresponder writemdnsanswer ptr type mdnsimplementation mdnsresponder stcmdnsservice mdnsimplementation mdnsresponder stcmdnssendparameter c users earle documents arduino hardware libraries src leamdns transfer cpp warning stack usage is bytes bool mdnsresponder writemdnsanswer ptr type mdnsresponder stcmdnsservice p rservice c users earle documents arduino hardware libraries src leamdns transfer cpp in member function bool mdnsimplementation mdnsresponder writemdnsanswer srv mdnsimplementation mdnsresponder stcmdnsservice mdnsimplementation mdnsresponder stcmdnssendparameter c users earle documents arduino hardware libraries src leamdns transfer cpp warning stack usage is bytes bool mdnsresponder writemdnsanswer srv mdnsresponder stcmdnsservice p rservice crypto c has some large stacks which might be unavoidable given the algorithm but we may want to consider moving it to heap or refactoring the code c users earle documents arduino hardware cores crypto cpp in function void anonymous createbearsslhmac const br hash class const void size t const void size t void size t c users earle documents arduino hardware cores crypto cpp warning stack usage is bytes void createbearsslhmac const br hash class hashtype const void data const size t datalength const void hashkey const size t hashkeylength void resultarray const size t outputlength c users earle documents arduino hardware cores crypto cpp in function void anonymous createbearsslhmacct const br hash class const void size t const void size t void size t c users earle documents arduino hardware cores crypto cpp warning stack usage is bytes void createbearsslhmacct const br hash class hashtype const void data const size t datalength const void hashkey const size t hashkeylength void resultarray const size t outputlength c users earle documents arduino hardware cores crypto cpp in function void experimental crypto int void size t const void const void size t const void void const void size t c users earle documents arduino hardware cores crypto cpp warning stack usage is bytes void const int encrypt void data const size t datalength const void key const void keysalt const size t keysaltlength
| 0
|
165,579
| 20,602,414,097
|
IssuesEvent
|
2022-03-06 13:23:22
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
opened
|
[Cloud Posture] Add support for search bar in Findings endpoint
|
csp Team:Cloud Security Posture
|
Following the transfer of the findings endpoint logic to the backend, add support for term search, a tags filter, and a date-range filter
|
1.0
|
[Cloud Posture] Add support for search bar in Findings endpoint - Following the transfer of the findings endpoint logic to the backend, add support for term search, a tags filter, and a date-range filter
|
non_defect
|
add support for search bar in findings endpoint following the transfer logic of findings endpoint to the backend add support to term search tags filter and date range filter
| 0
|
40,397
| 9,981,574,416
|
IssuesEvent
|
2019-07-10 07:50:59
|
cakephp/bake
|
https://api.github.com/repos/cakephp/bake
|
closed
|
Inconsistent naming of baked test class and an actual class
|
Defect
|
CakePHP 3.6.2
Bake 1.7.3
## Steps to reproduce
Bake a class. Name of the class should have a suffix, same as the baking type.
`$ ./bin/cake bake shell TestShell`
```
Creating file .../app/src/Shell/TestShellShell.php
Wrote `.../app/src/Shell/TestShellShell.php`
Baking test case for App\Shell\TestShell ...
Creating file .../app/tests/TestCase/Shell/TestShellTest.php
Wrote `.../app/tests/TestCase/Shell/TestShellTest.php`
```
## Expected
Class file and test file should be named the same: either `TestShellShell` & `TestShellShellTest` or `TestShell` and `TestShellTest`
## Actual result
Resulting test file has a different name and referrers to incorrect class: `TestShell` instead of `TestShellShell`.
|
1.0
|
Inconsistent naming of baked test class and an actual class - CakePHP 3.6.2
Bake 1.7.3
## Steps to reproduce
Bake a class. Name of the class should have a suffix, same as the baking type.
`$ ./bin/cake bake shell TestShell`
```
Creating file .../app/src/Shell/TestShellShell.php
Wrote `.../app/src/Shell/TestShellShell.php`
Baking test case for App\Shell\TestShell ...
Creating file .../app/tests/TestCase/Shell/TestShellTest.php
Wrote `.../app/tests/TestCase/Shell/TestShellTest.php`
```
## Expected
Class file and test file should be named the same: either `TestShellShell` & `TestShellShellTest` or `TestShell` and `TestShellTest`
## Actual result
Resulting test file has a different name and referrers to incorrect class: `TestShell` instead of `TestShellShell`.
|
defect
|
inconsistent naming of baked test class and an actual class cakephp bake steps to reproduce bake a class name of the class should have a suffix same as the baking type bin cake bake shell testshell creating file app src shell testshellshell php wrote app src shell testshellshell php baking test case for app shell testshell creating file app tests testcase shell testshelltest php wrote app tests testcase shell testshelltest php expected class file and test file should be named the same either testshellshell testshellshelltest or testshell and testshelltest actual result resulting test file has a different name and referrers to incorrect class testshell instead of testshellshell
| 1
|
689,834
| 23,635,879,772
|
IssuesEvent
|
2022-08-25 13:16:49
|
infor-design/enterprise
|
https://api.github.com/repos/infor-design/enterprise
|
closed
|
DataGrid: In modal gets inline style which is not desired
|
type: regression bug :leftwards_arrow_with_hook: [3] priority: high
|
<!-- Please be aware that this is a publicly visible bug report. Do not post any credentials, screenshots with proprietary information, or anything you think shouldn't be visible to the world. If reporting a security issue such as a xss vulnerability. Please use the [security advisories feature](https://github.com/infor-design/enterprise/security/advisories). If private information is required to be shared for a quality bug report, please email one of the [code owners](https://github.com/infor-design/enterprise/blob/main/.github/CODEOWNERS) directly. -->
**Describe the bug**
You can reproduce this issue by checking out `14.1.x` build and run it.
Then go M > Modal Dialog
Click on Dialog with Data Grid


Now if you check out `14.0.x` build and start it


You can already notice a visible difference. The modal in 14.0.x is wider, and does not have added inline width style.
**To Reproduce**
<!-- Please spend a little time to make an accurate reduced test case for the issue. The more code you include the less likely is that the issue can be fixed quickly (or at all). This is a good article about reduced test cases if your unfamiliar https://css-tricks.com/reduced-test-cases/. -->
Steps to reproduce the behavior:
The repository used is `enterprise-ng` to reproduce.
**Expected behavior**
The inline width style is not desired.
**Version**
- ids-enterprise-ng: 14.2.1
|
1.0
|
DataGrid: In modal gets inline style which is not desired - <!-- Please be aware that this is a publicly visible bug report. Do not post any credentials, screenshots with proprietary information, or anything you think shouldn't be visible to the world. If reporting a security issue such as a xss vulnerability. Please use the [security advisories feature](https://github.com/infor-design/enterprise/security/advisories). If private information is required to be shared for a quality bug report, please email one of the [code owners](https://github.com/infor-design/enterprise/blob/main/.github/CODEOWNERS) directly. -->
**Describe the bug**
You can reproduce this issue by checking out `14.1.x` build and run it.
Then go M > Modal Dialog
Click on Dialog with Data Grid


Now if you check out `14.0.x` build and start it


You can already notice a visible difference. The modal in 14.0.x is wider, and does not have added inline width style.
**To Reproduce**
<!-- Please spend a little time to make an accurate reduced test case for the issue. The more code you include the less likely is that the issue can be fixed quickly (or at all). This is a good article about reduced test cases if your unfamiliar https://css-tricks.com/reduced-test-cases/. -->
Steps to reproduce the behavior:
The repository used is `enterprise-ng` to reproduce.
**Expected behavior**
The inline width style is not desired.
**Version**
- ids-enterprise-ng: 14.2.1
|
non_defect
|
datagrid in modal gets inline style which is not desired describe the bug you can reproduce this issue by checking out x build and run it then go m modal dialog click on dialog with data grid now if you check out x build and start it you can already notice a visible difference the modal in x is wider and does not have added inline width style to reproduce steps to reproduce the behavior the repository used is enterprise ng to reproduce expected behavior the inline width style is not desired version ids enterprise ng
| 0
|
83,898
| 7,883,840,056
|
IssuesEvent
|
2018-06-27 07:10:21
|
h2oai/datatable
|
https://api.github.com/repos/h2oai/datatable
|
closed
|
test_dt_load_time causes spontaneous failures
|
test
|
On some systems, loading time takes 0.29s or even 0.36s (above the limit of 0.25s). Need to investigate why this is happening, and either fix the problem or adjust the test.
|
1.0
|
test_dt_load_time causes spontaneous failures - On some systems, loading time takes 0.29s or even 0.36s (above the limit of 0.25s). Need to investigate why this is happening, and either fix the problem or adjust the test.
|
non_defect
|
test dt load time causes spontaneous failures on some systems loading time takes or even above the limit of need to investigate why this is happening and either fix the problem or adjust the test
| 0
|
6,801
| 2,610,280,456
|
IssuesEvent
|
2015-02-26 19:29:39
|
chrsmith/scribefire-chrome
|
https://api.github.com/repos/chrsmith/scribefire-chrome
|
closed
|
Draft versions not saving
|
auto-migrated Priority-Medium Type-Defect
|
```
What's the problem?
I save a draft. If I close ScribeFire, or if I start a subsequent draft within
the same session, my saved draft vanishes.
What browser are you using?
Opera 11.50
Build 1074
Platform Linux (Mint 9)
System
i686, 2.6.32-21-generic
What version of ScribeFire are you running?
1.7.1
```
-----
Original issue reported on code.google.com by `anthropo...@gmail.com` on 4 Jul 2011 at 5:59
|
1.0
|
Draft versions not saving - ```
What's the problem?
I save a draft. If I close ScribeFire, or if I start a subsequent draft within
the same session, my saved draft vanishes.
What browser are you using?
Opera 11.50
Build 1074
Platform Linux (Mint 9)
System
i686, 2.6.32-21-generic
What version of ScribeFire are you running?
1.7.1
```
-----
Original issue reported on code.google.com by `anthropo...@gmail.com` on 4 Jul 2011 at 5:59
|
defect
|
draft versions not saving what s the problem i save a draft if i close scribefire or if i start a subsequent draft within the same session my saved draft vanishes what browser are you using opera build platform linux mint system generic what version of scribefire are you running original issue reported on code google com by anthropo gmail com on jul at
| 1
|
119,339
| 25,511,223,925
|
IssuesEvent
|
2022-11-28 13:15:47
|
5l1D3R/Github-actions
|
https://api.github.com/repos/5l1D3R/Github-actions
|
opened
|
CVE: 2022-21363 found in MySQL java connector - Version: 5.1.35 [JAVA]
|
Severity: High Veracode Dependency Scanning
|
Veracode Software Composition Analysis
===============================
Attribute | Details
| --- | --- |
Library | MySQL java connector
Description | MySQL java connector
Language | JAVA
Vulnerability | Privilege Escalation
Vulnerability description | mysql-connector is vulnerable to privilege escalation. An attacker can exploit the vulnerability and takeover the MySQL Connectors.
CVE | 2022-21363
CVSS score | 6
Vulnerability present in version/s | 5.1.29-8.0.27
Found library version/s | 5.1.35
Vulnerability fixed in version | 8.0.28
Library latest version | 8.0.31
Fix |
Links:
- https://sca.analysiscenter.veracode.com/vulnerability-database/libraries/1834?version=5.1.35
- https://sca.analysiscenter.veracode.com/vulnerability-database/vulnerabilities/35820
- Patch: https://github.com/mysql/mysql-connector-j/commit/4993d5735fd84a46e7d949ad1bcaa0e9bb039824
|
1.0
|
CVE: 2022-21363 found in MySQL java connector - Version: 5.1.35 [JAVA] - Veracode Software Composition Analysis
===============================
Attribute | Details
| --- | --- |
Library | MySQL java connector
Description | MySQL java connector
Language | JAVA
Vulnerability | Privilege Escalation
Vulnerability description | mysql-connector is vulnerable to privilege escalation. An attacker can exploit the vulnerability and takeover the MySQL Connectors.
CVE | 2022-21363
CVSS score | 6
Vulnerability present in version/s | 5.1.29-8.0.27
Found library version/s | 5.1.35
Vulnerability fixed in version | 8.0.28
Library latest version | 8.0.31
Fix |
Links:
- https://sca.analysiscenter.veracode.com/vulnerability-database/libraries/1834?version=5.1.35
- https://sca.analysiscenter.veracode.com/vulnerability-database/vulnerabilities/35820
- Patch: https://github.com/mysql/mysql-connector-j/commit/4993d5735fd84a46e7d949ad1bcaa0e9bb039824
|
non_defect
|
cve found in mysql java connector version veracode software composition analysis attribute details library mysql java connector description mysql java connector language java vulnerability privilege escalation vulnerability description mysql connector is vulnerable to privilege escalation an attacker can exploit the vulnerability and takeover the mysql connectors cve cvss score vulnerability present in version s found library version s vulnerability fixed in version library latest version fix links patch
| 0
|
61,184
| 17,023,627,717
|
IssuesEvent
|
2021-07-03 03:00:13
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Private tracks visible
|
Component: website Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 12.34pm, Wednesday, 1st September 2010]**
Private tracks are visible on the user-specific list http://www.openstreetmap.org/user/<user>/traces even if not logged in. The tracks are only visible on the german version of the page, the english one is ok.
How to reproduce:
1. upload a track with "private" visibiliy
2. log out
3. change settings of web browser to prefer german language
4. go to http://www.openstreetmap.org/user/<user>/traces
You will see a list of all tracks, including the private one.
|
1.0
|
Private tracks visible - **[Submitted to the original trac issue database at 12.34pm, Wednesday, 1st September 2010]**
Private tracks are visible on the user-specific list http://www.openstreetmap.org/user/<user>/traces even if not logged in. The tracks are only visible on the german version of the page, the english one is ok.
How to reproduce:
1. upload a track with "private" visibiliy
2. log out
3. change settings of web browser to prefer german language
4. go to http://www.openstreetmap.org/user/<user>/traces
You will see a list of all tracks, including the private one.
|
defect
|
private tracks visible private tracks are visible on the user specific list even if not logged in the tracks are only visible on the german version of the page the english one is ok how to reproduce upload a track with private visibiliy log out change settings of web browser to prefer german language go to you will see a list of all tracks including the private one
| 1
|
79,954
| 29,797,819,915
|
IssuesEvent
|
2023-06-16 05:08:10
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
Partition table frozen on rolling nodes restart
|
Type: Defect
|
<!--
Thanks for reporting your issue. Please share with us the following information, to help us resolve your issue quickly and efficiently.
-->
**Describe the bug**
During rolling restart of Hazelcast cluster nodes partition table frozen (probably because of failed migration) . That caused put/get operations to target unavailable members.
**Expected behavior**
Failed migration should not stop partition table updates.
**To Reproduce**
Unable to reproduce, it happened only once in our environment.
**Additional context**
It looks like we encountered some race condition in hazelcast when redeploying our application.
Our application uses hazelcast in embedded mode. We have 64 nodes, use mostly map structure, the cluster is not heavy utilized.
I'm attaching a file with all the details I found out - i cannot add this inline as it is too big.
Please let me know if I can add any more information.
Thanks
Aleksander
[hz.md](https://github.com/hazelcast/hazelcast/files/11766177/hz.md)
|
1.0
|
Partition table frozen on rolling nodes restart - <!--
Thanks for reporting your issue. Please share with us the following information, to help us resolve your issue quickly and efficiently.
-->
**Describe the bug**
During rolling restart of Hazelcast cluster nodes partition table frozen (probably because of failed migration) . That caused put/get operations to target unavailable members.
**Expected behavior**
Failed migration should not stop partition table updates.
**To Reproduce**
Unable to reproduce, it happened only once in our environment.
**Additional context**
It looks like we encountered some race condition in hazelcast when redeploying our application.
Our application uses hazelcast in embedded mode. We have 64 nodes, use mostly map structure, the cluster is not heavy utilized.
I'm attaching a file with all the details I found out - i cannot add this inline as it is too big.
Please let me know if I can add any more information.
Thanks
Aleksander
[hz.md](https://github.com/hazelcast/hazelcast/files/11766177/hz.md)
|
defect
|
partition table frozen on rolling nodes restart thanks for reporting your issue please share with us the following information to help us resolve your issue quickly and efficiently describe the bug during rolling restart of hazelcast cluster nodes partition table frozen probably because of failed migration that caused put get operations to target unavailable members expected behavior failed migration should not stop partition table updates to reproduce unable to reproduce it happened only once in our environment additional context it looks like we encountered some race condition in hazelcast when redeploying our application our application uses hazelcast in embedded mode we have nodes use mostly map structure the cluster is not heavy utilized i m attaching a file with all the details i found out i cannot add this inline as it is too big please let me know if i can add any more information thanks aleksander
| 1
|
732,187
| 25,248,107,613
|
IssuesEvent
|
2022-11-15 12:43:43
|
akvo/akvo-rsr
|
https://api.github.com/repos/akvo/akvo-rsr
|
closed
|
Feature Request: UI for aggregation tasks
|
Feature request Priority: High
|
### What are you trying to do?
See how the aggregation tasks are going.
### Describe the solution you'd like
A view of all aggregation tasks at program level in a new tab.

or

There will also be a button or some kind of method required to relaunch tasks that require intervention.
- SCHEDULED
- RUNNING
- FINISHED
- FAILED
- REQUIRES_INTERVENTION (name to be decided)
### Have you consider alternatives?
_No response_
### Additional context
_No response_
|
1.0
|
Feature Request: UI for aggregation tasks - ### What are you trying to do?
See how the aggregation tasks are going.
### Describe the solution you'd like
A view of all aggregation tasks at program level in a new tab.

or

There will also be a button or some kind of method required to relaunch tasks that require intervention.
- SCHEDULED
- RUNNING
- FINISHED
- FAILED
- REQUIRES_INTERVENTION (name to be decided)
### Have you consider alternatives?
_No response_
### Additional context
_No response_
|
non_defect
|
feature request ui for aggregation tasks what are you trying to do see how the aggregation tasks are going describe the solution you d like a view of all aggregation tasks at program level in a new tab or there will also be a button or some kind of method required to relaunch tasks that require intervention scheduled running finished failed requires intervention name to be decided have you consider alternatives no response additional context no response
| 0
|
34,820
| 7,460,679,965
|
IssuesEvent
|
2018-03-30 20:51:39
|
kerdokullamae/test_koik_issued
|
https://api.github.com/repos/kerdokullamae/test_koik_issued
|
closed
|
Täpne otsing leidandmete järgi vahel aegub
|
C: AIS P: highest R: fixed T: defect
|
**Reported by koitsaarevet on 17 Mar 2017 12:10 UTC**
http://www.dev-ais-web.arhiiv.ee täpse otsingu vormil päring leidandmed = TLA.230 veeretab liivakella u 50 sekundit ja siis tuleb veateade: Error: Maximum execution time of 30 seconds exceeded.
Samas päring leidandmed = ERA.1 annab vastuse paari sekundiga.
http://ais2.arhiiv.ee toimib ka TLA.230 päring korrektselt.
|
1.0
|
Täpne otsing leidandmete järgi vahel aegub - **Reported by koitsaarevet on 17 Mar 2017 12:10 UTC**
http://www.dev-ais-web.arhiiv.ee täpse otsingu vormil päring leidandmed = TLA.230 veeretab liivakella u 50 sekundit ja siis tuleb veateade: Error: Maximum execution time of 30 seconds exceeded.
Samas päring leidandmed = ERA.1 annab vastuse paari sekundiga.
http://ais2.arhiiv.ee toimib ka TLA.230 päring korrektselt.
|
defect
|
täpne otsing leidandmete järgi vahel aegub reported by koitsaarevet on mar utc täpse otsingu vormil päring leidandmed tla veeretab liivakella u sekundit ja siis tuleb veateade error maximum execution time of seconds exceeded samas päring leidandmed era annab vastuse paari sekundiga toimib ka tla päring korrektselt
| 1
|
74,709
| 14,289,443,247
|
IssuesEvent
|
2020-11-23 19:16:05
|
mozilla/foundation.mozilla.org
|
https://api.github.com/repos/mozilla/foundation.mozilla.org
|
closed
|
PNI - namespace the buyersguide `template` content
|
Buyer's Guide 🛍 code cleanup engineering localization 🌎
|
We currently have all our templating code in `networkapi/buyersguide/templates`, so it will conflict with files found in `networkapi/templates` when there are files with the same name.
The solution is to make a dir called `networkapi/buyersguide/templates/buyersguide` and then move everything that's in `buyersguide/templates` into that new `buyersguide/templates/buyersguide`. That way, django won't try to merge them with the files found in the base templates dir.
- [ ] create new `networkapi/buyersguide/templates/buyersguide` dir
- [ ] move templates into this new dir
- [ ] update https://github.com/mozilla/foundation.mozilla.org/blob/master/translation-management.sh#L37-L51
- [ ] coordinate a change for https://github.com/mozilla-l10n/fomo-l10n/ with @TheoChevalier / MoCo
|
1.0
|
PNI - namespace the buyersguide `template` content - We currently have all our templating code in `networkapi/buyersguide/templates`, so it will conflict with files found in `networkapi/templates` when there are files with the same name.
The solution is to make a dir called `networkapi/buyersguide/templates/buyersguide` and then move everything that's in `buyersguide/templates` into that new `buyersguide/templates/buyersguide`. That way, django won't try to merge them with the files found in the base templates dir.
- [ ] create new `networkapi/buyersguide/templates/buyersguide` dir
- [ ] move templates into this new dir
- [ ] update https://github.com/mozilla/foundation.mozilla.org/blob/master/translation-management.sh#L37-L51
- [ ] coordinate a change for https://github.com/mozilla-l10n/fomo-l10n/ with @TheoChevalier / MoCo
|
non_defect
|
pni namespace the buyersguide template content we currently have all our templating code in networkapi buyersguide templates so it will conflict with files found in networkapi templates when there are files with the same name the solution is to make a dir called networkapi buyersguide templates buyersguide and then move everything that s in buyersguide templates into that new buyersguide templates buyersguide that way django won t try to merge them with the files found in the base templates dir create new networkapi buyersguide templates buyersguide dir move templates into this new dir update coordinate a change for with theochevalier moco
| 0
|
64,464
| 18,684,702,774
|
IssuesEvent
|
2021-11-01 10:54:12
|
obophenotype/cell-ontology
|
https://api.github.com/repos/obophenotype/cell-ontology
|
closed
|
PATO differentium for non-terminallly differentiated cell
|
Priority-Medium Type-Defect auto-migrated autoclosed-unfixed
|
```
I think that this is a useful general class as long as relevant subclasses can
be autoclassified under it:
[Term]
id: CL:0000055
name: non-terminally differentiated cell
namespace: cell
def: "A precursor cell with a limited number of potential fates." [SANBI:mhl]
synonym: "blast cell" EXACT []
xref: BTO:0000125
xref: FMA:84782
is_a: CL:0000003 ! native cell
It was originally defined using the PATO differentium 'differentiated', which
is clearly wrong. I have removed this but would like a way to define a pair of
disjoint classes to be used for autoclassification:
terminally differentiated cell
non-terminally differentiated cell
For the former, perhaps we need PATO terminally differentiated instead of just
PATO differentiated?
For the latter, perhaps use PATO mulit-potent or oligopotent?
```
Original issue reported on code.google.com by `dosu...@gmail.com` on 27 Feb 2012 at 10:46
|
1.0
|
PATO differentium for non-terminallly differentiated cell - ```
I think that this is a useful general class as long as relevant subclasses can
be autoclassified under it:
[Term]
id: CL:0000055
name: non-terminally differentiated cell
namespace: cell
def: "A precursor cell with a limited number of potential fates." [SANBI:mhl]
synonym: "blast cell" EXACT []
xref: BTO:0000125
xref: FMA:84782
is_a: CL:0000003 ! native cell
It was originally defined using the PATO differentium 'differentiated', which
is clearly wrong. I have removed this but would like a way to define a pair of
disjoint classes to be used for autoclassification:
terminally differentiated cell
non-terminally differentiated cell
For the former, perhaps we need PATO terminally differentiated instead of just
PATO differentiated?
For the latter, perhaps use PATO mulit-potent or oligopotent?
```
Original issue reported on code.google.com by `dosu...@gmail.com` on 27 Feb 2012 at 10:46
|
defect
|
pato differentium for non terminallly differentiated cell i think that this is a useful general class as long as relevant subclasses can be autoclassified under it id cl name non terminally differentiated cell namespace cell def a precursor cell with a limited number of potential fates synonym blast cell exact xref bto xref fma is a cl native cell it was originally defined using the pato differentium differentiated which is clearly wrong i have removed this but would like a way to define a pair of disjoint classes to be used for autoclassification terminally differentiated cell non terminally differentiated cell for the former perhaps we need pato terminally differentiated instead of just pato differentiated for the latter perhaps use pato mulit potent or oligopotent original issue reported on code google com by dosu gmail com on feb at
| 1
|
75,815
| 26,076,641,262
|
IssuesEvent
|
2022-12-24 16:35:55
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
DataView: Pagination fills grid except for last page
|
:lady_beetle: defect
|
### Describe the bug
The DataView grid appears to have a mind of its own as far as evenly filling the grid.
Go to https://www.primefaces.org/showcase/ui/data/dataview/lazy.xhtml
The dropdown to select # of items is populated with 6,12 and 16. 6 and 12 appear to work fine because they are evenly divisible by 3 but choosing 16 shows one aspect of what I'm seeing. If you choose 16 you actually end up with 6 full rows of 3 which is 18 items.
I noticed if I had a non-lazy DataView grid with 20 total items with rows="10" then it will "steal" 2 more items from the total dataset and display 1-12 even though I've asked for 10, the next page correctly shows just 10 with the last "row" only having a single griditem.
In another example in my application, I have a lazy data model where it's pulling just the required information from the data source then I've noticed it does similar thing on first page and where I have 13 total items and set rows="10" then the LazyDataModel is correctly passing first=0 and maxResults=10 to my load method, and the data source is ONLY returning 10 items but the DataView is displaying 12 grid items. It ends up repeating the "last" 2 grid items again to fill out the 4th row.
So it's either stealing extra items from the data set(if available) or duplicating ones it has access to to fill the grid on the first page (and probably each one in between?) but the last page correctly shows the remaining 1 or 2 if it's not a full row.
So I'm not sure if this is intended behavior and we should enforce multiples of 3 for rows in the paging template or this is a defect.
In the documentation for the DataView: https://primefaces.github.io/primefaces/12_0_0/#/components/dataview
* The Ajax Pagination section suggests rowsPerPageTemplate="6,12,16"
* Then just below it in Paginator Template section it has rowsPerPageTemplate="9,12,15" as suggested options.
In any case, the DataView in list view appears to correctly show 10 items as asked.
### Reproducer
https://www.primefaces.org/showcase/ui/data/dataview/lazy.xhtml
choose 16 items per page
End up with 18 items per page
### Expected behavior
Only display 16 items as long as there are 16 or more.
### PrimeFaces edition
Community
### PrimeFaces version
12.0.0
### Theme
_No response_
### JSF implementation
MyFaces
### JSF version
2.3 (MyFaces 2.3.9)
### Java version
17.0.5-8
### Browser(s)
_No response_
|
1.0
|
DataView: Pagination fills grid except for last page - ### Describe the bug
The DataView grid appears to have a mind of its own as far as evenly filling the grid.
Go to https://www.primefaces.org/showcase/ui/data/dataview/lazy.xhtml
The dropdown to select # of items is populated with 6,12 and 16. 6 and 12 appear to work fine because they are evenly divisible by 3 but choosing 16 shows one aspect of what I'm seeing. If you choose 16 you actually end up with 6 full rows of 3 which is 18 items.
I noticed if I had a non-lazy DataView grid with 20 total items with rows="10" then it will "steal" 2 more items from the total dataset and display 1-12 even though I've asked for 10, the next page correctly shows just 10 with the last "row" only having a single griditem.
In another example in my application, I have a lazy data model where it's pulling just the required information from the data source then I've noticed it does similar thing on first page and where I have 13 total items and set rows="10" then the LazyDataModel is correctly passing first=0 and maxResults=10 to my load method, and the data source is ONLY returning 10 items but the DataView is displaying 12 grid items. It ends up repeating the "last" 2 grid items again to fill out the 4th row.
So it's either stealing extra items from the data set(if available) or duplicating ones it has access to to fill the grid on the first page (and probably each one in between?) but the last page correctly shows the remaining 1 or 2 if it's not a full row.
So I'm not sure if this is intended behavior and we should enforce multiples of 3 for rows in the paging template or this is a defect.
In the documentation for the DataView: https://primefaces.github.io/primefaces/12_0_0/#/components/dataview
* The Ajax Pagination section suggests rowsPerPageTemplate="6,12,16"
* Then just below it in Paginator Template section it has rowsPerPageTemplate="9,12,15" as suggested options.
In any case, the DataView in list view appears to correctly show 10 items as asked.
### Reproducer
https://www.primefaces.org/showcase/ui/data/dataview/lazy.xhtml
choose 16 items per page
End up with 18 items per page
### Expected behavior
Only display 16 items as long as there are 16 or more.
### PrimeFaces edition
Community
### PrimeFaces version
12.0.0
### Theme
_No response_
### JSF implementation
MyFaces
### JSF version
2.3 (MyFaces 2.3.9)
### Java version
17.0.5-8
### Browser(s)
_No response_
|
defect
|
dataview pagination fills grid except for last page describe the bug the dataview grid appears to have a mind of its own as far as evenly filling the grid go to the dropdown to select of items is populated with and and appear to work fine because they are evenly divisible by but choosing shows one aspect of what i m seeing if you choose you actually end up with full rows of which is items i noticed if i had a non lazy dataview grid with total items with rows then it will steal more items from the total dataset and display even though i ve asked for the next page correctly shows just with the last row only having a single griditem in another example in my application i have a lazy data model where it s pulling just the required information from the data source then i ve noticed it does similar thing on first page and where i have total items and set rows then the lazydatamodel is correctly passing first and maxresults to my load method and the data source is only returning items but the dataview is displaying grid items it ends up repeating the last grid items again to fill out the row so it s either stealing extra items from the data set if available or duplicating ones it has access to to fill the grid on the first page and probably each one in between but the last page correctly shows the remaining or if it s not a full row so i m not sure if this is intended behavior and we should enforce multiples of for rows in the paging template or this is a defect in the documentation for the dataview the ajax pagination section suggests rowsperpagetemplate then just below it in paginator template section it has rowsperpagetemplate as suggested options in any case the dataview in list view appears to correctly show items as asked reproducer choose items per page end up with items per page expected behavior only display items as long as there are or more primefaces edition community primefaces version theme no response jsf implementation myfaces jsf version myfaces java version browser s no response
| 1
|
35,665
| 5,000,405,669
|
IssuesEvent
|
2016-12-10 09:25:24
|
nicolargo/glances
|
https://api.github.com/repos/nicolargo/glances
|
closed
|
Enhancements for "glances --browser"
|
enhancement needs test
|
**Description:**
I was testing "glances --browser" and I found it a great feature to help to fast monitor a couple of servers. But theres some things that could be enhanced, I will list my suggestions bellow:
- A auto-connect support:
> Every time I open the "glances --browser" I have to manually enter and exit in each server to connect to they. Only after this the server data in the list starts to update. It would be much more fast and easier to put the Glances to connect to all server on start instead.
- A auto-reconnect support:
> I use the internet to list the most of servers, so sometimes I lose the connection to a server and it shows as off-line in Glances. I would be very nice if a auto-reconnect option in glances.conf to allow enable the reconnection, the number of attempts and the interval.
- A text color and highlight per item per server:
> Currently the "glances --browser" colorizes the entire line of the server based on status. But the ideal is that each item have their own color/highlight independently based on careful/warning/critical configs for each server (and if my #950 suggestion be accept with bells too). => Will be implemented in https://github.com/nicolargo/glances/issues/977
|
1.0
|
Enhancements for "glances --browser" - **Description:**
I was testing "glances --browser" and I found it a great feature to help to fast monitor a couple of servers. But theres some things that could be enhanced, I will list my suggestions bellow:
- A auto-connect support:
> Every time I open the "glances --browser" I have to manually enter and exit in each server to connect to they. Only after this the server data in the list starts to update. It would be much more fast and easier to put the Glances to connect to all server on start instead.
- A auto-reconnect support:
> I use the internet to list the most of servers, so sometimes I lose the connection to a server and it shows as off-line in Glances. I would be very nice if a auto-reconnect option in glances.conf to allow enable the reconnection, the number of attempts and the interval.
- A text color and highlight per item per server:
> Currently the "glances --browser" colorizes the entire line of the server based on status. But the ideal is that each item have their own color/highlight independently based on careful/warning/critical configs for each server (and if my #950 suggestion be accept with bells too). => Will be implemented in https://github.com/nicolargo/glances/issues/977
|
non_defect
|
enhancements for glances browser description i was testing glances browser and i found it a great feature to help to fast monitor a couple of servers but theres some things that could be enhanced i will list my suggestions bellow a auto connect support every time i open the glances browser i have to manually enter and exit in each server to connect to they only after this the server data in the list starts to update it would be much more fast and easier to put the glances to connect to all server on start instead a auto reconnect support i use the internet to list the most of servers so sometimes i lose the connection to a server and it shows as off line in glances i would be very nice if a auto reconnect option in glances conf to allow enable the reconnection the number of attempts and the interval a text color and highlight per item per server currently the glances browser colorizes the entire line of the server based on status but the ideal is that each item have their own color highlight independently based on careful warning critical configs for each server and if my suggestion be accept with bells too will be implemented in
| 0
|
261,588
| 22,755,171,051
|
IssuesEvent
|
2022-07-07 15:57:01
|
ethereum/solidity
|
https://api.github.com/repos/ethereum/solidity
|
closed
|
External test for Gnosis Protocol v2 fails with `TypeError: authenticator.connect is not a function` and other errors
|
bug :bug: testing :hammer:
|
https://app.circleci.com/pipelines/github/ethereum/solidity/24994/workflows/03537476-6acb-4266-824e-f0b89854df3c/jobs/1100938
The repo hasn't changed so maybe something on our end broke it?
Finding the offending commit might help
|
1.0
|
External test for Gnosis Protocol v2 fails with `TypeError: authenticator.connect is not a function` and other errors - https://app.circleci.com/pipelines/github/ethereum/solidity/24994/workflows/03537476-6acb-4266-824e-f0b89854df3c/jobs/1100938
The repo hasn't changed so maybe something on our end broke it?
Finding the offending commit might help
|
non_defect
|
external test for gnosis protocol fails with typeerror authenticator connect is not a function and other errors the repo hasn t changed so maybe something on our end broke it finding the offending commit might help
| 0
|
11,488
| 5,011,878,407
|
IssuesEvent
|
2016-12-13 09:32:43
|
LLNL/spack
|
https://api.github.com/repos/LLNL/spack
|
closed
|
SLEPc fails to configure with Spack's python
|
bug build-error package python
|
I just wiped my installation of Spack to re-install and check things and got the error `Symbol not found: __PyCodecInfo_GetIncrementalDecoder`:
```
==> './configure' '--prefix=/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/slepc-3.7.3-rhcxmg2ntqe3v6epgljeseffnpa4gla2' '--with-arpack-dir=/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/arpack-ng-3.4.0-g76ncwdpqcyx5lm5e65ydwaetbx5sulo/lib' '--with-arpack-flags=-lparpack,-larpack'
Traceback (most recent call last):
File "./configure", line 10, in <module>
execfile(os.path.join(os.path.dirname(__file__), 'config', 'configure.py'))
File "./config/configure.py", line 140, in <module>
import slepc, petsc, arpack, blzpack, trlan, feast, primme, blopex, sowing, lapack
File "/private/var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/davydden/spack-stage/spack-stage-ZOF1pH/slepc-3.7.3/config/packages/petsc.py", line 22, in <module>
import package, os, sys, commands
File "/private/var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/davydden/spack-stage/spack-stage-ZOF1pH/slepc-3.7.3/config/package.py", line 22, in <module>
import os, sys, commands, tempfile, shutil, urllib, urlparse, tarfile
File "/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/python-2.7.12-6dtr7kw2sj5zu7z7v3ox3agrmpw5cndt/lib/python2.7/tempfile.py", line 32, in <module>
import io as _io
File "/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/python-2.7.12-6dtr7kw2sj5zu7z7v3ox3agrmpw5cndt/lib/python2.7/io.py", line 51, in <module>
import _io
ImportError: dlopen(/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/python-2.7.12-6dtr7kw2sj5zu7z7v3ox3agrmpw5cndt/lib/python2.7/lib-dynload/_io.so, 2): Symbol not found: __PyCodecInfo_GetIncrementalDecoder
Referenced from: /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/python-2.7.12-6dtr7kw2sj5zu7z7v3ox3agrmpw5cndt/lib/python2.7/lib-dynload/_io.so
Expected in: flat namespace
in /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/python-2.7.12-6dtr7kw2sj5zu7z7v3ox3agrmpw5cndt/lib/python2.7/lib-dynload/_io.so
```
Looking at the history of `python` package, i don't see what could have led to this.
For now will be using
```
python:
version: [2.7.10]
paths:
python@2.7.10: /usr
buildable: False
```
|
1.0
|
SLEPc fails to configure with Spack's python - I just wiped my installation of Spack to re-install and check things and got the error `Symbol not found: __PyCodecInfo_GetIncrementalDecoder`:
```
==> './configure' '--prefix=/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/slepc-3.7.3-rhcxmg2ntqe3v6epgljeseffnpa4gla2' '--with-arpack-dir=/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/arpack-ng-3.4.0-g76ncwdpqcyx5lm5e65ydwaetbx5sulo/lib' '--with-arpack-flags=-lparpack,-larpack'
Traceback (most recent call last):
File "./configure", line 10, in <module>
execfile(os.path.join(os.path.dirname(__file__), 'config', 'configure.py'))
File "./config/configure.py", line 140, in <module>
import slepc, petsc, arpack, blzpack, trlan, feast, primme, blopex, sowing, lapack
File "/private/var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/davydden/spack-stage/spack-stage-ZOF1pH/slepc-3.7.3/config/packages/petsc.py", line 22, in <module>
import package, os, sys, commands
File "/private/var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/davydden/spack-stage/spack-stage-ZOF1pH/slepc-3.7.3/config/package.py", line 22, in <module>
import os, sys, commands, tempfile, shutil, urllib, urlparse, tarfile
File "/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/python-2.7.12-6dtr7kw2sj5zu7z7v3ox3agrmpw5cndt/lib/python2.7/tempfile.py", line 32, in <module>
import io as _io
File "/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/python-2.7.12-6dtr7kw2sj5zu7z7v3ox3agrmpw5cndt/lib/python2.7/io.py", line 51, in <module>
import _io
ImportError: dlopen(/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/python-2.7.12-6dtr7kw2sj5zu7z7v3ox3agrmpw5cndt/lib/python2.7/lib-dynload/_io.so, 2): Symbol not found: __PyCodecInfo_GetIncrementalDecoder
Referenced from: /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/python-2.7.12-6dtr7kw2sj5zu7z7v3ox3agrmpw5cndt/lib/python2.7/lib-dynload/_io.so
Expected in: flat namespace
in /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.0.0-apple/python-2.7.12-6dtr7kw2sj5zu7z7v3ox3agrmpw5cndt/lib/python2.7/lib-dynload/_io.so
```
Looking at the history of `python` package, i don't see what could have led to this.
For now will be using
```
python:
version: [2.7.10]
paths:
python@2.7.10: /usr
buildable: False
```
|
non_defect
|
slepc fails to configure with spack s python i just wiped my installation of spack to re install and check things and got the error symbol not found pycodecinfo getincrementaldecoder configure prefix users davydden spack opt spack darwin sierra clang apple slepc with arpack dir users davydden spack opt spack darwin sierra clang apple arpack ng lib with arpack flags lparpack larpack traceback most recent call last file configure line in execfile os path join os path dirname file config configure py file config configure py line in import slepc petsc arpack blzpack trlan feast primme blopex sowing lapack file private var folders t davydden spack stage spack stage slepc config packages petsc py line in import package os sys commands file private var folders t davydden spack stage spack stage slepc config package py line in import os sys commands tempfile shutil urllib urlparse tarfile file users davydden spack opt spack darwin sierra clang apple python lib tempfile py line in import io as io file users davydden spack opt spack darwin sierra clang apple python lib io py line in import io importerror dlopen users davydden spack opt spack darwin sierra clang apple python lib lib dynload io so symbol not found pycodecinfo getincrementaldecoder referenced from users davydden spack opt spack darwin sierra clang apple python lib lib dynload io so expected in flat namespace in users davydden spack opt spack darwin sierra clang apple python lib lib dynload io so looking at the history of python package i don t see what could have led to this for now will be using python version paths python usr buildable false
| 0
|
407,640
| 11,924,841,343
|
IssuesEvent
|
2020-04-01 10:12:29
|
speedy-net/speedy-net
|
https://api.github.com/repos/speedy-net/speedy-net
|
closed
|
Upgrade Django to 2.2
|
low priority
|
I want to upgrade Speedy Net to use Django 2.0 or 2.1. We have to check our requirements, And also it's possible that we need to change things in our own code.
We don't use `django-modeltranslation` any more (I changed to use `django-translated-fields`) and I think all our requirements support Django 2.0, but I didn't check if all of them support Django 2.1.
|
1.0
|
Upgrade Django to 2.2 - I want to upgrade Speedy Net to use Django 2.0 or 2.1. We have to check our requirements, And also it's possible that we need to change things in our own code.
We don't use `django-modeltranslation` any more (I changed to use `django-translated-fields`) and I think all our requirements support Django 2.0, but I didn't check if all of them support Django 2.1.
|
non_defect
|
upgrade django to i want to upgrade speedy net to use django or we have to check our requirements and also it s possible that we need to change things in our own code we don t use django modeltranslation any more i changed to use django translated fields and i think all our requirements support django but i didn t check if all of them support django
| 0
|
21,806
| 3,561,396,647
|
IssuesEvent
|
2016-01-23 19:28:24
|
larcenists/larceny
|
https://api.github.com/repos/larcenists/larceny
|
closed
|
define-values raises an error with unbound variable set-cdr!
|
bug C: R7RS P: major T: defect
|
(Might be related to #680)
The following expression raises an error (executed on the REPL of `larceny -r7rs`):
```scheme
(define-values (a b c) (values 1 2 3))
Syntax violation: invalid reference
No binding available for set-cdr! in library (larceny r7rs macros)
Form: set-cdr!
Trace:
(set-cdr! a (cddr a))
(lambda (v) (set-cdr! a (cddr a)) v)
((lambda (v) (set-cdr! a (cddr a)) v) (cadr a))
(let ((v (cadr a))) (set-cdr! a (cddr a)) v)
Error: unhandled condition:
Compound condition has these components:
#<record &who>
who : "invalid reference"
#<record &message>
message : "No binding available for set-cdr! in library (larceny r7rs macros)"
#<record &syntax>
form : set-cdr!
subform : #f
Entering debugger; type "?" for help.
```
Using the following version.
```
$ larceny -version
Larceny v0.98 "General Ripper" (Mar 7 2015 01:06:26, precise:Linux:unified)
```
|
1.0
|
define-values raises an error with unbound variable set-cdr! - (Might be related to #680)
The following expression raises an error (executed on the REPL of `larceny -r7rs`):
```scheme
(define-values (a b c) (values 1 2 3))
Syntax violation: invalid reference
No binding available for set-cdr! in library (larceny r7rs macros)
Form: set-cdr!
Trace:
(set-cdr! a (cddr a))
(lambda (v) (set-cdr! a (cddr a)) v)
((lambda (v) (set-cdr! a (cddr a)) v) (cadr a))
(let ((v (cadr a))) (set-cdr! a (cddr a)) v)
Error: unhandled condition:
Compound condition has these components:
#<record &who>
who : "invalid reference"
#<record &message>
message : "No binding available for set-cdr! in library (larceny r7rs macros)"
#<record &syntax>
form : set-cdr!
subform : #f
Entering debugger; type "?" for help.
```
Using the following version.
```
$ larceny -version
Larceny v0.98 "General Ripper" (Mar 7 2015 01:06:26, precise:Linux:unified)
```
|
defect
|
define values raises an error with unbound variable set cdr might be related to the following expression raises an error executed on the repl of larceny scheme define values a b c values syntax violation invalid reference no binding available for set cdr in library larceny macros form set cdr trace set cdr a cddr a lambda v set cdr a cddr a v lambda v set cdr a cddr a v cadr a let v cadr a set cdr a cddr a v error unhandled condition compound condition has these components who invalid reference message no binding available for set cdr in library larceny macros form set cdr subform f entering debugger type for help using the following version larceny version larceny general ripper mar precise linux unified
| 1
|
19,032
| 3,126,689,698
|
IssuesEvent
|
2015-09-08 10:47:48
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
TxQueue ordering on rollback can be violated
|
Team: Core Type: Defect
|
Imagine you have the following queue
[1,2,3,4,5]
and you do tx{
q.poll // 1
q.poll // 2
tx.abort
}
Then the queue content could become [2,1,3,4,5]
The reason is that the QueueContainer doesn't rollback the changes in the oposite order of being added. It is a random order due to the fact that the changes are stored in a hashmap.
|
1.0
|
TxQueue ordering on rollback can be violated - Imagine you have the following queue
[1,2,3,4,5]
and you do tx{
q.poll // 1
q.poll // 2
tx.abort
}
Then the queue content could become [2,1,3,4,5]
The reason is that the QueueContainer doesn't rollback the changes in the oposite order of being added. It is a random order due to the fact that the changes are stored in a hashmap.
|
defect
|
txqueue ordering on rollback can be violated imagine you have the following queue and you do tx q poll q poll tx abort then the queue content could become the reason is that the queuecontainer doesn t rollback the changes in the oposite order of being added it is a random order due to the fact that the changes are stored in a hashmap
| 1
|
44,902
| 12,425,385,648
|
IssuesEvent
|
2020-05-24 16:06:04
|
Cockatrice/Cockatrice
|
https://api.github.com/repos/Cockatrice/Cockatrice
|
closed
|
4k resolution causes cards and chat box to be too small.
|
Defect - Basic UI / UX
|


I got a new laptop and I have been unable to resize the chat boxes or the cards making it very difficult to play on.
|
1.0
|
4k resolution causes cards and chat box to be too small. - 

I got a new laptop and I have been unable to resize the chat boxes or the cards making it very difficult to play on.
|
defect
|
resolution causes cards and chat box to be too small i got a new laptop and i have been unable to resize the chat boxes or the cards making it very difficult to play on
| 1
|
24,294
| 3,956,367,031
|
IssuesEvent
|
2016-04-30 04:25:03
|
RRUZ/delphi-ide-theme-editor
|
https://api.github.com/repos/RRUZ/delphi-ide-theme-editor
|
closed
|
Color Changes doesn't work on other languajes
|
auto-migrated Priority-Medium Type-Defect
|
```
Resume
IDE Theme Editor doesn't apply changes when Delphi was installed on a
non-English OS (Windows XP SP3 Spanish - Delphi 2009)
Steps
1) Start Delphi IDE Theme Editor on a Spanish OS (Windows XP SP3)
2) Select one color scheme and click on 'Apply Theme'
3) Start Delphi and open a project
4) Go to the Code Tab
Notice that the color scheme hasn't changed
Notes
When used on the same English version OS (Windows XP SP3 English) the IDE Theme
editor worked perfectly
Suggestions
Probably related to how Reg-key works on different languages
```
Original issue reported on code.google.com by `TheWor...@gmail.com` on 17 Jun 2014 at 10:43
|
1.0
|
Color Changes doesn't work on other languajes - ```
Resume
IDE Theme Editor doesn't apply changes when Delphi was installed on a
non-English OS (Windows XP SP3 Spanish - Delphi 2009)
Steps
1) Start Delphi IDE Theme Editor on a Spanish OS (Windows XP SP3)
2) Select one color scheme and click on 'Apply Theme'
3) Start Delphi and open a project
4) Go to the Code Tab
Notice that the color scheme hasn't changed
Notes
When used on the same English version OS (Windows XP SP3 English) the IDE Theme
editor worked perfectly
Suggestions
Probably related to how Reg-key works on different languages
```
Original issue reported on code.google.com by `TheWor...@gmail.com` on 17 Jun 2014 at 10:43
|
defect
|
color changes doesn t work on other languajes resume ide theme editor doesn t apply changes when delphi was installed on a non english os windows xp spanish delphi steps start delphi ide theme editor on a spanish os windows xp select one color scheme and click on apply theme start delphi and open a project go to the code tab notice that the color scheme hasn t changed notes when used on the same english version os windows xp english the ide theme editor worked perfectly suggestions probably related to how reg key works on different languages original issue reported on code google com by thewor gmail com on jun at
| 1
|
35,435
| 4,663,427,621
|
IssuesEvent
|
2016-10-05 09:13:06
|
yaseminalpay/Fall2016Swe573_YaseminAlpay
|
https://api.github.com/repos/yaseminalpay/Fall2016Swe573_YaseminAlpay
|
closed
|
Create mockups with scenarios
|
design documentation priority: high
|
Create mockups with solid examples. These mockups should contain every possible information to clarify requirements such as diagrams, use-case scenarios etc.
|
1.0
|
Create mockups with scenarios - Create mockups with solid examples. These mockups should contain every possible information to clarify requirements such as diagrams, use-case scenarios etc.
|
non_defect
|
create mockups with scenarios create mockups with solid examples these mockups should contain every possible information to clarify requirements such as diagrams use case scenarios etc
| 0
|
78,654
| 22,339,289,272
|
IssuesEvent
|
2022-06-14 22:04:34
|
pytorch/vision
|
https://api.github.com/repos/pytorch/vision
|
closed
|
Unable to build CUDA version in a docker environment without nvidia GPU
|
topic: build
|
### 🐛 Describe the bug
I am currently trying to build torchvision with CUDA on a docker container without nvidia drivers running, but the cuda runtime installed.
However, when i try to run the setup script via:
```
BUILD_TORCH=ON \
CMAKE_SHARED_LINKER_FLAGS="-L/usr/local/cuda-11.7/lib64 -lcusolver -lcusparse" \
CMAKE_EXE_LINKER_FLAGS="-L/usr/local/cuda-11.7/lib64 -lcusolver -lcusparse" \
PYTHON_EXECUTABLE=/usr/local/bin/python \
CUDA_HOME=/usr/local/cuda-11.7 \
CMAKE_CUDA_COMPILER=/usr/local/cuda-11.7/bin/nvcc \
CMAKE_CUDA_RUNTIME_LIBRARY=Static \
USE_CUDA=1 \
USE_CUDNN=1 \
CMAKE_BUILD_TYPE=Release \
EXTRA_CAFFE2_CMAKE_FLAGS="(-DATEN_NO_TEST=ON)" \
TORCH_NVCC_FLAGS="-Xfatbin -compress-all" \
TORCH_CUDA_ARCH_LIST="7.5 8.6+PTX" \
FORCE_CUDA=1 \
python3 setup.py bdist_wheel
```
I get the warning that no CUDA runtime is found `No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'`
Is the only way to build torchvision inside a docker container to have an active GPU with nvidia drivers installed? That seems problematic
### Versions
PyTorch version: 1.11.0a0
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (GCC) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.0
Libc version: glibc-2.27
Python version: 3.10.4 (main, Jan 1 2000, 08:00:00) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-117-generic-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.11.0a0+gitunknown
[pip3] torchvision==0.12.0a0
[conda] Could not collect
|
1.0
|
Unable to build CUDA version in a docker environment without nvidia GPU - ### 🐛 Describe the bug
I am currently trying to build torchvision with CUDA on a docker container without nvidia drivers running, but the cuda runtime installed.
However, when i try to run the setup script via:
```
BUILD_TORCH=ON \
CMAKE_SHARED_LINKER_FLAGS="-L/usr/local/cuda-11.7/lib64 -lcusolver -lcusparse" \
CMAKE_EXE_LINKER_FLAGS="-L/usr/local/cuda-11.7/lib64 -lcusolver -lcusparse" \
PYTHON_EXECUTABLE=/usr/local/bin/python \
CUDA_HOME=/usr/local/cuda-11.7 \
CMAKE_CUDA_COMPILER=/usr/local/cuda-11.7/bin/nvcc \
CMAKE_CUDA_RUNTIME_LIBRARY=Static \
USE_CUDA=1 \
USE_CUDNN=1 \
CMAKE_BUILD_TYPE=Release \
EXTRA_CAFFE2_CMAKE_FLAGS="(-DATEN_NO_TEST=ON)" \
TORCH_NVCC_FLAGS="-Xfatbin -compress-all" \
TORCH_CUDA_ARCH_LIST="7.5 8.6+PTX" \
FORCE_CUDA=1 \
python3 setup.py bdist_wheel
```
I get the warning that no CUDA runtime is found `No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'`
Is the only way to build torchvision inside a docker container to have an active GPU with nvidia drivers installed? That seems problematic
### Versions
PyTorch version: 1.11.0a0
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (GCC) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.0
Libc version: glibc-2.27
Python version: 3.10.4 (main, Jan 1 2000, 08:00:00) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-117-generic-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] torch==1.11.0a0+gitunknown
[pip3] torchvision==0.12.0a0
[conda] Could not collect
|
non_defect
|
unable to build cuda version in a docker environment without nvidia gpu 🐛 describe the bug i am currently trying to build torchvision with cuda on a docker container without nvidia drivers running but the cuda runtime installed however when i try to run the setup script via build torch on cmake shared linker flags l usr local cuda lcusolver lcusparse cmake exe linker flags l usr local cuda lcusolver lcusparse python executable usr local bin python cuda home usr local cuda cmake cuda compiler usr local cuda bin nvcc cmake cuda runtime library static use cuda use cudnn cmake build type release extra cmake flags daten no test on torch nvcc flags xfatbin compress all torch cuda arch list ptx force cuda setup py bdist wheel i get the warning that no cuda runtime is found no cuda runtime is found using cuda home usr local cuda is the only way to build torchvision inside a docker container to have an active gpu with nvidia drivers installed that seems problematic versions pytorch version is debug build false cuda used to build pytorch rocm used to build pytorch n a os ubuntu lts gcc version gcc clang version could not collect cmake version version libc version glibc python version main jan bit runtime python platform linux generic with is cuda available false cuda runtime version no cuda gpu models and configuration no cuda nvidia driver version no cuda cudnn version no cuda hip runtime version n a miopen runtime version n a is xnnpack available false versions of relevant libraries numpy torch gitunknown torchvision could not collect
| 0
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.