Dataset schema (column → dtype, observed range):

- `Unnamed: 0` — int64, 0 to 832k
- `id` — float64, 2.49B to 32.1B
- `type` — string, 1 distinct class
- `created_at` — string, length 19
- `repo` — string, length 5 to 112
- `repo_url` — string, length 34 to 141
- `action` — string, 3 distinct values
- `title` — string, length 1 to 757
- `labels` — string, length 4 to 664
- `body` — string, length 3 to 261k
- `index` — string, 10 distinct values
- `text_combine` — string, length 96 to 261k
- `label` — string, 2 distinct values
- `text` — string, length 96 to 232k
- `binary_label` — int64, 0 or 1

| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
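The schema above can be sanity-checked with a short pure-Python sketch. The column names and types mirror the listing, and the sample record reuses the values of the first row shown in the table (the long text fields are elided with `"..."`, which is an illustrative stand-in, not the real cell contents):

```python
# Schema of the dump above, as (column, expected Python type) pairs.
SCHEMA = [
    ("Unnamed: 0", int), ("id", float), ("type", str), ("created_at", str),
    ("repo", str), ("repo_url", str), ("action", str), ("title", str),
    ("labels", str), ("body", str), ("index", str), ("text_combine", str),
    ("label", str), ("text", str), ("binary_label", int),
]

# First record from the table (long text fields elided for brevity).
record = {
    "Unnamed: 0": 28657,
    "id": 5320584308.0,
    "type": "IssuesEvent",
    "created_at": "2017-02-14 10:55:09",
    "repo": "contao/installation-bundle",
    "repo_url": "https://api.github.com/repos/contao/installation-bundle",
    "action": "closed",
    "title": "InstallTool logException",
    "labels": "defect",
    "body": "...",
    "index": "1.0",
    "text_combine": "...",
    "label": "defect",
    "text": "...",
    "binary_label": 1,
}

def validate(rec):
    """Return True iff rec has exactly the schema's columns with the expected types."""
    if set(rec) != {name for name, _ in SCHEMA}:
        return False
    return all(isinstance(rec[name], typ) for name, typ in SCHEMA)

assert validate(record)
assert len(record["created_at"]) == 19   # fixed-width timestamp per the schema
assert record["binary_label"] in (0, 1)  # defect flag: 1 = defect, 0 = non_defect
```

Note that `binary_label` tracks `label` directly in every row shown (`defect` → 1, `non_defect` → 0), which is the check such a validator would naturally extend to.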
28,657
| 5,320,584,308
|
IssuesEvent
|
2017-02-14 10:55:09
|
contao/installation-bundle
|
https://api.github.com/repos/contao/installation-bundle
|
closed
|
InstallTool logException
|
defect
|
In Contao Managed Version the path to the log file should be something like
`$this->rootDir.'/../var/logs/prod-'.date('Y-m-d').'.log'`
in InstallTool.php, Line 395
|
1.0
|
InstallTool logException - In Contao Managed Version the path to the log file should be something like
`$this->rootDir.'/../var/logs/prod-'.date('Y-m-d').'.log'`
in InstallTool.php, Line 395
|
defect
|
installtool logexception in contao managed version the path to the log file should be something like this rootdir var logs prod date y m d log in installtool php line
| 1
|
97,714
| 11,030,213,195
|
IssuesEvent
|
2019-12-06 15:20:35
|
pulumi/pulumi-vault
|
https://api.github.com/repos/pulumi/pulumi-vault
|
reopened
|
Group Alias example is different than terraforms and is missing vital information
|
documentation question
|
The `vault.identity.GroupAlias` example is missing vital information that the terraform documentation has.
https://github.com/terraform-providers/terraform-provider-vault/blob/master/website/docs/r/identity_group.html.md#example-usage
https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/vault/identity/#GroupAlias
Notice that Pulumi is missing the name for the `GroupAlias`. Without this, the external groups will not work. Pulumi is likely stripping this due to its autonaming policy, but that can't happen here. In fact, I would suggest making `name` required whenever the `type` is external, if that is possible.
|
1.0
|
Group Alias example is different than terraforms and is missing vital information - The `vault.identity.GroupAlias` example is missing vital information that the terraform documentation has.
https://github.com/terraform-providers/terraform-provider-vault/blob/master/website/docs/r/identity_group.html.md#example-usage
https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/vault/identity/#GroupAlias
Notice that Pulumi is missing the name for the `GroupAlias`. Without this, the external groups will not work. Pulumi is likely stripping this due to its autonaming policy, but that can't happen here. In fact, I would suggest making `name` required whenever the `type` is external, if that is possible.
|
non_defect
|
group alias example is different than terraforms and is missing vital information the vault identity groupalias example is missing vital information that the terraform documentation has notice that pulumi is missing the name for the groupalias without this the external groups will not work pulumi is likely stripping this due to it s autonaming policy but that can t happen here in fact i would suggest making name required whenever the type is external if that is possible
| 0
|
62,905
| 17,260,371,930
|
IssuesEvent
|
2021-07-22 06:35:47
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
White screen
|
T-Defect
|
Hello!
after updating Element to version 1.7.33, suddenly just a white screen began to appear, only reinstalling the Element application helps, previously this was in version 1.36, this was fixed in version 1.4.1 and everything was fine.
Please fix it.
we use on the desktop version
OS: Win Server 2012R2.
|
1.0
|
White screen - Hello!
after updating Element to version 1.7.33, suddenly just a white screen began to appear, only reinstalling the Element application helps, previously this was in version 1.36, this was fixed in version 1.4.1 and everything was fine.
Please fix it.
we use on the desktop version
OS: Win Server 2012R2.
|
defect
|
white screen hello after updating element to version suddenly just a white screen began to appear only reinstalling the element application helps previously this was in version this was fixed in version and everything was fine please fix it we use on the desktop version os win server
| 1
|
74,996
| 25,474,191,154
|
IssuesEvent
|
2022-11-25 12:58:09
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Expand unqualified asterisk in MySQL when it's not leading
|
T: Defect C: Functionality P: Medium R: Fixed E: All Editions
|
This works in MySQL:
```sql
SELECT *, 'a' FROM (SELECT 1 x) t
```
Producing:
```
|x |a |
|---|---|
|1 |a |
```
But this doesn't work:
```sql
SELECT 'a', * FROM (SELECT 1 x) t
```
Raising:
> SQL Error [1064] [42000]: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '* FROM (SELECT 1 x) t' at line 1
|
1.0
|
Expand unqualified asterisk in MySQL when it's not leading - This works in MySQL:
```sql
SELECT *, 'a' FROM (SELECT 1 x) t
```
Producing:
```
|x |a |
|---|---|
|1 |a |
```
But this doesn't work:
```sql
SELECT 'a', * FROM (SELECT 1 x) t
```
Raising:
> SQL Error [1064] [42000]: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '* FROM (SELECT 1 x) t' at line 1
|
defect
|
expand unqualified asterisk in mysql when it s not leading this works in mysql sql select a from select x t producing x a a but this doesn t work sql select a from select x t raising sql error you have an error in your sql syntax check the manual that corresponds to your mysql server version for the right syntax to use near from select x t at line
| 1
|
515,686
| 14,967,380,318
|
IssuesEvent
|
2021-01-27 15:37:19
|
StatCan/daaas
|
https://api.github.com/repos/StatCan/daaas
|
closed
|
Add backups for Minio
|
area/engineering component/storage kind/feature priority/blocker size/S
|
### Target
Frequency: nightly
Retention: 30 days
_Requires incremental. Other strategy may be required._
|
1.0
|
Add backups for Minio - ### Target
Frequency: nightly
Retention: 30 days
_Requires incremental. Other strategy may be required._
|
non_defect
|
add backups for minio target frequency nightly retention days requires incremental other strategy may be required
| 0
|
67,504
| 20,971,509,415
|
IssuesEvent
|
2022-03-28 11:50:41
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
Select::getSelect and ::fields should return a single field when FOR XML or FOR JSON clause is present
|
T: Defect C: Functionality P: Medium E: Professional Edition E: Enterprise Edition
|
https://github.com/jOOQ/jOOQ/issues/10565 has fixed a few issues related to parsing `FOR XML` and `FOR JSON`, in case of which the resulting degree of the query is always 1. The clauses act like if they were wrapping the rest of the query in a derived table, and then projecting only `XMLAGG` or `JSON_ARRAYAGG` from the contents.
Perhaps we should implement this kind of logic everywhere the projection can be accessed from, including `Select::getSelect` and `Select::fields`, etc.
To be investigated.
|
1.0
|
Select::getSelect and ::fields should return a single field when FOR XML or FOR JSON clause is present - https://github.com/jOOQ/jOOQ/issues/10565 has fixed a few issues related to parsing `FOR XML` and `FOR JSON`, in case of which the resulting degree of the query is always 1. The clauses act like if they were wrapping the rest of the query in a derived table, and then projecting only `XMLAGG` or `JSON_ARRAYAGG` from the contents.
Perhaps we should implement this kind of logic everywhere the projection can be accessed from, including `Select::getSelect` and `Select::fields`, etc.
To be investigated.
|
defect
|
select getselect and fields should return a single field when for xml or for json clause is present has fixed a few issues related to parsing for xml and for json in case of which the resulting degree of the query is always the clauses act like if they were wrapping the rest of the query in a derived table and then projecting only xmlagg or json arrayagg from the contents perhaps we should implement this kind of logic everywhere the projection can be accessed from including select getselect and select fields etc to be investigated
| 1
|
21,043
| 3,452,986,265
|
IssuesEvent
|
2015-12-17 08:57:13
|
luigirizzo/netmap
|
https://api.github.com/repos/luigirizzo/netmap
|
closed
|
Run Netmap in a Fedora VM running in VirtualBox
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. install Fedora 20
2. install VirtualBox
3. create a VirtualBox Fedora20 VM with 3 vNICs e1000 (Virtual Box vNIC Intel
PRO/1000 T Server(82543GC), install Xen and try install Netmap from git.
4. install Netmap:
- cd netmap-release/LINUX
- ./configure
- make
Until now, every works well.
What is the expected output? What do you see instead?
I need to load new e1000 driver with "rmmod 1000" and insmod "./e1000/e1000.ko"
command, but I receive this error:
insmod: ERROR: could not insert module ./e1000/e1000.ko: Unknown symbol in
module
and dmesg
e1000: Unknown symbol netmap_enable_all_rings (err 0)
e1000: Unknown symbol netmap_reset (err 0)
e1000: Unknown symbol netmap_disable_all_rings (err 0)
e1000: Unknown symbol netmap_detach (err 0)
e1000: Unknown symbol netmap_ring_reinit (err 0)
e1000: Unknown symbol netmap_no_pendintr (err 0)
e1000: Unknown symbol nm_rxsync_prologue (err 0)
e1000: Unknown symbol netmap_rx_irq (err 0)
e1000: Unknown symbol netmap_attach (err 0)
What version of the product are you using? On what operating system?
Fedora20, kernel 3.17.4-200.fc20.x86_64
ethtool -i p2p1
driver:e1000
version: 7.3.21-k8-NAPI
```
Original issue reported on code.google.com by `emerson....@gmail.com` on 12 Dec 2014 at 5:34
|
1.0
|
Run Netmap in a Fedora VM running in VirtualBox - ```
What steps will reproduce the problem?
1. install Fedora 20
2. install VirtualBox
3. create a VirtualBox Fedora20 VM with 3 vNICs e1000 (Virtual Box vNIC Intel
PRO/1000 T Server(82543GC), install Xen and try install Netmap from git.
4. install Netmap:
- cd netmap-release/LINUX
- ./configure
- make
Until now, every works well.
What is the expected output? What do you see instead?
I need to load new e1000 driver with "rmmod 1000" and insmod "./e1000/e1000.ko"
command, but I receive this error:
insmod: ERROR: could not insert module ./e1000/e1000.ko: Unknown symbol in
module
and dmesg
e1000: Unknown symbol netmap_enable_all_rings (err 0)
e1000: Unknown symbol netmap_reset (err 0)
e1000: Unknown symbol netmap_disable_all_rings (err 0)
e1000: Unknown symbol netmap_detach (err 0)
e1000: Unknown symbol netmap_ring_reinit (err 0)
e1000: Unknown symbol netmap_no_pendintr (err 0)
e1000: Unknown symbol nm_rxsync_prologue (err 0)
e1000: Unknown symbol netmap_rx_irq (err 0)
e1000: Unknown symbol netmap_attach (err 0)
What version of the product are you using? On what operating system?
Fedora20, kernel 3.17.4-200.fc20.x86_64
ethtool -i p2p1
driver:e1000
version: 7.3.21-k8-NAPI
```
Original issue reported on code.google.com by `emerson....@gmail.com` on 12 Dec 2014 at 5:34
|
defect
|
run netmap in a fedora vm running in virtualbox what steps will reproduce the problem install fedora install virtualbox create a virtualbox vm with vnics virtual box vnic intel pro t server install xen and try install netmap from git install netmap cd netmap release linux configure make until now every works well what is the expected output what do you see instead i need to load new driver with rmmod and insmod ko command but i receive this error insmod error could not insert module ko unknown symbol in module and dmesg unknown symbol netmap enable all rings err unknown symbol netmap reset err unknown symbol netmap disable all rings err unknown symbol netmap detach err unknown symbol netmap ring reinit err unknown symbol netmap no pendintr err unknown symbol nm rxsync prologue err unknown symbol netmap rx irq err unknown symbol netmap attach err what version of the product are you using on what operating system kernel ethtool i driver version napi original issue reported on code google com by emerson gmail com on dec at
| 1
|
65,019
| 19,014,340,200
|
IssuesEvent
|
2021-11-23 12:51:54
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Infinite spinning when opening riot.im desktop
|
T-Defect P1 S-Major A-Electron T-Other
|
### Description
When opening riot on my Windows 10 Desktop it shows a blank screen, a spinner in the middle and "logout" at the bottom. The symbol also shows "1 new message" but nothing happens, no matter how long i wait.
I'm starting with a .cmd with --proxy-server= included. I also tried without it, same bug.
### Steps to reproduce
- No clue how to reproduce it
Log: sent/not sent? Not sent as its not possible.

### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
- **Platform**: desktop
For the desktop app:
- **OS**: Windows 10
- **Version**: 0.17.6 (did a fresh install on top of the old one, no change)
Do i have to delete the whole riot and reinstall again, resulting in losing my keys and so on?
(Its working on: Android, Windows 10 and Linux Ubuntu (both, 2 different computers. So, no clue.)
|
1.0
|
Infinite spinning when opening riot.im desktop - ### Description
When opening riot on my Windows 10 Desktop it shows a blank screen, a spinner in the middle and "logout" at the bottom. The symbol also shows "1 new message" but nothing happens, no matter how long i wait.
I'm starting with a .cmd with --proxy-server= included. I also tried without it, same bug.
### Steps to reproduce
- No clue how to reproduce it
Log: sent/not sent? Not sent as its not possible.

### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
- **Platform**: desktop
For the desktop app:
- **OS**: Windows 10
- **Version**: 0.17.6 (did a fresh install on top of the old one, no change)
Do i have to delete the whole riot and reinstall again, resulting in losing my keys and so on?
(Its working on: Android, Windows 10 and Linux Ubuntu (both, 2 different computers. So, no clue.)
|
defect
|
infinite spinning when opening riot im desktop description when opening riot on my windows desktop it shows a blank screen a spinner in the middle and logout at the bottom the symbol also shows new message but nothing happens no matter how long i wait i m starting with a cmd with proxy server included i also tried without it same bug steps to reproduce no clue how to reproduce it log sent not sent not sent as its not possible version information platform desktop for the desktop app os windows version did a fresh install on top of the old one no change do i have to delete the whole riot and reinstall again resulting in losing my keys and so on its working on android windows and linux ubuntu both different computers so no clue
| 1
|
26,120
| 4,593,614,556
|
IssuesEvent
|
2016-09-21 02:02:27
|
afisher1/GridLAB-D
|
https://api.github.com/repos/afisher1/GridLAB-D
|
closed
|
#43 The BUILD file does not describe how to make xerces and cppunit build for linux,
|
defect
|
At this point the makefile does not build xerces or install it on linux systems if it is missing
,
|
1.0
|
#43 The BUILD file does not describe how to make xerces and cppunit build for linux,
- At this point the makefile does not build xerces or install it on linux systems if it is missing
,
|
defect
|
the build file does not describe how to make xerces and cppunit build for linux at this point the makefile does not build xerces or install it on linux systems if it is missing
| 1
|
19,181
| 3,150,907,069
|
IssuesEvent
|
2015-09-16 03:08:53
|
fuzzdb-project/fuzzdb
|
https://api.github.com/repos/fuzzdb-project/fuzzdb
|
closed
|
Move to Github
|
auto-migrated Priority-Medium Type-Defect
|
```
With the close of google code will this project move to github?
```
Original issue reported on code.google.com by `Lordsai...@gmail.com` on 16 Mar 2015 at 11:59
|
1.0
|
Move to Github - ```
With the close of google code will this project move to github?
```
Original issue reported on code.google.com by `Lordsai...@gmail.com` on 16 Mar 2015 at 11:59
|
defect
|
move to github with the close of google code will this project move to github original issue reported on code google com by lordsai gmail com on mar at
| 1
|
705,556
| 24,238,446,996
|
IssuesEvent
|
2022-09-27 03:09:42
|
paperclip-ui/paperclip
|
https://api.github.com/repos/paperclip-ui/paperclip
|
closed
|
ability to register native elements for preview
|
priority: medium impact: high effort: medium
|
This could enable custom elements + plugins to be rendered in the preview. Will also need to be able to expose props that can be passed into these native elements.
|
1.0
|
ability to register native elements for preview - This could enable custom elements + plugins to be rendered in the preview. Will also need to be able to expose props that can be passed into these native elements.
|
non_defect
|
ability to register native elements for preview this could enable custom elements plugins to be rendered in the preview will also need to be able to expose props that can be passed into these native elements
| 0
|
33,127
| 7,035,518,963
|
IssuesEvent
|
2017-12-28 00:43:01
|
OGMS/ogms
|
https://api.github.com/repos/OGMS/ogms
|
closed
|
witnessed event
|
auto-migrated Priority-Medium Type-Defect
|
```
Hi,
I need a term "witnessed loss of consciousness". It isn't a symptom,
because it has to be witnessed by somebody else than the patient, but it
doesn't have to be a sign, as it doesn't have to be witnessed by a
clinician (though it could be)
I suspect that OGMS would want a more general term "witnessed event" for
example. Maybe having "witnessed event" as a process realizing the witness
role, and that also has_input an organism which does not bear that role?
Any suggestions?
Ps: my use case, if it helps: seizure level 1 is defined by the Brighton
collaboration as being "witnessed sudden loss of consciousness AND
generalized, tonic, clonic, tonic–clonic, or atonic motor manifestations."
```
Original issue reported on code.google.com by `mcour...@gmail.com` on 22 Jan 2010 at 10:39
|
1.0
|
witnessed event - ```
Hi,
I need a term "witnessed loss of consciousness". It isn't a symptom,
because it has to be witnessed by somebody else than the patient, but it
doesn't have to be a sign, as it doesn't have to be witnessed by a
clinician (though it could be)
I suspect that OGMS would want a more general term "witnessed event" for
example. Maybe having "witnessed event" as a process realizing the witness
role, and that also has_input an organism which does not bear that role?
Any suggestions?
Ps: my use case, if it helps: seizure level 1 is defined by the Brighton
collaboration as being "witnessed sudden loss of consciousness AND
generalized, tonic, clonic, tonic–clonic, or atonic motor manifestations."
```
Original issue reported on code.google.com by `mcour...@gmail.com` on 22 Jan 2010 at 10:39
|
defect
|
witnessed event hi i need a term witnessed loss of consciousness it isn t a symptom because it has to be witnessed by somebody else than the patient but it doesn t have to be a sign as it doesn t have to be witnessed by a clinician though it could be i suspect that ogms would want a more general term witnessed event for example maybe having witnessed event as a process realizing the witness role and that also has input an organism which does not bear that role any suggestions ps my use case if it helps seizure level is defined by the brighton collaboration as being witnessed sudden loss of consciousness and generalized tonic clonic tonic–clonic or atonic motor manifestations original issue reported on code google com by mcour gmail com on jan at
| 1
|
8,128
| 2,611,453,456
|
IssuesEvent
|
2015-02-27 05:00:50
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Problem with computer-run team: always does the same action as it does not know what to do
|
auto-migrated Priority-Low Type-Defect
|
```
What steps will reproduce the problem?
I can't really say: watch the video, the green hedge has already shot once,
we're waiting for the second time, but it always does the same actions again
(he did it for about thirty seconds, before at least shooting to the right of
the screen).
What is the expected output? What do you see instead?
See video.
What version of the product are you using? On what operating system?
Hedgewars 0.9.13, Ubuntu 10.10
Please provide any additional information below.
```
Original issue reported on code.google.com by `t.lauxer...@gmail.com` on 29 Nov 2010 at 2:40
* Merged into: #184
Attachments:
* [bug_hedgewars.ogv](https://storage.googleapis.com/google-code-attachments/hedgewars/issue-113/comment-0/bug_hedgewars.ogv)
|
1.0
|
Problem with computer-run team: always does the same action as it does not know what to do - ```
What steps will reproduce the problem?
I can't really say: watch the video, the green hedge has already shot once,
we're waiting for the second time, but it always does the same actions again
(he did it for about thirty seconds, before at least shooting to the right of
the screen).
What is the expected output? What do you see instead?
See video.
What version of the product are you using? On what operating system?
Hedgewars 0.9.13, Ubuntu 10.10
Please provide any additional information below.
```
Original issue reported on code.google.com by `t.lauxer...@gmail.com` on 29 Nov 2010 at 2:40
* Merged into: #184
Attachments:
* [bug_hedgewars.ogv](https://storage.googleapis.com/google-code-attachments/hedgewars/issue-113/comment-0/bug_hedgewars.ogv)
|
defect
|
problem with computer run team always does the same action as it does not know what to do what steps will reproduce the problem i can t really say watch the video the green hedge has already shot once we re waiting for the second time but it always does the same actions again he did it for about thirty seconds before at least shooting to the right of the screen what is the expected output what do you see instead see video what version of the product are you using on what operating system hedgewars ubuntu please provide any additional information below original issue reported on code google com by t lauxer gmail com on nov at merged into attachments
| 1
|
161,548
| 12,551,182,076
|
IssuesEvent
|
2020-06-06 13:53:16
|
astpl1998/Kanam-Latex
|
https://api.github.com/repos/astpl1998/Kanam-Latex
|
reopened
|
S&S-Commercial Invoice Print_PFS
|
17.Testing2_Completed
|
Dear Team,
Here am created new issue for "Commercial Invoice Print development".
In future i have added PFS for further process.
Thanks and Regards,
M.Maheshwaran.
|
1.0
|
S&S-Commercial Invoice Print_PFS - Dear Team,
Here am created new issue for "Commercial Invoice Print development".
In future i have added PFS for further process.
Thanks and Regards,
M.Maheshwaran.
|
non_defect
|
s s commercial invoice print pfs dear team here am created new issue for commercial invoice print development in future i have added pfs for further process thanks and regards m maheshwaran
| 0
|
75,468
| 20,825,501,747
|
IssuesEvent
|
2022-03-18 20:17:45
|
golang/go
|
https://api.github.com/repos/golang/go
|
opened
|
x/build/cmd/relui: build releases
|
Builders NeedsFix
|
relui should subsume the functionality of x/build/cmd/releasebot and x/build/cmd/release.
|
1.0
|
x/build/cmd/relui: build releases - relui should subsume the functionality of x/build/cmd/releasebot and x/build/cmd/release.
|
non_defect
|
x build cmd relui build releases relui should subsume the functionality of x build cmd releasebot and x build cmd release
| 0
|
85,114
| 24,515,219,161
|
IssuesEvent
|
2022-10-11 03:58:50
|
apache/shardingsphere
|
https://api.github.com/repos/apache/shardingsphere
|
closed
|
upgrade all actions/cache
|
type: build
|
there are some cache are v2 and the rest are v3, need to use the same version.
take a look on the `restore-keys` of action cache, is this necessary ?
|
1.0
|
upgrade all actions/cache - there are some cache are v2 and the rest are v3, need to use the same version.
take a look on the `restore-keys` of action cache, is this necessary ?
|
non_defect
|
upgrade all actions cache there are some cache are and the rest are need to use the same version take a look on the restore keys of action cache is this necessary ?
| 0
|
256,209
| 19,403,623,765
|
IssuesEvent
|
2021-12-19 16:17:46
|
monitoring-plugins/monitoring-plugins
|
https://api.github.com/repos/monitoring-plugins/monitoring-plugins
|
closed
|
URL for check_game not available anymore [sf#3004910]
|
import bug compilation documentation
|
Submitted by madolphs on 2010-05-20 22:45:24
While ./configure it's been said that one should get qstat from http://www.activesw.com/people/steve/qstat.html. The site seems to be no longer available. http://www.qstat.org/ seems to be more appropriate.
configure, lines 19373,19374
Regards,
Mike Adolphs
|
1.0
|
URL for check_game not available anymore [sf#3004910] - Submitted by madolphs on 2010-05-20 22:45:24
While ./configure it's been said that one should get qstat from http://www.activesw.com/people/steve/qstat.html. The site seems to be no longer available. http://www.qstat.org/ seems to be more appropriate.
configure, lines 19373,19374
Regards,
Mike Adolphs
|
non_defect
|
url for check game not available anymore submitted by madolphs on while configure it s been said that one should get qstat from the site seems to be no longer available seems to be more appropriate configure lines regards mike adolphs
| 0
|
8,776
| 3,787,948,354
|
IssuesEvent
|
2016-03-21 13:01:10
|
stkent/amplify
|
https://api.github.com/repos/stkent/amplify
|
opened
|
Avoid potential run time exception if library user attempts to configure version rules before getting the shared Amplify instance
|
bug code difficulty-easy
|
See https://github.com/stkent/amplify/pull/130/files#r56817394
|
1.0
|
Avoid potential run time exception if library user attempts to configure version rules before getting the shared Amplify instance - See https://github.com/stkent/amplify/pull/130/files#r56817394
|
non_defect
|
avoid potential run time exception if library user attempts to configure version rules before getting the shared amplify instance see
| 0
|
339,493
| 10,255,349,903
|
IssuesEvent
|
2019-08-21 15:15:22
|
CLOSER-Cohorts/archivist
|
https://api.github.com/repos/CLOSER-Cohorts/archivist
|
closed
|
Importer not working for instrument DDI XML
|
Importer Low priority bug
|
I have tested the importer by using a downloaded instrument XML from Archivist and then importing it under a different prefix. Archivist message states that it has imported but it does not appear.
Note the importer works for USoc XML.
|
1.0
|
Importer not working for instrument DDI XML - I have tested the importer by using a downloaded instrument XML from Archivist and then importing it under a different prefix. Archivist message states that it has imported but it does not appear.
Note the importer works for USoc XML.
|
non_defect
|
importer not working for instrument ddi xml i have tested the importer by using a downloaded instrument xml from archivist and then importing it under a different prefix archivist message states that it has imported but it does not appear note the importer works for usoc xml
| 0
|
50,071
| 13,187,319,900
|
IssuesEvent
|
2020-08-13 03:02:16
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
include new phys-service release (Trac #86)
|
Migrated from Trac defect offline-software
|
B01-13-01
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/86
, reported by blaufuss and owned by blaufuss_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-11-11T03:51:18",
"description": "B01-13-01",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1194753078000000",
"component": "offline-software",
"summary": "include new phys-service release",
"priority": "normal",
"keywords": "",
"time": "2007-08-08T18:54:51",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
include new phys-service release (Trac #86) - B01-13-01
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/86
, reported by blaufuss and owned by blaufuss_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-11-11T03:51:18",
"description": "B01-13-01",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1194753078000000",
"component": "offline-software",
"summary": "include new phys-service release",
"priority": "normal",
"keywords": "",
"time": "2007-08-08T18:54:51",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
|
defect
|
include new phys service release trac migrated from reported by blaufuss and owned by blaufuss json status closed changetime description reporter blaufuss cc resolution fixed ts component offline software summary include new phys service release priority normal keywords time milestone owner blaufuss type defect
| 1
|
7,688
| 2,610,433,065
|
IssuesEvent
|
2015-02-26 20:21:52
|
chrsmith/scribefire-chrome
|
https://api.github.com/repos/chrsmith/scribefire-chrome
|
closed
|
I cannot add a wordpress.com blog...
|
auto-migrated Priority-Medium Type-Defect
|
```
What's the problem?
I cannot add a wordpress.com blog...
What browser are you using?
Latest version of chrome...
What version of ScribeFire are you running?
4.2.3
```
-----
Original issue reported on code.google.com by `toddlohe...@gmail.com` on 7 Feb 2014 at 2:11
|
1.0
|
I cannot add a wordpress.com blog... - ```
What's the problem?
I cannot add a wordpress.com blog...
What browser are you using?
Latest version of chrome...
What version of ScribeFire are you running?
4.2.3
```
-----
Original issue reported on code.google.com by `toddlohe...@gmail.com` on 7 Feb 2014 at 2:11
|
defect
|
i cannot add a wordpress com blog what s the problem i cannot add a wordpress com blog what browser are you using latest version of chrome what version of scribefire are you running original issue reported on code google com by toddlohe gmail com on feb at
| 1
|
4,382
| 6,926,637,384
|
IssuesEvent
|
2017-11-30 19:52:25
|
cerner/terra-core
|
https://api.github.com/repos/cerner/terra-core
|
closed
|
Cover Image component
|
cover-image Needs orion requirements new feature orion reviewed
|
# Issue Description
Placeholder for cover image component
## Issue Type
<!-- Is this a new feature request, enhancement, bug report, other? -->
- [x] New Feature
- [ ] Enhancement
- [ ] Bug
- [ ] Other
## Expected Behavior
TBD
## Current Behavior
N/A
|
1.0
|
Cover Image component - # Issue Description
Placeholder for cover image component
## Issue Type
<!-- Is this a new feature request, enhancement, bug report, other? -->
- [x] New Feature
- [ ] Enhancement
- [ ] Bug
- [ ] Other
## Expected Behavior
TBD
## Current Behavior
N/A
|
non_defect
|
cover image component issue description placeholder for cover image component issue type new feature enhancement bug other expected behavior tbd current behavior n a
| 0
|
28,653
| 2,708,459,246
|
IssuesEvent
|
2015-04-08 09:05:41
|
ondras/wwwsqldesigner
|
https://api.github.com/repos/ondras/wwwsqldesigner
|
closed
|
Postgresql: serial/bigserial data types must be converted to integer/bigint in foreign keys
|
imported Priority-Medium Type-Other
|
_From [i...@amarosia.com](https://code.google.com/u/115984126334797849605/) on April 23, 2009 04:25:12_
What steps will reproduce the problem? 1. 'create foreign key' from a primary key of data type 'serial' 2. 3. What is the expected output? What do you see instead? the resultant foreign key is of data type serial. Should be 'integer' if
reference column is 'serial' or 'bigint' if reference column is 'bigserial'. What version of the product are you using? On what operating system? 2.3.2 ubuntu 8.10 Please provide any additional information below. This is a Postgresql related issue. Serial/bigserial are pseudo-types
_Original issue: http://code.google.com/p/wwwsqldesigner/issues/detail?id=16_
|
1.0
|
Postgresql: serial/bigserial data types must be converted to integer/bigint in foreign keys - _From [i...@amarosia.com](https://code.google.com/u/115984126334797849605/) on April 23, 2009 04:25:12_
What steps will reproduce the problem? 1. 'create foreign key' from a primary key of data type 'serial' 2. 3. What is the expected output? What do you see instead? the resultant foreign key is of data type serial. Should be 'integer' if
reference column is 'serial' or 'bigint' if reference column is 'bigserial'. What version of the product are you using? On what operating system? 2.3.2 ubuntu 8.10 Please provide any additional information below. This is a Postgresql related issue. Serial/bigserial are pseudo-types
_Original issue: http://code.google.com/p/wwwsqldesigner/issues/detail?id=16_
|
non_defect
|
postgresql serial bigserial data types must be converted to integer bigint in foreign keys from on april what steps will reproduce the problem create foreign key from a primary key of data type serial what is the expected output what do you see instead the resultant foreign key is of data type serial should be integer if reference column is serial or bigint if reference column is bigserial what version of the product are you using on what operating system ubuntu please provide any additional information below this is a postgresql related issue serial bigserial are pseudo types original issue
| 0
|
116,786
| 4,706,584,870
|
IssuesEvent
|
2016-10-13 17:36:31
|
default1406/PhyLab
|
https://api.github.com/repos/default1406/PhyLab
|
opened
|
整理收集物理实验相关资料
|
priority 2 size 5
|
- 对象:程富瑞
- 预计时长:8h
- 详情:在对当前已存在资料进行熟悉和整理后,确定α阶段需要完成的实验项目(α阶段完成后基本全覆盖),针对这些实验收集整理相关资料,这部分需要特别仔细,才能保证实验报告模板的翔实和准确。
|
1.0
|
整理收集物理实验相关资料 - - 对象:程富瑞
- 预计时长:8h
- 详情:在对当前已存在资料进行熟悉和整理后,确定α阶段需要完成的实验项目(α阶段完成后基本全覆盖),针对这些实验收集整理相关资料,这部分需要特别仔细,才能保证实验报告模板的翔实和准确。
|
non_defect
|
整理收集物理实验相关资料 对象:程富瑞 预计时长: 详情:在对当前已存在资料进行熟悉和整理后,确定α阶段需要完成的实验项目(α阶段完成后基本全覆盖),针对这些实验收集整理相关资料,这部分需要特别仔细,才能保证实验报告模板的翔实和准确。
| 0
|
45,301
| 12,708,609,606
|
IssuesEvent
|
2020-06-23 10:51:14
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
PostgreSQL array of domain inconsistent behaviour
|
T: Defect
|
While most of it was already mentioned in related issues, it might help to have this consolidated test case.
### Expected behavior and actual behavior:
With the following SQL:
```sql
CREATE DOMAIN JSONB_DOM AS JSONB;
CREATE TABLE test
(
normal JSONB,
domain JSONB_DOM,
normal_array JSONB[],
domain_array JSONB_DOM[]
)
```
Generated table fields are:
```java
TableField<TestRecord, JSONB> normal = createField(DSL.name("normal"), org.jooq.impl.SQLDataType.JSONB, this, "");
TableField<TestRecord, JSONB> domain = createField(DSL.name("domain"), org.jooq.impl.SQLDataType.JSONB, this, "");
TableField<TestRecord, JSONB[]> normalArray = createField(DSL.name("normal_array"), org.jooq.impl.SQLDataType.JSONB.getArrayDataType(), this, "");
TableField<TestRecord, Object[]> domainArray = createField(DSL.name("domain_array"), org.jooq.impl.DefaultDataType.getDefaultDataType("\"public\".\"jsonb_dom\"").getArrayDataType(), this, "");
```
Inconsistencies go further if you use forced types with converters, with Gradle config:
```groovy
forcedType {
types = 'JSONB'
userType = 'NormalType'
converter = 'NormalConverter'
}
forcedType {
types = 'JSONB_DOM'
userType = 'DomainType'
converter = 'DomainConverter'
}
forcedType {
types = '_JSONB'
userType = 'NormalType[]'
converter = 'NormalArrayConverter'
}
forcedType {
types = '_JSONB_DOM'
userType = 'DomainType[]'
converter = 'DomainArrayConverter'
}
```
Resulting fields are:
```java
TableField<TestRecord, NormalType> normal = createField(DSL.name("normal"), org.jooq.impl.SQLDataType.JSONB, this, "", new NormalConverter());
TableField<TestRecord, NormalType> domain = createField(DSL.name("domain"), org.jooq.impl.SQLDataType.JSONB, this, "", new NormalConverter());
TableField<TestRecord, NormalType[]> normalArray = createField(DSL.name("normal_array"), org.jooq.impl.SQLDataType.JSONB.getArrayDataType(), this, "", new NormalArrayConverter());
TableField<TestRecord, DomainType[]> domainArray = createField(DSL.name("domain_array"), org.jooq.impl.DefaultDataType.getDefaultDataType("\"public\".\"jsonb_dom\"").getArrayDataType(), this, "", new DomainArrayConverter());
```
As you can see above, while matching domains with forced types is currently not supported, for arrays you instead **have to** match against `_JSONB_DOM`.
### Versions:
- jOOQ: 3.13.2
- Java: 11
- Database (include vendor): PostgreSQL 11.7
- JDBC Driver (include name if inofficial driver): 42.2.12
Related to: #681, #3486, #6568, #8439
|
1.0
|
PostgreSQL array of domain inconsistent behaviour - While most of it was already mentioned in related issues, it might help to have this consolidated test case.
### Expected behavior and actual behavior:
With the following SQL:
```sql
CREATE DOMAIN JSONB_DOM AS JSONB;
CREATE TABLE test
(
normal JSONB,
domain JSONB_DOM,
normal_array JSONB[],
domain_array JSONB_DOM[]
)
```
Generated table fields are:
```java
TableField<TestRecord, JSONB> normal = createField(DSL.name("normal"), org.jooq.impl.SQLDataType.JSONB, this, "");
TableField<TestRecord, JSONB> domain = createField(DSL.name("domain"), org.jooq.impl.SQLDataType.JSONB, this, "");
TableField<TestRecord, JSONB[]> normalArray = createField(DSL.name("normal_array"), org.jooq.impl.SQLDataType.JSONB.getArrayDataType(), this, "");
TableField<TestRecord, Object[]> domainArray = createField(DSL.name("domain_array"), org.jooq.impl.DefaultDataType.getDefaultDataType("\"public\".\"jsonb_dom\"").getArrayDataType(), this, "");
```
Inconsistencies go further if you use forced types with converters, with Gradle config:
```groovy
forcedType {
types = 'JSONB'
userType = 'NormalType'
converter = 'NormalConverter'
}
forcedType {
types = 'JSONB_DOM'
userType = 'DomainType'
converter = 'DomainConverter'
}
forcedType {
types = '_JSONB'
userType = 'NormalType[]'
converter = 'NormalArrayConverter'
}
forcedType {
types = '_JSONB_DOM'
userType = 'DomainType[]'
converter = 'DomainArrayConverter'
}
```
Resulting fields are:
```java
TableField<TestRecord, NormalType> normal = createField(DSL.name("normal"), org.jooq.impl.SQLDataType.JSONB, this, "", new NormalConverter());
TableField<TestRecord, NormalType> domain = createField(DSL.name("domain"), org.jooq.impl.SQLDataType.JSONB, this, "", new NormalConverter());
TableField<TestRecord, NormalType[]> normalArray = createField(DSL.name("normal_array"), org.jooq.impl.SQLDataType.JSONB.getArrayDataType(), this, "", new NormalArrayConverter());
TableField<TestRecord, DomainType[]> domainArray = createField(DSL.name("domain_array"), org.jooq.impl.DefaultDataType.getDefaultDataType("\"public\".\"jsonb_dom\"").getArrayDataType(), this, "", new DomainArrayConverter());
```
As you can see above, while matching domains with forced types is currently not supported, for arrays you instead **have to** match against `_JSONB_DOM`.
### Versions:
- jOOQ: 3.13.2
- Java: 11
- Database (include vendor): PostgreSQL 11.7
- JDBC Driver (include name if inofficial driver): 42.2.12
Related to: #681, #3486, #6568, #8439
|
defect
|
postgresql array of domain inconsistent behaviour while most of it was already mentioned in related issues it might help to have this consolidated test case expected behavior and actual behavior with the following sql sql create domain jsonb dom as jsonb create table test normal jsonb domain jsonb dom normal array jsonb domain array jsonb dom generated table fields are java tablefield normal createfield dsl name normal org jooq impl sqldatatype jsonb this tablefield domain createfield dsl name domain org jooq impl sqldatatype jsonb this tablefield normalarray createfield dsl name normal array org jooq impl sqldatatype jsonb getarraydatatype this tablefield domainarray createfield dsl name domain array org jooq impl defaultdatatype getdefaultdatatype public jsonb dom getarraydatatype this inconsistencies go further if you use forced types with converters with gradle config groovy forcedtype types jsonb usertype normaltype converter normalconverter forcedtype types jsonb dom usertype domaintype converter domainconverter forcedtype types jsonb usertype normaltype converter normalarrayconverter forcedtype types jsonb dom usertype domaintype converter domainarrayconverter resulting fields are java tablefield normal createfield dsl name normal org jooq impl sqldatatype jsonb this new normalconverter tablefield domain createfield dsl name domain org jooq impl sqldatatype jsonb this new normalconverter tablefield normalarray createfield dsl name normal array org jooq impl sqldatatype jsonb getarraydatatype this new normalarrayconverter tablefield domainarray createfield dsl name domain array org jooq impl defaultdatatype getdefaultdatatype public jsonb dom getarraydatatype this new domainarrayconverter as you can see above while matching domains with forced types is currently not supported for arrays you instead have to match against jsonb dom versions jooq java database include vendor postgresql jdbc driver include name if inofficial driver related to
| 1
|
104,332
| 11,404,121,493
|
IssuesEvent
|
2020-01-31 09:06:17
|
jonasjungaker/DD2480_A2_CI_Group10
|
https://api.github.com/repos/jonasjungaker/DD2480_A2_CI_Group10
|
opened
|
API documentation
|
documentation
|
A criterion for the assignment is to convert the JavaDoc into a browsable format, e.g. an HTML file.
|
1.0
|
API documentation - A criterion for the assignment is to convert the JavaDoc into a browsable format, e.g. an HTML file.
|
non_defect
|
api documentation a criterion for the assignment is to convert the javadoc into a browsable format e g an html file
| 0
|
5,835
| 2,610,216,324
|
IssuesEvent
|
2015-02-26 19:08:57
|
chrsmith/somefinders
|
https://api.github.com/repos/chrsmith/somefinders
|
opened
|
схемы вышивок крестом пасха в формате xsd.pdf
|
auto-migrated Priority-Medium Type-Defect
|
```
'''Анисим Никитин'''
День добрый никак не могу найти .схемы
вышивок крестом пасха в формате xsd.pdf. как то
выкладывали уже
'''Арий Матвеев'''
Вот хороший сайт где можно скачать
http://bit.ly/1h43pJ3
'''Борислав Лебедев'''
Просит ввести номер мобилы!Не опасно ли это?
'''Владлен Емельянов'''
Неа все ок у меня ничего не списало
'''Борислав Максимов'''
Не это не влияет на баланс
Информация о файле: схемы вышивок крестом
пасха в формате xsd.pdf
Загружен: В этом месяце
Скачан раз: 724
Рейтинг: 1220
Средняя скорость скачивания: 996
Похожих файлов: 23
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 16 Dec 2013 at 6:53
|
1.0
|
схемы вышивок крестом пасха в формате xsd.pdf - ```
'''Анисим Никитин'''
День добрый никак не могу найти .схемы
вышивок крестом пасха в формате xsd.pdf. как то
выкладывали уже
'''Арий Матвеев'''
Вот хороший сайт где можно скачать
http://bit.ly/1h43pJ3
'''Борислав Лебедев'''
Просит ввести номер мобилы!Не опасно ли это?
'''Владлен Емельянов'''
Неа все ок у меня ничего не списало
'''Борислав Максимов'''
Не это не влияет на баланс
Информация о файле: схемы вышивок крестом
пасха в формате xsd.pdf
Загружен: В этом месяце
Скачан раз: 724
Рейтинг: 1220
Средняя скорость скачивания: 996
Похожих файлов: 23
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 16 Dec 2013 at 6:53
|
defect
|
схемы вышивок крестом пасха в формате xsd pdf анисим никитин день добрый никак не могу найти схемы вышивок крестом пасха в формате xsd pdf как то выкладывали уже арий матвеев вот хороший сайт где можно скачать борислав лебедев просит ввести номер мобилы не опасно ли это владлен емельянов неа все ок у меня ничего не списало борислав максимов не это не влияет на баланс информация о файле схемы вышивок крестом пасха в формате xsd pdf загружен в этом месяце скачан раз рейтинг средняя скорость скачивания похожих файлов original issue reported on code google com by kondense gmail com on dec at
| 1
|
407,097
| 27,598,103,578
|
IssuesEvent
|
2023-03-09 08:10:33
|
gbowne1/dotfiles
|
https://api.github.com/repos/gbowne1/dotfiles
|
opened
|
My .bashrc needs help
|
bug documentation enhancement help wanted good first issue question
|
This isn't far from the default and could be a lot better.
I use Debian 10 and 11
I don't know a lot about bash or .bashrc but I am 100% sure it could be a lot better and could use help with this.
I also at times use tmux as well as bash inside vscode.
|
1.0
|
My .bashrc needs help - This isn't far from the default and could be a lot better.
I use Debian 10 and 11
I don't know a lot about bash or .bashrc but I am 100% sure it could be a lot better and could use help with this.
I also at times use tmux as well as bash inside vscode.
|
non_defect
|
my bashrc needs help this isn t far from the default and could be a lot better i use debian and i don t know a lot about bash or bashrc but i am sure it could be a lot better and could use help with this i also at times use tmux as well as bash inside vscode
| 0
|
189,674
| 14,517,527,255
|
IssuesEvent
|
2020-12-13 19:58:33
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
closed
|
The action 'Copy' is enabled in the context menu when selecting one blob and its blob snapshots
|
:beetle: regression :gear: blobs :heavy_check_mark: merged 🧪 testing
|
**Storage Explorer Version**: 1.17.0
**Build Number**: 20201210.19
**Branch**: rel/1.17.0
**Platform/OS**: Windows 10/ Linux Ubuntu 16.04 / MacOS Catalina
**Architecture**: ia32/x64
**Regression From**: Previous release(1.15.1)
## Steps to Reproduce ##
1. Expand one storage account -> Blob Containers.
2. Create a blob container -> Upload a blob to it -> Create snapshots for the blob.
3. Select the blob and its blob snapshots under 'Manage Snapshot' view.
4. Observe the context menu.
5. Check whether the action 'Copy' is disabled or not.
## Expected Experience ##
The action 'Copy' is disabled in the context menu.
## Actual Experience ##
The action 'Copy' is enabled in the context menu.

## Additional Context ##
1. This issue doesn't reproduce in the toolbar.

2. This issue doesn't reproduce for one ADLS Gen2 blob container.
|
1.0
|
The action 'Copy' is enabled in the context menu when selecting one blob and its blob snapshots - **Storage Explorer Version**: 1.17.0
**Build Number**: 20201210.19
**Branch**: rel/1.17.0
**Platform/OS**: Windows 10/ Linux Ubuntu 16.04 / MacOS Catalina
**Architecture**: ia32/x64
**Regression From**: Previous release(1.15.1)
## Steps to Reproduce ##
1. Expand one storage account -> Blob Containers.
2. Create a blob container -> Upload a blob to it -> Create snapshots for the blob.
3. Select the blob and its blob snapshots under 'Manage Snapshot' view.
4. Observe the context menu.
5. Check whether the action 'Copy' is disabled or not.
## Expected Experience ##
The action 'Copy' is disabled in the context menu.
## Actual Experience ##
The action 'Copy' is enabled in the context menu.

## Additional Context ##
1. This issue doesn't reproduce in the toolbar.

2. This issue doesn't reproduce for one ADLS Gen2 blob container.
|
non_defect
|
the action copy is enabled in the context menu when selecting one blob and its blob snapshots storage explorer version build number branch rel platform os windows linux ubuntu macos catalina architecture regression from previous release steps to reproduce expand one storage account blob containers create a blob container upload a blob to it create snapshots for the blob select the blob and its blob snapshots under manage snapshot view observe the context menu check whether the action copy is disabled or not expected experience the action copy is disabled in the context menu actual experience the action copy is enabled in the context menu additional context this issue doesn t reproduce in the toolbar this issue doesn t reproduce for one adls blob container
| 0
|
172,516
| 13,308,357,553
|
IssuesEvent
|
2020-08-26 00:44:46
|
ignitionrobotics/ign-fuel-tools
|
https://api.github.com/repos/ignitionrobotics/ign-fuel-tools
|
opened
|
Command line tools failing on Windows and macOS
|
Windows bug macOS tests
|
The `ign_TEST`s fail on homebrew with:
```
31: Expected: (output.find(g_version)) != (std::string::npos), actual: 18446744073709551615 vs 18446744073709551615
31: I cannot find any available 'ign' command:
31: * Did you install any ignition library?
31: * Did you set the IGN_CONFIG_PATH environment variable?
31: E.g.: export IGN_CONFIG_PATH=$HOME/local/share/ignition
```
And on Windows with:
```
31: D:\Jenkins\workspace\ignition_fuel-tools-ci-pr_any-windows7-amd64\ws\ign-fuel-tools\src\ign_TEST.cc(57): error: Expected: (output.find(g_version)) != (std::string::npos), actual: 18446744073709551615 vs 18446744073709551615
31: 'ign' is not recognized as an internal or external command,
31: operable program or batch file.
```
|
1.0
|
Command line tools failing on Windows and macOS - The `ign_TEST`s fail on homebrew with:
```
31: Expected: (output.find(g_version)) != (std::string::npos), actual: 18446744073709551615 vs 18446744073709551615
31: I cannot find any available 'ign' command:
31: * Did you install any ignition library?
31: * Did you set the IGN_CONFIG_PATH environment variable?
31: E.g.: export IGN_CONFIG_PATH=$HOME/local/share/ignition
```
And on Windows with:
```
31: D:\Jenkins\workspace\ignition_fuel-tools-ci-pr_any-windows7-amd64\ws\ign-fuel-tools\src\ign_TEST.cc(57): error: Expected: (output.find(g_version)) != (std::string::npos), actual: 18446744073709551615 vs 18446744073709551615
31: 'ign' is not recognized as an internal or external command,
31: operable program or batch file.
```
|
non_defect
|
command line tools failing on windows and macos the ign test s fail on homebrew with expected output find g version std string npos actual vs i cannot find any available ign command did you install any ignition library did you set the ign config path environment variable e g export ign config path home local share ignition and on windows with d jenkins workspace ignition fuel tools ci pr any ws ign fuel tools src ign test cc error expected output find g version std string npos actual vs ign is not recognized as an internal or external command operable program or batch file
| 0
|
711,976
| 24,480,875,679
|
IssuesEvent
|
2022-10-08 20:23:29
|
deckhouse/deckhouse
|
https://api.github.com/repos/deckhouse/deckhouse
|
reopened
|
[cloud-provider-yandex] Prevent machine checksum change if platformID is set to default value in instance class
|
area/cloud-provider type/bug status/rotten priority/backlog source/deckhouse-team
|
### Preflight Checklist
- [X] I agree to follow the [Code of Conduct](https://github.com/deckhouse/deckhouse/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://github.com/deckhouse/deckhouse/issues) for an issue that matches the one I want to file, without success.
### Version
v1.33
### Expected Behavior
Machine checksum don't change if platformID is set to default value in instance class.
### Actual Behavior
Machines are rolled out.
### Steps To Reproduce
_No response_
### Additional Information
Bug
https://github.com/deckhouse/deckhouse/blob/main/modules/030-cloud-provider-yandex/cloud-instance-manager/machine-class.checksum#L2
### Logs
_No response_
|
1.0
|
[cloud-provider-yandex] Prevent machine checksum change if platformID is set to default value in instance class - ### Preflight Checklist
- [X] I agree to follow the [Code of Conduct](https://github.com/deckhouse/deckhouse/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://github.com/deckhouse/deckhouse/issues) for an issue that matches the one I want to file, without success.
### Version
v1.33
### Expected Behavior
Machine checksum don't change if platformID is set to default value in instance class.
### Actual Behavior
Machines are rolled out.
### Steps To Reproduce
_No response_
### Additional Information
Bug
https://github.com/deckhouse/deckhouse/blob/main/modules/030-cloud-provider-yandex/cloud-instance-manager/machine-class.checksum#L2
### Logs
_No response_
|
non_defect
|
prevent machine checksum change if platformid is set to default value in instance class preflight checklist i agree to follow the that this project adheres to i have searched the for an issue that matches the one i want to file without success version expected behavior machine checksum don t change if platformid is set to default value in instance class actual behavior machines are rolled out steps to reproduce no response additional information bug logs no response
| 0
|
347,267
| 24,887,824,933
|
IssuesEvent
|
2022-10-28 09:18:02
|
Franky4566/ped
|
https://api.github.com/repos/Franky4566/ped
|
opened
|
Formatting
|
type.DocumentationBug severity.Low
|
Can consider including line breaks inbetween as its a bit hard to read the long output everytime we add a client/ property, pair or delete

<!--session: 1666947272254-69b62f37-508a-4913-a024-1403c4ed688b-->
<!--Version: Web v3.4.4-->
|
1.0
|
Formatting - Can consider including line breaks inbetween as its a bit hard to read the long output everytime we add a client/ property, pair or delete

<!--session: 1666947272254-69b62f37-508a-4913-a024-1403c4ed688b-->
<!--Version: Web v3.4.4-->
|
non_defect
|
formatting can consider including line breaks inbetween as its a bit hard to read the long output everytime we add a client property pair or delete
| 0
|
23,481
| 3,830,572,625
|
IssuesEvent
|
2016-03-31 15:00:00
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
Cell templating ignored on scrollable table
|
defect
|
When a datatable is scrollable and has templating for columns, the templates are ignored.
```xml
<p-dataTable [value]="cars" scrollable="true" scrollHeight="200">
<header>Vertical</header>
<p-column field="vin" header="Vin"></p-column>
<p-column field="year" header="Year"></p-column>
<p-column field="brand" header="Brand"></p-column>
<p-column field="color" header="Color">
<template #col #car="rowData">
<span [style.color]="car[col.field]">{{car[col.field]}}</span>
</template>
</p-column>
</p-dataTable>
```
|
1.0
|
Cell templating ignored on scrollable table - When a datatable is scrollable and has templating for columns, the templates are ignored.
```xml
<p-dataTable [value]="cars" scrollable="true" scrollHeight="200">
<header>Vertical</header>
<p-column field="vin" header="Vin"></p-column>
<p-column field="year" header="Year"></p-column>
<p-column field="brand" header="Brand"></p-column>
<p-column field="color" header="Color">
<template #col #car="rowData">
<span [style.color]="car[col.field]">{{car[col.field]}}</span>
</template>
</p-column>
</p-dataTable>
```
|
defect
|
cell templating ignored on scrollable table when a datatable is scrollable and has templating for columns the templates are ignored xml vertical car
| 1
|
11,930
| 3,549,761,336
|
IssuesEvent
|
2016-01-20 19:17:01
|
asciidoctor/asciidoctor.js
|
https://api.github.com/repos/asciidoctor/asciidoctor.js
|
opened
|
Add instructions to the contributing code guide about how to add a spec
|
documentation
|
Add information to the contributing code guide about how to add a new spec and how to run the specs. This will encourage participation and help grow the test suite.
|
1.0
|
Add instructions to the contributing code guide about how to add a spec - Add information to the contributing code guide about how to add a new spec and how to run the specs. This will encourage participation and help grow the test suite.
|
non_defect
|
add instructions to the contributing code guide about how to add a spec add information to the contributing code guide about how to add a new spec and how to run the specs this will encourage participation and help grow the test suite
| 0
|
11,242
| 2,641,957,958
|
IssuesEvent
|
2015-03-11 20:44:32
|
chrsmith/html5rocks
|
https://api.github.com/repos/chrsmith/html5rocks
|
closed
|
Broken links on website
|
Priority-Medium Type-Defect
|
Original [issue 164](https://code.google.com/p/html5rocks/issues/detail?id=164) created by chrsmith on 2010-08-17T08:36:38.000Z:
Very minor issue...the links in the footer of the "Samples Studio" do not work as the page is hosted on a new domain and the links are relative, e.g. http://studio.html5rocks.com/tos.html.
|
1.0
|
Broken links on website - Original [issue 164](https://code.google.com/p/html5rocks/issues/detail?id=164) created by chrsmith on 2010-08-17T08:36:38.000Z:
Very minor issue...the links in the footer of the "Samples Studio" do not work as the page is hosted on a new domain and the links are relative, e.g. http://studio.html5rocks.com/tos.html.
|
defect
|
broken links on website original created by chrsmith on very minor issue the links in the footer of the quot samples studio quot do not work as the page is hosted on a new domain and the links are relative e g
| 1
|
71,578
| 13,686,368,366
|
IssuesEvent
|
2020-09-30 08:36:10
|
microsoft/azure-pipelines-tasks
|
https://api.github.com/repos/microsoft/azure-pipelines-tasks
|
closed
|
PublishCodeCoverageResults seems to publish to build artifacts which multi-stage pipeline fails when trying to auto-retrieve.
|
Area: CodeCoverage Area: Test bug
|
## Note
Issues in this repo are for tracking bugs, feature requests and questions for the tasks in this repo
For a list:
https://github.com/Microsoft/azure-pipelines-tasks/tree/master/Tasks
If you have an issue or request for the Azure Pipelines service, use developer community instead:
https://developercommunity.visualstudio.com/spaces/21/index.html )
## Required Information
Entering this information will route you directly to the right team and expedite traction.
**Question, Bug, or Feature?**
*Type*: Bug
**Enter Task Name**: PublishCodeCoverageResults
list here (V# not needed):
https://github.com/Microsoft/azure-pipelines-tasks/tree/master/Tasks
## Environment
- Server - Azure Pipelines or TFS on-premises? Azure Pipelines
- If using TFS on-premises, provide the version:
- If using Azure Pipelines, provide the account name, team project name, build definition name/build number: AvidMicrosoftProjects Disney POFs buildId=538 #20200629.16
- Agent - Hosted or Private:
- If using Hosted agent, provide agent queue name: Unsure
- If using private agent, provide the OS of the machine running the agent and the agent version:
## Issue Description
[Include task name(s), screenshots and any other relevant details]
The pipeline is multi-stage (Build/Test and Deploy).
When this task is available in the Build pipeline, it succeeds but I think the artifact creation causes a failure in the deployment pipeline.
#- task: PublishCodeCoverageResults@1
# displayName: Publish Code Coverage
# condition: and(succeeded(), eq(variables['Build.Reason'], 'PullRequest'))
# inputs:
# codeCoverageTool: Cobertura
# summaryFileLocation: $(Build.SourcesDirectory)/coverlet/reports/Cobertura.xml
# failIfCoverageEmpty: true
### Task logs
[Enable debug logging and please provide the zip file containing all the logs for a speedy resolution]
[logs_538.zip](https://github.com/microsoft/azure-pipelines-tasks/files/4848923/logs_538.zip)
## Troubleshooting
Checkout how to troubleshoot failures and collect debug logs: https://docs.microsoft.com/en-us/vsts/build-release/actions/troubleshooting
### Error logs
[Insert error from the logs here for a quick overview]
2020-06-30T01:27:33.9661901Z ##[section]Starting: Deploy
2020-06-30T01:27:34.1191600Z ##[section]Starting: Initialize job
2020-06-30T01:27:34.1193026Z Agent name: 'Hosted Agent'
2020-06-30T01:27:34.1193429Z Agent machine name: 'fv-az755'
2020-06-30T01:27:34.1193707Z Current agent version: '2.171.1'
2020-06-30T01:27:34.1234608Z ##[group]Operating System
2020-06-30T01:27:34.1234864Z Ubuntu
2020-06-30T01:27:34.1235050Z 18.04.4
2020-06-30T01:27:34.1235213Z LTS
2020-06-30T01:27:34.1235386Z ##[endgroup]
2020-06-30T01:27:34.1235600Z ##[group]Virtual Environment
2020-06-30T01:27:34.1235829Z Environment: ubuntu-18.04
2020-06-30T01:27:34.1236054Z Version: 20200621.1
2020-06-30T01:27:34.1236397Z Included Software: https://github.com/actions/virtual-environments/blob/ubuntu18/20200621.1/images/linux/Ubuntu1804-README.md
2020-06-30T01:27:34.1236734Z ##[endgroup]
2020-06-30T01:27:34.1237792Z Current image version: '20200621.1'
2020-06-30T01:27:34.1242710Z Agent running as: 'vsts'
2020-06-30T01:27:34.1297509Z Prepare build directory.
2020-06-30T01:27:34.1645816Z Set build variables.
2020-06-30T01:27:34.1681897Z Download all required tasks.
2020-06-30T01:27:34.1805217Z Downloading task: DownloadPipelineArtifact (1.2.5)
2020-06-30T01:27:35.1408815Z Downloading task: DownloadBuildArtifacts (0.167.2)
2020-06-30T01:27:36.1138309Z Downloading task: AzureCLI (1.163.3)
2020-06-30T01:27:36.1984220Z Downloading task: AzureWebAppContainer (1.163.7)
2020-06-30T01:27:36.9752495Z Checking job knob settings.
2020-06-30T01:27:36.9761104Z Knob: AgentToolsDirectory = /opt/hostedtoolcache Source: ${AGENT_TOOLSDIRECTORY}
2020-06-30T01:27:36.9762197Z Knob: AgentPerflog = /home/vsts/perflog Source: ${VSTS_AGENT_PERFLOG}
2020-06-30T01:27:36.9762762Z Finished checking job knob settings.
2020-06-30T01:27:37.0058810Z Start tracking orphan processes.
2020-06-30T01:27:37.0245729Z ##[section]Finishing: Initialize job
2020-06-30T01:27:37.0604628Z ##[section]Starting: Download Artifact
2020-06-30T01:27:37.0816384Z ==============================================================================
2020-06-30T01:27:37.0817345Z Task : Download pipeline artifact
2020-06-30T01:27:37.0817972Z Description : Download a named artifact from a pipeline to a local path
2020-06-30T01:27:37.0818242Z Version : 1.2.5
2020-06-30T01:27:37.0818736Z Author : Microsoft Corporation
2020-06-30T01:27:37.0819483Z Help : Download a named artifact from a pipeline to a local path
2020-06-30T01:27:37.0819824Z ==============================================================================
2020-06-30T01:27:37.4886425Z Download from the specified build: #538
2020-06-30T01:27:37.4888702Z Download artifact to: /home/vsts/work/1/
2020-06-30T01:27:38.3662902Z Information, ApplicationInsightsTelemetrySender will correlate events with X-TFS-Session 7018f52c-074f-4683-abf4-73fab2331f8b
2020-06-30T01:27:38.4318521Z Information, ApplicationInsightsTelemetrySender did not correlate any events with X-TFS-Session 7018f52c-074f-4683-abf4-73fab2331f8b
2020-06-30T01:27:38.4356165Z ##[error]Could not find any pipeline artifacts in the build.
2020-06-30T01:27:38.4483293Z ##[section]Finishing: Download Artifact
2020-06-30T01:27:38.4597268Z ##[section]Starting: Finalize Job
2020-06-30T01:27:38.4627542Z Cleaning up task key
2020-06-30T01:27:38.4628943Z Start cleaning up orphan processes.
2020-06-30T01:27:38.4889121Z ##[section]Finishing: Finalize Job
2020-06-30T01:27:38.4929101Z ##[section]Finishing: Deploy
|
1.0
|
PublishCodeCoverageResults seems to publish to build artifacts which multi-stage pipeline fails when trying to auto-retrieve. - ## Note
Issues in this repo are for tracking bugs, feature requests and questions for the tasks in this repo
For a list:
https://github.com/Microsoft/azure-pipelines-tasks/tree/master/Tasks
If you have an issue or request for the Azure Pipelines service, use developer community instead:
https://developercommunity.visualstudio.com/spaces/21/index.html )
## Required Information
Entering this information will route you directly to the right team and expedite traction.
**Question, Bug, or Feature?**
*Type*: Bug
**Enter Task Name**: PublishCodeCoverageResults
list here (V# not needed):
https://github.com/Microsoft/azure-pipelines-tasks/tree/master/Tasks
## Environment
- Server - Azure Pipelines or TFS on-premises? Azure Pipelines
- If using TFS on-premises, provide the version:
- If using Azure Pipelines, provide the account name, team project name, build definition name/build number: AvidMicrosoftProjects Disney POFs buildId=538 #20200629.16
- Agent - Hosted or Private:
- If using Hosted agent, provide agent queue name: Unsure
- If using private agent, provide the OS of the machine running the agent and the agent version:
## Issue Description
[Include task name(s), screenshots and any other relevant details]
The pipeline is multi-stage (Build/Test and Deploy).
When this task is available in the Build pipeline, it succeeds but I think the artifact creation causes a failure in the deployment pipeline.
#- task: PublishCodeCoverageResults@1
# displayName: Publish Code Coverage
# condition: and(succeeded(), eq(variables['Build.Reason'], 'PullRequest'))
# inputs:
# codeCoverageTool: Cobertura
# summaryFileLocation: $(Build.SourcesDirectory)/coverlet/reports/Cobertura.xml
# failIfCoverageEmpty: true
### Task logs
[Enable debug logging and please provide the zip file containing all the logs for a speedy resolution]
[logs_538.zip](https://github.com/microsoft/azure-pipelines-tasks/files/4848923/logs_538.zip)
## Troubleshooting
Checkout how to troubleshoot failures and collect debug logs: https://docs.microsoft.com/en-us/vsts/build-release/actions/troubleshooting
### Error logs
[Insert error from the logs here for a quick overview]
2020-06-30T01:27:33.9661901Z ##[section]Starting: Deploy
2020-06-30T01:27:34.1191600Z ##[section]Starting: Initialize job
2020-06-30T01:27:34.1193026Z Agent name: 'Hosted Agent'
2020-06-30T01:27:34.1193429Z Agent machine name: 'fv-az755'
2020-06-30T01:27:34.1193707Z Current agent version: '2.171.1'
2020-06-30T01:27:34.1234608Z ##[group]Operating System
2020-06-30T01:27:34.1234864Z Ubuntu
2020-06-30T01:27:34.1235050Z 18.04.4
2020-06-30T01:27:34.1235213Z LTS
2020-06-30T01:27:34.1235386Z ##[endgroup]
2020-06-30T01:27:34.1235600Z ##[group]Virtual Environment
2020-06-30T01:27:34.1235829Z Environment: ubuntu-18.04
2020-06-30T01:27:34.1236054Z Version: 20200621.1
2020-06-30T01:27:34.1236397Z Included Software: https://github.com/actions/virtual-environments/blob/ubuntu18/20200621.1/images/linux/Ubuntu1804-README.md
2020-06-30T01:27:34.1236734Z ##[endgroup]
2020-06-30T01:27:34.1237792Z Current image version: '20200621.1'
2020-06-30T01:27:34.1242710Z Agent running as: 'vsts'
2020-06-30T01:27:34.1297509Z Prepare build directory.
2020-06-30T01:27:34.1645816Z Set build variables.
2020-06-30T01:27:34.1681897Z Download all required tasks.
2020-06-30T01:27:34.1805217Z Downloading task: DownloadPipelineArtifact (1.2.5)
2020-06-30T01:27:35.1408815Z Downloading task: DownloadBuildArtifacts (0.167.2)
2020-06-30T01:27:36.1138309Z Downloading task: AzureCLI (1.163.3)
2020-06-30T01:27:36.1984220Z Downloading task: AzureWebAppContainer (1.163.7)
2020-06-30T01:27:36.9752495Z Checking job knob settings.
2020-06-30T01:27:36.9761104Z Knob: AgentToolsDirectory = /opt/hostedtoolcache Source: ${AGENT_TOOLSDIRECTORY}
2020-06-30T01:27:36.9762197Z Knob: AgentPerflog = /home/vsts/perflog Source: ${VSTS_AGENT_PERFLOG}
2020-06-30T01:27:36.9762762Z Finished checking job knob settings.
2020-06-30T01:27:37.0058810Z Start tracking orphan processes.
2020-06-30T01:27:37.0245729Z ##[section]Finishing: Initialize job
2020-06-30T01:27:37.0604628Z ##[section]Starting: Download Artifact
2020-06-30T01:27:37.0816384Z ==============================================================================
2020-06-30T01:27:37.0817345Z Task : Download pipeline artifact
2020-06-30T01:27:37.0817972Z Description : Download a named artifact from a pipeline to a local path
2020-06-30T01:27:37.0818242Z Version : 1.2.5
2020-06-30T01:27:37.0818736Z Author : Microsoft Corporation
2020-06-30T01:27:37.0819483Z Help : Download a named artifact from a pipeline to a local path
2020-06-30T01:27:37.0819824Z ==============================================================================
2020-06-30T01:27:37.4886425Z Download from the specified build: #538
2020-06-30T01:27:37.4888702Z Download artifact to: /home/vsts/work/1/
2020-06-30T01:27:38.3662902Z Information, ApplicationInsightsTelemetrySender will correlate events with X-TFS-Session 7018f52c-074f-4683-abf4-73fab2331f8b
2020-06-30T01:27:38.4318521Z Information, ApplicationInsightsTelemetrySender did not correlate any events with X-TFS-Session 7018f52c-074f-4683-abf4-73fab2331f8b
2020-06-30T01:27:38.4356165Z ##[error]Could not find any pipeline artifacts in the build.
2020-06-30T01:27:38.4483293Z ##[section]Finishing: Download Artifact
2020-06-30T01:27:38.4597268Z ##[section]Starting: Finalize Job
2020-06-30T01:27:38.4627542Z Cleaning up task key
2020-06-30T01:27:38.4628943Z Start cleaning up orphan processes.
2020-06-30T01:27:38.4889121Z ##[section]Finishing: Finalize Job
2020-06-30T01:27:38.4929101Z ##[section]Finishing: Deploy
|
non_defect
|
publishcodecoverageresults seems to publish to build artifacts which multi stage pipeline fails when trying to auto retrieve note issues in this repo are for tracking bugs feature requests and questions for the tasks in this repo for a list if you have an issue or request for the azure pipelines service use developer community instead required information entering this information will route you directly to the right team and expedite traction question bug or feature type bug enter task name publishcodecoverageresults list here v not needed environment server azure pipelines or tfs on premises azure pipelines if using tfs on premises provide the version if using azure pipelines provide the account name team project name build definition name build number avidmicrosoftprojects disney pofs buildid agent hosted or private if using hosted agent provide agent queue name unsure if using private agent provide the os of the machine running the agent and the agent version issue description the pipeline is multi stage build test and deploy when this task is available in the build pipeline it succeeds but i think the artifact creation causes a failure in the deployment pipeline task publishcodecoverageresults displayname publish code coverage condition and succeeded eq variables pullrequest inputs codecoveragetool cobertura summaryfilelocation build sourcesdirectory coverlet reports cobertura xml failifcoverageempty true task logs troubleshooting checkout how to troubleshoot failures and collect debug logs error logs starting deploy starting initialize job agent name hosted agent agent machine name fv current agent version operating system ubuntu lts virtual environment environment ubuntu version included software current image version agent running as vsts prepare build directory set build variables download all required tasks downloading task downloadpipelineartifact downloading task downloadbuildartifacts downloading task azurecli downloading task azurewebappcontainer 
checking job knob settings knob agenttoolsdirectory opt hostedtoolcache source agent toolsdirectory knob agentperflog home vsts perflog source vsts agent perflog finished checking job knob settings start tracking orphan processes finishing initialize job starting download artifact task download pipeline artifact description download a named artifact from a pipeline to a local path version author microsoft corporation help download a named artifact from a pipeline to a local path download from the specified build download artifact to home vsts work information applicationinsightstelemetrysender will correlate events with x tfs session information applicationinsightstelemetrysender did not correlate any events with x tfs session could not find any pipeline artifacts in the build finishing download artifact starting finalize job cleaning up task key start cleaning up orphan processes finishing finalize job finishing deploy
| 0
|
11,931
| 4,321,393,288
|
IssuesEvent
|
2016-07-25 10:00:14
|
KodrAus/elasticsearch-rs
|
https://api.github.com/repos/KodrAus/elasticsearch-rs
|
closed
|
Breaking Changes to API in 5.0
|
codegen hyper notes rotor
|
See: https://www.elastic.co/guide/en/elasticsearch/reference/master/breaking_50_rest_api_changes.html
There aren't a whole lot of changes to the REST API, and those should be picked up automagically by updating the spec
|
1.0
|
Breaking Changes to API in 5.0 - See: https://www.elastic.co/guide/en/elasticsearch/reference/master/breaking_50_rest_api_changes.html
There aren't a whole lot of changes to the REST API, and those should be picked up automagically by updating the spec
|
non_defect
|
breaking changes to api in see there aren t a whole lot of changes to the rest api and those should be picked up automagically by updating the spec
| 0
|
46,981
| 13,056,009,522
|
IssuesEvent
|
2020-07-30 03:22:49
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
[dataio] Undefined behavior in dataio/icetray (Trac #2216)
|
Incomplete Migration Migrated from Trac combo core defect
|
Migrated from https://code.icecube.wisc.edu/ticket/2216
```json
{
"status": "closed",
"changetime": "2019-03-19T15:14:54",
"description": "I can elicit undefined behavior in dataio unit tests by the following:\n\n1. Run a_nocompression.py\n2. Run i_adds_mutineer.py\n3. Run j_fatals_reading_mutineer.py with the addition of 'from icecube.test_unregistered import UnregisteredTrack'. I've pasted a single python file below that elicits this behavior.\n\nThe test randomly passes/fails. I can only reproduce this behavior on Mac OSX, and my test setup included Python 2.7.14 and LLVM-9.0.0.\n\n\n#!/usr/bin/env python\n\nfrom I3Tray import *\n\nfrom os.path import expandvars\n\nimport os\nimport sys\n\nfrom icecube import dataclasses \nfrom icecube import phys_services \nfrom icecube import dataio \n\nfrom icecube.test_unregistered import UnregisteredTrack\n\n#\n# This sets up a bunch of paths of files and stuff. Nice to have a\n# real scripting language at one's disposal for this kind of thing.\n#\ntestdata = expandvars(\"$I3_TESTDATA\")\nrunfile = testdata + \"/2007data/2007_I3Only_Run109732_Nch20.i3.gz\"\n\ntray = I3Tray()\n\ntray.AddModule(\"I3Reader\",\n Filename = runfile,\n SkipKeys = [\"I3PfFilterMask\",\"CalibratedFADC\",\"CalibratedATWD\"])\n\n# This file is super old\ntray.AddModule(\"QConverter\")\ntray.AddModule(lambda fr: False) # Drop all existing P-frames\n\n#\n# Make the Q frames into P\n#\ntray.AddModule(\"I3NullSplitter\")\n\n#\n# And this is the magic writer. We will make it work harder later.\n#\ntray.AddModule(\"I3Writer\", filename = \"pass1.i3\")\n\n#\n# Here we specify how many frames to process, or we can omit the\n# argument to Execute() and the the tray will run until a module tells\n# it to stop (via RequestSuspension()). 
We'll do a few frames so\n# there's a chunk of data involved.\n#\ntray.Execute(15)\n\ntray = I3Tray()\ntray.AddModule(\"I3Reader\", Filename=\"pass1.i3\")\ntray.AddModule(lambda frame : \\\n frame.Put(\"mutineer\", UnregisteredTrack()))\ntray.AddModule(\"Dump\")\ntray.AddModule(\"I3Writer\", Filename = \"hasmutineer.i3.gz\")\n\ntray.Execute()\n\ntray = I3Tray()\n\n# by default the reader will log_fatal if it can't deserialize something\ntray.AddModule(\"I3Reader\",\"reader\", Filename=expandvars(\"hasmutineer.i3.gz\"))\n\n# this guy actually does the get and causes the error\ntray.AddModule(\"Get\", \"getter\")(\n (\"Keys\", [\"mutineer\"])\n )\n\ntray.AddModule(\"Dump\",\"dump\")\n\ntry:\n tray.Execute()\n \n\nexcept:\n sys.exit(0)\nelse:\n print(\"***\\n***\\n*** Failure! Script didn't throw as it should have.\\n***\\n***\\n***\\n\")\n sys.exit(1) # ought to die, shouldn't get here",
"reporter": "jbraun",
"cc": "",
"resolution": "fixed",
"_ts": "1553008494085899",
"component": "combo core",
"summary": "[dataio] Undefined behavior in dataio/icetray",
"priority": "major",
"keywords": "bug icetray dataio",
"time": "2018-12-04T16:15:08",
"milestone": "Vernal Equinox 2019",
"owner": "olivas",
"type": "defect"
}
```
|
1.0
|
[dataio] Undefined behavior in dataio/icetray (Trac #2216) - Migrated from https://code.icecube.wisc.edu/ticket/2216
```json
{
"status": "closed",
"changetime": "2019-03-19T15:14:54",
"description": "I can elicit undefined behavior in dataio unit tests by the following:\n\n1. Run a_nocompression.py\n2. Run i_adds_mutineer.py\n3. Run j_fatals_reading_mutineer.py with the addition of 'from icecube.test_unregistered import UnregisteredTrack'. I've pasted a single python file below that elicits this behavior.\n\nThe test randomly passes/fails. I can only reproduce this behavior on Mac OSX, and my test setup included Python 2.7.14 and LLVM-9.0.0.\n\n\n#!/usr/bin/env python\n\nfrom I3Tray import *\n\nfrom os.path import expandvars\n\nimport os\nimport sys\n\nfrom icecube import dataclasses \nfrom icecube import phys_services \nfrom icecube import dataio \n\nfrom icecube.test_unregistered import UnregisteredTrack\n\n#\n# This sets up a bunch of paths of files and stuff. Nice to have a\n# real scripting language at one's disposal for this kind of thing.\n#\ntestdata = expandvars(\"$I3_TESTDATA\")\nrunfile = testdata + \"/2007data/2007_I3Only_Run109732_Nch20.i3.gz\"\n\ntray = I3Tray()\n\ntray.AddModule(\"I3Reader\",\n Filename = runfile,\n SkipKeys = [\"I3PfFilterMask\",\"CalibratedFADC\",\"CalibratedATWD\"])\n\n# This file is super old\ntray.AddModule(\"QConverter\")\ntray.AddModule(lambda fr: False) # Drop all existing P-frames\n\n#\n# Make the Q frames into P\n#\ntray.AddModule(\"I3NullSplitter\")\n\n#\n# And this is the magic writer. We will make it work harder later.\n#\ntray.AddModule(\"I3Writer\", filename = \"pass1.i3\")\n\n#\n# Here we specify how many frames to process, or we can omit the\n# argument to Execute() and the the tray will run until a module tells\n# it to stop (via RequestSuspension()). 
We'll do a few frames so\n# there's a chunk of data involved.\n#\ntray.Execute(15)\n\ntray = I3Tray()\ntray.AddModule(\"I3Reader\", Filename=\"pass1.i3\")\ntray.AddModule(lambda frame : \\\n frame.Put(\"mutineer\", UnregisteredTrack()))\ntray.AddModule(\"Dump\")\ntray.AddModule(\"I3Writer\", Filename = \"hasmutineer.i3.gz\")\n\ntray.Execute()\n\ntray = I3Tray()\n\n# by default the reader will log_fatal if it can't deserialize something\ntray.AddModule(\"I3Reader\",\"reader\", Filename=expandvars(\"hasmutineer.i3.gz\"))\n\n# this guy actually does the get and causes the error\ntray.AddModule(\"Get\", \"getter\")(\n (\"Keys\", [\"mutineer\"])\n )\n\ntray.AddModule(\"Dump\",\"dump\")\n\ntry:\n tray.Execute()\n \n\nexcept:\n sys.exit(0)\nelse:\n print(\"***\\n***\\n*** Failure! Script didn't throw as it should have.\\n***\\n***\\n***\\n\")\n sys.exit(1) # ought to die, shouldn't get here",
"reporter": "jbraun",
"cc": "",
"resolution": "fixed",
"_ts": "1553008494085899",
"component": "combo core",
"summary": "[dataio] Undefined behavior in dataio/icetray",
"priority": "major",
"keywords": "bug icetray dataio",
"time": "2018-12-04T16:15:08",
"milestone": "Vernal Equinox 2019",
"owner": "olivas",
"type": "defect"
}
```
|
defect
|
undefined behavior in dataio icetray trac migrated from json status closed changetime description i can elicit undefined behavior in dataio unit tests by the following n run a nocompression py run i adds mutineer py run j fatals reading mutineer py with the addition of from icecube test unregistered import unregisteredtrack i ve pasted a single python file below that elicits this behavior n nthe test randomly passes fails i can only reproduce this behavior on mac osx and my test setup included python and llvm n n n usr bin env python n nfrom import n nfrom os path import expandvars n nimport os nimport sys n nfrom icecube import dataclasses nfrom icecube import phys services nfrom icecube import dataio n nfrom icecube test unregistered import unregisteredtrack n n n this sets up a bunch of paths of files and stuff nice to have a n real scripting language at one s disposal for this kind of thing n ntestdata expandvars testdata nrunfile testdata gz n ntray n ntray addmodule n filename runfile n skipkeys n n this file is super old ntray addmodule qconverter ntray addmodule lambda fr false drop all existing p frames n n n make the q frames into p n ntray addmodule n n n and this is the magic writer we will make it work harder later n ntray addmodule filename n n n here we specify how many frames to process or we can omit the n argument to execute and the the tray will run until a module tells n it to stop via requestsuspension we ll do a few frames so n there s a chunk of data involved n ntray execute n ntray ntray addmodule filename ntray addmodule lambda frame n frame put mutineer unregisteredtrack ntray addmodule dump ntray addmodule filename hasmutineer gz n ntray execute n ntray n n by default the reader will log fatal if it can t deserialize something ntray addmodule reader filename expandvars hasmutineer gz n n this guy actually does the get and causes the error ntray addmodule get getter n keys n n ntray addmodule dump dump n ntry n tray execute n n nexcept n 
sys exit nelse n print n n failure script didn t throw as it should have n n n n n sys exit ought to die shouldn t get here reporter jbraun cc resolution fixed ts component combo core summary undefined behavior in dataio icetray priority major keywords bug icetray dataio time milestone vernal equinox owner olivas type defect
| 1
|
29,501
| 14,146,616,225
|
IssuesEvent
|
2020-11-10 19:29:49
|
ampproject/amphtml
|
https://api.github.com/repos/ampproject/amphtml
|
closed
|
CLS of 1.0 caused by "async" for https://cdn.ampproject.org/v0.js
|
Type: Discussion/Question Type: Performance WG: performance
|
## What's the issue?
Async for https://cdn.ampproject.org/v0.js is causing 1 Cumulative Layout Shift that penalizes our amp-only site with a CLS of magnitude 1.0.
Removing async removes the CLS issue but the "async" for v0.js is mandatory for amp.
## How do we reproduce the issue?
Step 0. Make a boilerplate amp-html: https://amp.dev/boilerplate/
Step 1. Add some "lorem ipsum".
Step 2. Inspect performance with chrome, opera etc. CLS issue appears.
Step 3. Remove async from "<script async src="https://cdn.ampproject.org/v0.js"></script>"
Step 4. Inspect performance again and CLS issue is not there.
Step 5. Add async back on to double-check and the CLS is back.
The result is: 1 single CLS but with a magnitude of 1.0.
I've taken screen-shots of these steps.
## Browsers/Device?
Tested on chrome, opera, windows 10. On chrome I've inspected in both mobile- and desktop-mode.
## Which AMP version is affected?
Version 2010132225003
I got my first CLS error flag in Core Web Vitals at 20-10-2020 (2 weeks ago).





|
True
|
CLS of 1.0 caused by "async" for https://cdn.ampproject.org/v0.js - ## What's the issue?
Async for https://cdn.ampproject.org/v0.js is causing 1 Cumulative Layout Shift that penalizes our amp-only site with a CLS of magnitude 1.0.
Removing async removes the CLS issue but the "async" for v0.js is mandatory for amp.
## How do we reproduce the issue?
Step 0. Make a boilerplate amp-html: https://amp.dev/boilerplate/
Step 1. Add some "lorem ipsum".
Step 2. Inspect performance with chrome, opera etc. CLS issue appears.
Step 3. Remove async from "<script async src="https://cdn.ampproject.org/v0.js"></script>"
Step 4. Inspect performance again and CLS issue is not there.
Step 5. Add async back on to double-check and the CLS is back.
The result is: 1 single CLS but with a magnitude of 1.0.
I've taken screen-shots of these steps.
## Browsers/Device?
Tested on chrome, opera, windows 10. On chrome I've inspected in both mobile- and desktop-mode.
## Which AMP version is affected?
Version 2010132225003
I got my first CLS error flag in Core Web Vitals at 20-10-2020 (2 weeks ago).





|
non_defect
|
cls of caused by async for what s the issue async for is causing cumulative layout shift that penalizes our amp only site with a cls of magnitude removing async removes the cls issue but the async for js is mandatory for amp how do we reproduce the issue step make a boilerplate amp html step add some lorem ipsum step inspect performance with chrome opera etc cls issue appears step remove async from script async src step inspect performance again and cls issue is not there step add async back on to double check and the cls is back the result is single cls but with a magnitude of i ve taken screen shots of these steps browsers device tested on chrome opera windows on chrome i ve inspected in both mobile and desktop mode which amp version is affected version i got my first cls error flag in core web vitals at weeks ago
| 0
|
2,099
| 2,603,976,316
|
IssuesEvent
|
2015-02-24 19:01:35
|
chrsmith/nishazi6
|
https://api.github.com/repos/chrsmith/nishazi6
|
opened
|
沈阳阴茎上长肉刺怎么回事
|
auto-migrated Priority-Medium Type-Defect
|
```
沈阳阴茎上长肉刺怎么回事〓沈陽軍區政治部醫院性病〓TEL��
�024-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治�
��。位于沈陽市沈河區二緯路32號。是一所與新中國同建立共�
��煌的歷史悠久、設備精良、技術權威、專家云集,是預防、
保健、醫療、科研康復為一體的綜合性醫院。是國家首批公��
�甲等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大�
��、東南大學等知名高等院校的教學醫院。曾被中國人民解放
軍空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立��
�體二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:21
|
1.0
|
沈阳阴茎上长肉刺怎么回事 - ```
沈阳阴茎上长肉刺怎么回事〓沈陽軍區政治部醫院性病〓TEL��
�024-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治�
��。位于沈陽市沈河區二緯路32號。是一所與新中國同建立共�
��煌的歷史悠久、設備精良、技術權威、專家云集,是預防、
保健、醫療、科研康復為一體的綜合性醫院。是國家首批公��
�甲等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大�
��、東南大學等知名高等院校的教學醫院。曾被中國人民解放
軍空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立��
�體二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:21
|
defect
|
沈阳阴茎上长肉刺怎么回事 沈阳阴茎上长肉刺怎么回事〓沈陽軍區政治部醫院性病〓tel�� � 〓 , � ��。 。是一所與新中國同建立共� ��煌的歷史悠久、設備精良、技術權威、專家云集,是預防、 保健、醫療、科研康復為一體的綜合性醫院。是國家首批公�� �甲等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大� ��、東南大學等知名高等院校的教學醫院。曾被中國人民解放 軍空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立�� �體二等功。 original issue reported on code google com by gmail com on jun at
| 1
|
17,621
| 3,012,772,026
|
IssuesEvent
|
2015-07-29 02:25:49
|
yawlfoundation/yawl
|
https://api.github.com/repos/yawlfoundation/yawl
|
closed
|
Schema validation problem at runtime
|
auto-migrated Category-Component-ResService Priority-Medium Type-Defect
|
```
Running the attached example and filling in the fields of the presented
form yields the error message attached.
```
Original issue reported on code.google.com by `arthurte...@gmail.com` on 18 Aug 2008 at 8:27
Attachments:
* [net57.xml](https://storage.googleapis.com/google-code-attachments/yawl/issue-105/comment-0/net57.xml)
* [net57.ywl](https://storage.googleapis.com/google-code-attachments/yawl/issue-105/comment-0/net57.ywl)
* [Screenshot.rtf](https://storage.googleapis.com/google-code-attachments/yawl/issue-105/comment-0/Screenshot.rtf)
|
1.0
|
Schema validation problem at runtime - ```
Running the attached example and filling in the fields of the presented
form yields the error message attached.
```
Original issue reported on code.google.com by `arthurte...@gmail.com` on 18 Aug 2008 at 8:27
Attachments:
* [net57.xml](https://storage.googleapis.com/google-code-attachments/yawl/issue-105/comment-0/net57.xml)
* [net57.ywl](https://storage.googleapis.com/google-code-attachments/yawl/issue-105/comment-0/net57.ywl)
* [Screenshot.rtf](https://storage.googleapis.com/google-code-attachments/yawl/issue-105/comment-0/Screenshot.rtf)
|
defect
|
schema validation problem at runtime running the attached example and filling in the fields of the presented form yields the error message attached original issue reported on code google com by arthurte gmail com on aug at attachments
| 1
|
43,129
| 11,494,661,174
|
IssuesEvent
|
2020-02-12 02:18:38
|
NREL/EnergyPlus
|
https://api.github.com/repos/NREL/EnergyPlus
|
closed
|
Crash when using a WaterHeater:HeatPump and also a simple (standalone) WaterHeaterMixed
|
Defect
|
Issue overview
--------------
I get a hard crash, in calcStandardRatings. Will start by creating a proper unit test to isolate the bug, then fix it.
### Details
Some additional details for this issue (if relevant):
- Platform (Operating system, version)
- Version of EnergyPlus (if using an intermediate build, include SHA)
- Unmethours link or helpdesk ticket number
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
|
1.0
|
Crash when using a WaterHeater:HeatPump and also a simple (standalone) WaterHeaterMixed - Issue overview
--------------
I get a hard crash, in calcStandardRatings. Will start by creating a proper unit test to isolate the bug, then fix it.
### Details
Some additional details for this issue (if relevant):
- Platform (Operating system, version)
- Version of EnergyPlus (if using an intermediate build, include SHA)
- Unmethours link or helpdesk ticket number
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
|
defect
|
crash when using a waterheater heatpump and also a simple standalone waterheatermixed issue overview i get a hard crash in calcstandardratings will start by creating a proper unit test to isolate the bug then fix it details some additional details for this issue if relevant platform operating system version version of energyplus if using an intermediate build include sha unmethours link or helpdesk ticket number checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect
| 1
|
101,350
| 31,035,385,581
|
IssuesEvent
|
2023-08-10 14:55:44
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
gen_isr_tables: unable to support riscv plic interrupt mapping
|
bug priority: medium area: Kernel area: Build System area: RISCV
|
**Describe the bug**
Currently, in a multi-level (nested) interrupt controller arrangement, the maximum number of IRQs supported for a given level is silently hard-coded with a few "magic numbers".
https://github.com/zephyrproject-rtos/zephyr/blob/main/scripts/build/gen_isr_tables.py#L18-L28
The per-level limit set in Python is 255. However, some architectures (RISC-V being one of them) have a standard interrupt controller design that supports far more than 255.
Directly from the RISC-V PLIC specification, this standard interrupt controller supports 1023 IRQs locally (so significantly more than 255).
https://github.com/riscv/riscv-plic-spec/blob/master/riscv-plic.adoc#memory-map
Moreover, there are many boards in Zephyr which already include a PLIC in their Devicetree.
```shell
# likely some duplication, but there are many plics..
grep "plic" $(find boards soc arch -name '*.dts' -o -name '*.dtsi') | wc -l
157
```
Namely, `adp_xc7k_ae350` ([andes_v5_ae350.dtsi](https://github.com/zephyrproject-rtos/zephyr/blob/main/dts/riscv/andes/andes_v5_ae350.dtsi#L179))
This is a bit of "a big deal" because Zephyr is unable to accommodate a fundamental feature of most RISC-V designs. Luckily it is 95% a configuration and build issue.
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using? `adp_xc7k_ae350`
- What have you tried to diagnose or workaround this issue? added to the testsuite and found the root cause of the error
- Is this a regression? Likely not.
**To Reproduce**
Add the test case ~~in the linked PR~~ linked at the bottom of this issue. Optionally, make the modifications mentioned below to increase verbosity of `gen_isr_tables.py`.
1. `west build -p auto -b adp_xc7k_ae350 -T tests/kernel/gen_isr_table/arch.interrupt.gen_isr_table.riscv64.multi_level`
2. See error
**Expected behavior**
1. The build to succeed without issue.
2. The interrupt is routed and available to SW at the given IRQ
**Impact**
It's actually a bit of a showstopper.
**Logs and console output**
```shell
west build -p auto -b adp_xc7k_ae350 -T tests/kernel/gen_isr_table/arch.interrupt.gen_isr_table.riscv64.multi_level
...
gen_isr_tables.py: 2nd level offsets: [11]
gen_isr_tables.py: (64, 0)
gen_isr_tables.py: Configured interrupt routing
gen_isr_tables.py: handler irq flags param
gen_isr_tables.py: --------------------------
gen_isr_tables.py: 0xa72 1 0 0xb01dface
gen_isr_tables.py: 0x2618 11 0 0x0
gen_isr_tables.py: 0x2c50 2059 0 0x5260
gen_isr_tables.py: 0x3298 7 0 0x0
gen_isr_tables.py: offset is 0
gen_isr_tables.py: num_vectors is 64
gen_isr_tables.py: IRQ = 0x1
gen_isr_tables.py: IRQ_level = 1
gen_isr_tables.py: IRQ_Indx = 1
gen_isr_tables.py: IRQ_Pos = 1
gen_isr_tables.py: IRQ = 0xb
gen_isr_tables.py: IRQ_level = 1
gen_isr_tables.py: IRQ_Indx = 11
gen_isr_tables.py: IRQ_Pos = 11
gen_isr_tables.py: IRQ = 0x80b
gen_isr_tables.py: IRQ_level = 2
gen_isr_tables.py: IRQ_Indx = 8
gen_isr_tables.py: IRQ_Pos = 19
gen_isr_tables.py: IRQ = 0x7
gen_isr_tables.py: IRQ_level = 1
gen_isr_tables.py: IRQ_Indx = 7
gen_isr_tables.py: IRQ_Pos = 7
gen_isr_tables.py: error: MAX_IRQ_PER_AGGREGATOR: 1023 does not fit in the bitmask 0xff
ninja: build stopped: subcommand failed.
```
**Environment (please complete the following information):**
- OS: (e.g. Linux, MacOS, Windows): Linux
- Toolchain (e.g Zephyr SDK, ...): Zephyr SDK v0.16.1
- Commit SHA or Version used: b0e9cb3f480ed475da584d7179c77e0ff19aa80b
**Additional context**
Test case here
https://bit.ly/45i4DL2
|
1.0
|
gen_isr_tables: unable to support riscv plic interrupt mapping - **Describe the bug**
Currently, in a multi-level (nested) interrupt controller arrangement, the maximum number of IRQs supported for a given level is silently hard-coded with a few "magic numbers".
https://github.com/zephyrproject-rtos/zephyr/blob/main/scripts/build/gen_isr_tables.py#L18-L28
The per-level limit set in Python is 255. However, some architectures (RISC-V being one of them) have a standard interrupt controller design that supports far more than 255.
Directly from the RISC-V PLIC specification, this standard interrupt controller supports 1023 IRQs locally (so significantly more than 255).
https://github.com/riscv/riscv-plic-spec/blob/master/riscv-plic.adoc#memory-map
Moreover, there are many boards in Zephyr which already include a PLIC in their Devicetree.
```shell
# likely some duplication, but there are many plics..
grep "plic" $(find boards soc arch -name '*.dts' -o -name '*.dtsi') | wc -l
157
```
Namely, `adp_xc7k_ae350` ([andes_v5_ae350.dtsi](https://github.com/zephyrproject-rtos/zephyr/blob/main/dts/riscv/andes/andes_v5_ae350.dtsi#L179))
This is a bit of "a big deal" because Zephyr is unable to accommodate a fundamental feature of most RISC-V designs. Luckily it is 95% a configuration and build issue.
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using? `adp_xc7k_ae350`
- What have you tried to diagnose or workaround this issue? added to the testsuite and found the root cause of the error
- Is this a regression? Likely not.
**To Reproduce**
Add the test case ~~in the linked PR~~ linked at the bottom of this issue. Optionally, make the modifications mentioned below to increase verbosity of `gen_isr_tables.py`.
1. `west build -p auto -b adp_xc7k_ae350 -T tests/kernel/gen_isr_table/arch.interrupt.gen_isr_table.riscv64.multi_level`
2. See error
**Expected behavior**
1. The build to succeed without issue.
2. The interrupt is routed and available to SW at the given IRQ
**Impact**
It's actually a bit of a showstopper.
**Logs and console output**
```shell
west build -p auto -b adp_xc7k_ae350 -T tests/kernel/gen_isr_table/arch.interrupt.gen_isr_table.riscv64.multi_level
...
gen_isr_tables.py: 2nd level offsets: [11]
gen_isr_tables.py: (64, 0)
gen_isr_tables.py: Configured interrupt routing
gen_isr_tables.py: handler irq flags param
gen_isr_tables.py: --------------------------
gen_isr_tables.py: 0xa72 1 0 0xb01dface
gen_isr_tables.py: 0x2618 11 0 0x0
gen_isr_tables.py: 0x2c50 2059 0 0x5260
gen_isr_tables.py: 0x3298 7 0 0x0
gen_isr_tables.py: offset is 0
gen_isr_tables.py: num_vectors is 64
gen_isr_tables.py: IRQ = 0x1
gen_isr_tables.py: IRQ_level = 1
gen_isr_tables.py: IRQ_Indx = 1
gen_isr_tables.py: IRQ_Pos = 1
gen_isr_tables.py: IRQ = 0xb
gen_isr_tables.py: IRQ_level = 1
gen_isr_tables.py: IRQ_Indx = 11
gen_isr_tables.py: IRQ_Pos = 11
gen_isr_tables.py: IRQ = 0x80b
gen_isr_tables.py: IRQ_level = 2
gen_isr_tables.py: IRQ_Indx = 8
gen_isr_tables.py: IRQ_Pos = 19
gen_isr_tables.py: IRQ = 0x7
gen_isr_tables.py: IRQ_level = 1
gen_isr_tables.py: IRQ_Indx = 7
gen_isr_tables.py: IRQ_Pos = 7
gen_isr_tables.py: error: MAX_IRQ_PER_AGGREGATOR: 1023 does not fit in the bitmask 0xff
ninja: build stopped: subcommand failed.
```
**Environment (please complete the following information):**
- OS: (e.g. Linux, MacOS, Windows): Linux
- Toolchain (e.g Zephyr SDK, ...): Zephyr SDK v0.16.1
- Commit SHA or Version used: b0e9cb3f480ed475da584d7179c77e0ff19aa80b
**Additional context**
Test case here
https://bit.ly/45i4DL2
|
non_defect
|
gen isr tables unable to support riscv plic interrupt mapping describe the bug currently in a multi level nested interrupt controller arrangement the maximum number of irqs supported for a given level is silently hard coded with a few magic numbers the per level limit set in python is however some architectures risc v being one of them have a standard interrupt controller design that supports far more than directly from the risc v plic specification this standard interrupt controller supports irqs locally so significantly more than moreover there are many boards in zephyr which already include a plic in their devicetree shell likely some duplication but there are many plics grep plic find boards soc arch name dts o name dtsi wc l namely adp this is a bit of a big deal because zephyr is unable to accomadate a fundamental feature of most risc v designs luckily it is a configuration and build issue please also mention any information which could help others to understand the problem you re facing what target platform are you using adp what have you tried to diagnose or workaround this issue added to the testsuite and found the root cause of the error is this a regression likely not to reproduce add the test case in the linked pr linked at the bottom of this issue optionally make the modifications mentioned below to increase verbosity of gen isr tables py west build p auto b adp t tests kernel gen isr table arch interrupt gen isr table multi level see error expected behavior the build to succeed without issue the interrupt is routed and available to sw at the given irq impact it s actually a bit of a showstopper logs and console output shell west build p auto b adp t tests kernel gen isr table arch interrupt gen isr table multi level gen isr tables py level offsets gen isr tables py gen isr tables py configured interrupt routing gen isr tables py handler irq flags param gen isr tables py gen isr tables py gen isr tables py gen isr tables py gen isr tables py gen isr 
tables py offset is gen isr tables py num vectors is gen isr tables py irq gen isr tables py irq level gen isr tables py irq indx gen isr tables py irq pos gen isr tables py irq gen isr tables py irq level gen isr tables py irq indx gen isr tables py irq pos gen isr tables py irq gen isr tables py irq level gen isr tables py irq indx gen isr tables py irq pos gen isr tables py irq gen isr tables py irq level gen isr tables py irq indx gen isr tables py irq pos gen isr tables py error max irq per aggregator does not fit in the bitmask ninja build stopped subcommand failed environment please complete the following information os e g linux macos windows linux toolchain e g zephyr sdk zephyr sdk commit sha or version used additional context test case here
| 0
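The bitmask overflow reported above can be illustrated with a short, self-contained sketch. The field widths and function name here are illustrative assumptions, not Zephyr's exact encoding: the point is that a position field masked to 8 bits cannot represent the PLIC's 1023 local IRQs.

```python
def pack_irq(level2_irq, level1_pos, pos_bits=8):
    """Pack a 2nd-level IRQ into a multi-level IRQ number.

    The low `pos_bits` bits hold the IRQ's position within its
    aggregator; the parent interrupt's position sits above them.
    Illustrative only: gen_isr_tables.py's real layout differs,
    but the overflow check is the same idea.
    """
    mask = (1 << pos_bits) - 1
    if level2_irq > mask:
        raise ValueError(
            f"IRQ {level2_irq} does not fit in the bitmask {hex(mask)}")
    return (level1_pos << pos_bits) | level2_irq


pack_irq(200, 11)       # fits in 8 bits
try:
    pack_irq(1023, 11)  # RISC-V PLIC upper bound -> overflow
except ValueError as err:
    print(err)
pack_irq(1023, 11, pos_bits=10)  # a 10-bit field would suffice
```

Widening the per-level field (or making it configurable) is the direction the issue points at; this sketch only demonstrates why 255 is the current ceiling.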
|
21,874
| 3,574,140,303
|
IssuesEvent
|
2016-01-27 10:25:58
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
Parallel execution of MapStore#store method for the same key triggered by IMap#flush
|
Team: Core Type: Defect
|
Hi,
this issue exists in Hazelcast 3.3-EA and also a build of the 3.3 development branch from August 7, 2014. It is related to issue #2128.
If you have a map with write-behind and a map store configured (eviction is not needed), and you call the flush method in the IMap, the map store's store method can be called concurrently for the same key, namely for those keys which are in the write-behind queue and then forcibly stored by the flush. This is because the flush operation storing all entries in the write-behind queue seems to be executed in the operation thread, while the periodic processing of the write-behind queue is done by an executor service defined in the WriteBehindQueueManager.
The following piece of code is a unit test to reproduce the issue:
```java
public class TestMapStore16 extends TestCase {
private static final String mapName = "testMap" + TestMapStore15.class.getSimpleName();
private static final int writeDelaySeconds = 1;
@Override
protected void setUp() throws Exception {
// configure logging
if (!TestHazelcast.loggingInitialized) {
TestHazelcast.loggingInitialized = true;
BasicConfigurator.configure();
}
}
public void testNoStoreConcurrency() throws Exception {
// create shared hazelcast instance config
final Config config = new XmlConfigBuilder().build();
config.setProperty("hazelcast.logging.type", "log4j");
// create shared map store implementation
SlowConcurrencyCheckingMapStore store = new SlowConcurrencyCheckingMapStore();
// configure map store
MapStoreConfig mapStoreConfig = new MapStoreConfig();
mapStoreConfig.setEnabled(true);
mapStoreConfig.setWriteDelaySeconds(writeDelaySeconds);
mapStoreConfig.setClassName(null);
mapStoreConfig.setImplementation(store);
MapConfig mapConfig = config.getMapConfig(mapName);
mapConfig.setMapStoreConfig(mapStoreConfig);
// start hazelcast instance
HazelcastInstance hcInstance = Hazelcast.newHazelcastInstance(config);
IMap<String, String> testMap = hcInstance.getMap(mapName);
// This will trigger a write-behind store in roughly writeDelaySeconds, the store itself is artificially delayed to take 10 seconds
testMap.put("key", "value");
// Wait until the store operation has started
Thread.sleep((writeDelaySeconds + 2) * 1000);
// Flush the map, causing the not yet stored entries to be stored
testMap.flush();
// Make sure that the store triggered by the flush did not overlap with a write-behind call to store for the same key
assertEquals("There were concurrent executions of store for the same key", 0, store.getConcurrentStoreCount());
}
}
```
It relies on the following dummy-store:
```java
/**
* Map store that sleeps for 10 seconds in the store implementation and counts the number of
* concurrently executed stores for the same key.
*/
public class SlowConcurrencyCheckingMapStore implements MapStore<String, String> {
private static final long SLEEP_TIME = 10000;
private ConcurrentHashMap<String, String> store = new ConcurrentHashMap<String, String>();
private Set<String> activeKeys = Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());
private AtomicInteger concurrentStoreCount = new AtomicInteger(0);
public int getConcurrentStoreCount() {
return concurrentStoreCount.get();
}
@Override
public void store(String key, String value) {
boolean added = activeKeys.add(key);
if (!added) concurrentStoreCount.incrementAndGet();
try {
try {
Thread.sleep(SLEEP_TIME);
} catch (InterruptedException e) {
// ignore
}
store.put(key, value);
} finally {
if (added) activeKeys.remove(key);
}
}
@Override
public void storeAll(Map<String, String> map) {
for (Entry<String, String> entry : map.entrySet()) {
store(entry.getKey(), entry.getValue());
}
}
@Override
public String load(String key) {
return store.get(key);
}
@Override
public Map<String, String> loadAll(Collection<String> keys) {
Map<String, String> result = new HashMap<String, String>();
for (String key : keys) {
result.put(key, store.get(key));
}
return result;
}
@Override
public Set<String> loadAllKeys() {
return store.keySet();
}
@Override
public void delete(String key) {
store.remove(key);
}
@Override
public void deleteAll(Collection<String> keys) {
for (String key : keys) {
store.remove(key);
}
}
}
```
A workaround is to ensure mutual exclusion in the MapStore implementation. However, I would expect the mutual exclusion guarantee from Hazelcast.
Cheers,
Andreas
|
1.0
|
Parallel execution of MapStore#store method for the same key triggered by IMap#flush - Hi,
this issue exists in Hazelcast 3.3-EA and also a build of the 3.3 development branch from August 7, 2014. It is related to issue #2128.
If you have a map with write-behind and a map store configured (eviction is not needed), and you call the flush method in the IMap, the map store's store method can be called concurrently for the same key, namely for those keys which are in the write-behind queue and then forcibly stored by the flush. This is because the flush operation storing all entries in the write-behind queue seems to be executed in the operation thread, while the periodic processing of the write-behind queue is done by an executor service defined in the WriteBehindQueueManager.
The following piece of code is a unit test to reproduce the issue:
```java
public class TestMapStore16 extends TestCase {
private static final String mapName = "testMap" + TestMapStore15.class.getSimpleName();
private static final int writeDelaySeconds = 1;
@Override
protected void setUp() throws Exception {
// configure logging
if (!TestHazelcast.loggingInitialized) {
TestHazelcast.loggingInitialized = true;
BasicConfigurator.configure();
}
}
public void testNoStoreConcurrency() throws Exception {
// create shared hazelcast instance config
final Config config = new XmlConfigBuilder().build();
config.setProperty("hazelcast.logging.type", "log4j");
// create shared map store implementation
SlowConcurrencyCheckingMapStore store = new SlowConcurrencyCheckingMapStore();
// configure map store
MapStoreConfig mapStoreConfig = new MapStoreConfig();
mapStoreConfig.setEnabled(true);
mapStoreConfig.setWriteDelaySeconds(writeDelaySeconds);
mapStoreConfig.setClassName(null);
mapStoreConfig.setImplementation(store);
MapConfig mapConfig = config.getMapConfig(mapName);
mapConfig.setMapStoreConfig(mapStoreConfig);
// start hazelcast instance
HazelcastInstance hcInstance = Hazelcast.newHazelcastInstance(config);
IMap<String, String> testMap = hcInstance.getMap(mapName);
// This will trigger a write-behind store in roughly writeDelaySeconds, the store itself is artificially delayed to take 10 seconds
testMap.put("key", "value");
// Wait until the store operation has started
Thread.sleep((writeDelaySeconds + 2) * 1000);
// Flush the map, causing the not yet stored entries to be stored
testMap.flush();
// Make sure that the store triggered by the flush did not overlap with a write-behind call to store for the same key
assertEquals("There were concurrent executions of store for the same key", 0, store.getConcurrentStoreCount());
}
}
```
It relies on the following dummy-store:
```java
/**
* Map store that sleeps for 10 seconds in the store implementation and counts the number of
* concurrently executed stores for the same key.
*/
public class SlowConcurrencyCheckingMapStore implements MapStore<String, String> {
private static final long SLEEP_TIME = 10000;
private ConcurrentHashMap<String, String> store = new ConcurrentHashMap<String, String>();
private Set<String> activeKeys = Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());
private AtomicInteger concurrentStoreCount = new AtomicInteger(0);
public int getConcurrentStoreCount() {
return concurrentStoreCount.get();
}
@Override
public void store(String key, String value) {
boolean added = activeKeys.add(key);
if (!added) concurrentStoreCount.incrementAndGet();
try {
try {
Thread.sleep(SLEEP_TIME);
} catch (InterruptedException e) {
// ignore
}
store.put(key, value);
} finally {
if (added) activeKeys.remove(key);
}
}
@Override
public void storeAll(Map<String, String> map) {
for (Entry<String, String> entry : map.entrySet()) {
store(entry.getKey(), entry.getValue());
}
}
@Override
public String load(String key) {
return store.get(key);
}
@Override
public Map<String, String> loadAll(Collection<String> keys) {
Map<String, String> result = new HashMap<String, String>();
for (String key : keys) {
result.put(key, store.get(key));
}
return result;
}
@Override
public Set<String> loadAllKeys() {
return store.keySet();
}
@Override
public void delete(String key) {
store.remove(key);
}
@Override
public void deleteAll(Collection<String> keys) {
for (String key : keys) {
store.remove(key);
}
}
}
```
A workaround is to ensure mutual exclusion in the MapStore implementation. However, I would expect the mutual exclusion guarantee from Hazelcast.
Cheers,
Andreas
|
defect
|
parallel execution of mapstore store method for the same key triggered by imap flush hi this issue exists in hazelcast ea and also a build of the development branch from august it is related to issue if you have a map with write behind and a map store configured eviction is not needed and you call the flush method in the imap the map store s store method can be called concurrently for the same key namely for those keys which are in the write behind queue and then forcibly stored by the flush this is because the flush operation storing all entries in the write behind queue seems to be executed in the operation thread while the periodic processing of the write behind queue is done by an executor service defined in the writebehindqueuemanager the following piece of code is a unit test to reproduce the issue java public class extends testcase private static final string mapname testmap class getsimplename private static final int writedelayseconds override protected void setup throws exception configure logging if testhazelcast logginginitialized testhazelcast logginginitialized true basicconfigurator configure public void testnostoreconcurrency throws exception create shared hazelcast instance config final config config new xmlconfigbuilder build config setproperty hazelcast logging type create shared map store implementation slowconcurrencycheckingmapstore store new slowconcurrencycheckingmapstore configure map store mapstoreconfig mapstoreconfig new mapstoreconfig mapstoreconfig setenabled true mapstoreconfig setwritedelayseconds writedelayseconds mapstoreconfig setclassname null mapstoreconfig setimplementation store mapconfig mapconfig config getmapconfig mapname mapconfig setmapstoreconfig mapstoreconfig start hazelcast instance hazelcastinstance hcinstance hazelcast newhazelcastinstance config imap testmap hcinstance getmap mapname this will trigger a write behind store in roughly writedelayseconds the store itself is artificially delayed to take seconds testmap 
put key value wait until the store operation has started thread sleep writedelayseconds flush the map causing the not yet stored entries to be stored testmap flush make sure that the store triggered by the flush did not overlap with a write behind call to store for the same key assertequals there were concurrent executions of store for the same key store getconcurrentstorecount it relies on the following dummy store java map store that sleeps for seconds in the store implementation and counts the number of concurrently executed stores for the same key public class slowconcurrencycheckingmapstore implements mapstore private static final long sleep time private concurrenthashmap store new concurrenthashmap private set activekeys collections newsetfrommap new concurrenthashmap private atomicinteger concurrentstorecount new atomicinteger public int getconcurrentstorecount return concurrentstorecount get override public void store string key string value boolean added activekeys add key if added concurrentstorecount incrementandget try try thread sleep sleep time catch interruptedexception e ignore store put key value finally if added activekeys remove key override public void storeall map map for entry entry map entryset store entry getkey entry getvalue override public string load string key return store get key override public map loadall collection keys map result new hashmap for string key keys result put key store get key return result override public set loadallkeys return store keyset override public void delete string key store remove key override public void deleteall collection keys for string key keys store remove key a workaround is to ensure mutual exclusion in the mapstore implementation however i would expect the mutual exclusion guarantee from hazelcast cheers andreas
| 1
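The workaround noted at the end of the report (mutual exclusion inside the MapStore implementation) can be sketched generically. This is not Hazelcast API code; it is a plain per-key-lock pattern, with all names made up, showing how a flush-triggered store and a write-behind store of the same key can be forced to take turns.

```python
import threading
from collections import defaultdict


class PerKeyLockingStore:
    """Serializes concurrent store() calls for the same key.

    Stores for different keys still run in parallel; only
    same-key stores are forced to take turns, which is the
    guarantee the report's test expects from the map store.
    """

    def __init__(self):
        self._data = {}
        self._locks = defaultdict(threading.Lock)
        self._table_guard = threading.Lock()  # protects the lock table

    def _lock_for(self, key):
        with self._table_guard:
            return self._locks[key]

    def store(self, key, value):
        with self._lock_for(key):
            # a real backend write (DB, file, ...) would go here
            self._data[key] = value

    def load(self, key):
        return self._data.get(key)
```

Wrapping the slow store body in such a lock would drive the report's concurrent-store counter to 0, at the cost of the flush briefly blocking behind the in-flight write-behind store.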
|
194,072
| 15,396,361,415
|
IssuesEvent
|
2021-03-03 20:31:48
|
njgibbon/fend
|
https://api.github.com/repos/njgibbon/fend
|
closed
|
Research
|
documentation
|
Complete testing and example config on at least 3 repositories and record this in the research section of the docs. For example and illustration.
|
1.0
|
Research - Complete testing and example config on at least 3 repositories and record this in the research section of the docs. For example and illustration.
|
non_defect
|
research complete testing and example config on at least repositories and record this in the research section of the docs for example and illustration
| 0
|
72,559
| 24,182,491,061
|
IssuesEvent
|
2022-09-23 10:11:59
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
ClassNotFoundException for SQL query
|
Type: Defect
|
**Describe the bug**
An exception is thrown if an SQL query is run against a map containing custom classes.
Tested using 5.2.0-SNAPSHOT
```
Encountered an unexpected exception while executing the query:
Failed to deserialize query result value: java.lang.ClassNotFoundException: hazelcast.platform.demos.banking.trademonitor.NasdaqFinancialStatus
com.hazelcast.nio.serialization.HazelcastSerializationException: Failed to deserialize query result value: java.lang.ClassNotFoundException: hazelcast.platform.demos.banking.trademonitor.NasdaqFinancialStatus
at com.hazelcast.sql.impl.SqlRowImpl.getObject0(SqlRowImpl.java:76)
at com.hazelcast.sql.impl.SqlRowImpl.getObject(SqlRowImpl.java:50)
at com.hazelcast.client.console.SqlConsole.printRow(SqlConsole.java:429)
at com.hazelcast.client.console.SqlConsole.executeSqlCmd(SqlConsole.java:205)
at com.hazelcast.client.console.SqlConsole.run(SqlConsole.java:168)
at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47)
at com.hazelcast.client.console.HazelcastCommandLine.runWithHazelcast(HazelcastCommandLine.java:449)
at com.hazelcast.client.console.HazelcastCommandLine.sql(HazelcastCommandLine.java:149)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.base/java.lang.reflect.Method.invoke(Method.java:577)
at picocli.CommandLine.executeUserObject(CommandLine.java:1972)
at picocli.CommandLine.access$1300(CommandLine.java:145)
at picocli.CommandLine$RunAll.recursivelyExecuteUserObject(CommandLine.java:2431)
at picocli.CommandLine$RunAll.recursivelyExecuteUserObject(CommandLine.java:2433)
at picocli.CommandLine$RunAll.handle(CommandLine.java:2428)
at picocli.CommandLine$RunAll.handle(CommandLine.java:2389)
at picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:2172)
at picocli.CommandLine.parseWithHandlers(CommandLine.java:2559)
at com.hazelcast.client.console.HazelcastCommandLine.runCommandLine(HazelcastCommandLine.java:553)
at com.hazelcast.client.console.HazelcastCommandLine.main(HazelcastCommandLine.java:133)
```
**Expected behavior**
No exception to be thrown
**To Reproduce**
Create a custom class - eg. https://github.com/hazelcast/hazelcast-platform-demos/blob/master/banking/trade-monitor/custom-classes/src/main/java/hazelcast/platform/demos/banking/trademonitor/SymbolInfo.java
Insert it into a map
Run `SELECT * FROM map` from hz-cli
**Additional context**
Superficially this is user-error. The classes aren't on the classpath of hz-cli.
However, since the SQL runs serverside and returns standard columns - text and numerics - deserialization should run serverside (which has the classes).
|
1.0
|
ClassNotFoundException for SQL query - **Describe the bug**
An exception is thrown if an SQL query is run against a map containing custom classes.
Tested using 5.2.0-SNAPSHOT
```
Encountered an unexpected exception while executing the query:
Failed to deserialize query result value: java.lang.ClassNotFoundException: hazelcast.platform.demos.banking.trademonitor.NasdaqFinancialStatus
com.hazelcast.nio.serialization.HazelcastSerializationException: Failed to deserialize query result value: java.lang.ClassNotFoundException: hazelcast.platform.demos.banking.trademonitor.NasdaqFinancialStatus
at com.hazelcast.sql.impl.SqlRowImpl.getObject0(SqlRowImpl.java:76)
at com.hazelcast.sql.impl.SqlRowImpl.getObject(SqlRowImpl.java:50)
at com.hazelcast.client.console.SqlConsole.printRow(SqlConsole.java:429)
at com.hazelcast.client.console.SqlConsole.executeSqlCmd(SqlConsole.java:205)
at com.hazelcast.client.console.SqlConsole.run(SqlConsole.java:168)
at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47)
at com.hazelcast.client.console.HazelcastCommandLine.runWithHazelcast(HazelcastCommandLine.java:449)
at com.hazelcast.client.console.HazelcastCommandLine.sql(HazelcastCommandLine.java:149)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.base/java.lang.reflect.Method.invoke(Method.java:577)
at picocli.CommandLine.executeUserObject(CommandLine.java:1972)
at picocli.CommandLine.access$1300(CommandLine.java:145)
at picocli.CommandLine$RunAll.recursivelyExecuteUserObject(CommandLine.java:2431)
at picocli.CommandLine$RunAll.recursivelyExecuteUserObject(CommandLine.java:2433)
at picocli.CommandLine$RunAll.handle(CommandLine.java:2428)
at picocli.CommandLine$RunAll.handle(CommandLine.java:2389)
at picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:2172)
at picocli.CommandLine.parseWithHandlers(CommandLine.java:2559)
at com.hazelcast.client.console.HazelcastCommandLine.runCommandLine(HazelcastCommandLine.java:553)
at com.hazelcast.client.console.HazelcastCommandLine.main(HazelcastCommandLine.java:133)
```
**Expected behavior**
No exception to be thrown
**To Reproduce**
Create a custom class - eg. https://github.com/hazelcast/hazelcast-platform-demos/blob/master/banking/trade-monitor/custom-classes/src/main/java/hazelcast/platform/demos/banking/trademonitor/SymbolInfo.java
Insert it into a map
Run `SELECT * FROM map` from hz-cli
**Additional context**
Superficially this is user-error. The classes aren't on the classpath of hz-cli.
However, since the SQL runs serverside and returns standard columns - text and numerics - deserialization should run serverside (which has the classes).
|
defect
|
classnotfoundexception for sql query describe the bug an exception is thrown if an sql query is run against a map containing custom classes tested using snapshot encountered an unexpected exception while executing the query failed to deserialize query result value java lang classnotfoundexception hazelcast platform demos banking trademonitor nasdaqfinancialstatus com hazelcast nio serialization hazelcastserializationexception failed to deserialize query result value java lang classnotfoundexception hazelcast platform demos banking trademonitor nasdaqfinancialstatus at com hazelcast sql impl sqlrowimpl sqlrowimpl java at com hazelcast sql impl sqlrowimpl getobject sqlrowimpl java at com hazelcast client console sqlconsole printrow sqlconsole java at com hazelcast client console sqlconsole executesqlcmd sqlconsole java at com hazelcast client console sqlconsole run sqlconsole java at com hazelcast function consumerex accept consumerex java at com hazelcast client console hazelcastcommandline runwithhazelcast hazelcastcommandline java at com hazelcast client console hazelcastcommandline sql hazelcastcommandline java at java base jdk internal reflect directmethodhandleaccessor invoke directmethodhandleaccessor java at java base java lang reflect method invoke method java at picocli commandline executeuserobject commandline java at picocli commandline access commandline java at picocli commandline runall recursivelyexecuteuserobject commandline java at picocli commandline runall recursivelyexecuteuserobject commandline java at picocli commandline runall handle commandline java at picocli commandline runall handle commandline java at picocli commandline abstractparseresulthandler handleparseresult commandline java at picocli commandline parsewithhandlers commandline java at com hazelcast client console hazelcastcommandline runcommandline hazelcastcommandline java at com hazelcast client console hazelcastcommandline main hazelcastcommandline java expected behavior no 
exception to be thrown to reproduce create a custom class eg insert it into a map run select from map from hz cli additional context superficially this is user error the classes aren t on the classpath of hz cli however since the sql runs serverside and returns standard columns text and numerics deserialization should run serverside which has the classes
| 1
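The root cause (the deserializing side must have the value's class available) has a close analogue in Python's pickle, sketched below. The module and class names are invented for illustration; this is not Hazelcast code.

```python
import pickle
import sys
import types

# Fabricate a module with a custom class, mimicking the server's
# classpath, then pickle an instance of it.
mod = types.ModuleType("trademonitor")


class SymbolInfo:  # hypothetical custom value class
    def __init__(self, symbol):
        self.symbol = symbol


SymbolInfo.__module__ = "trademonitor"
mod.SymbolInfo = SymbolInfo
sys.modules["trademonitor"] = mod

blob = pickle.dumps(mod.SymbolInfo("AAPL"))
assert pickle.loads(blob).symbol == "AAPL"  # class available: OK

# Drop the module, mimicking a client (hz-cli) without the class.
del sys.modules["trademonitor"]
try:
    pickle.loads(blob)  # class lookup now fails, like the CLI's
except Exception as exc:  # analogous to ClassNotFoundException
    print(type(exc).__name__)
```

The report's suggestion follows the same logic: if the columns are deserialized server side, where the classes exist, the client never needs them.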
|
227,841
| 7,543,535,567
|
IssuesEvent
|
2018-04-17 15:45:51
|
mandeep/sublime-text-conda
|
https://api.github.com/repos/mandeep/sublime-text-conda
|
closed
|
Renaming caused macOS 10.13 to not load old user configuration
|
Priority: High Status: Completed Type: Bug
|
The following line caused macOS 10.13 not to load the old user configuration file as Python's `open()` is case sensitive.
https://github.com/mandeep/sublime-text-conda/blob/44095806ae1f6b12e8f8a0f5f50eeb22ea355864/commands.py#L20
Since HFS+ and APFS by default are case-insensitive, deleting and recreating the file will solve the problem.
|
1.0
|
Renaming caused macOS 10.13 to not load old user configuration - The following line caused macOS 10.13 not to load the old user configuration file as Python's `open()` is case sensitive.
https://github.com/mandeep/sublime-text-conda/blob/44095806ae1f6b12e8f8a0f5f50eeb22ea355864/commands.py#L20
Since HFS+ and APFS by default are case-insensitive, deleting and recreating the file will solve the problem.
|
non_defect
|
renaming caused macos to not load old user configuration the following line caused macos not to load the old user configuration file as python s open is case sensitive since hfs and apfs by default are case insensitive deleting and recreating the file will solve the problem
| 0
|
32,685
| 13,912,126,030
|
IssuesEvent
|
2020-10-20 18:23:59
|
an0rak-dev/gif
|
https://api.github.com/repos/an0rak-dev/gif
|
opened
|
Add a new Gif to a User's library
|
enhancement service:image
|
From the Home screen, the user will have the ability to open a form where she/he can upload a Gif in her/his library.
Here is the required information:
* the name of the gif
* the gif itself (uploaded from the user's computer)
|
1.0
|
Add a new Gif to a User's library - From the Home screen, the user will have the ability to open a form where she/he can upload a Gif in her/his library.
Here is the required information:
* the name of the gif
* the gif itself (uploaded from the user's computer)
|
non_defect
|
add a new gif to a user s library from the home screen the user will have the ability to open a form where she he can upload a gif in her his library here is the required information the name of the gif the gif itself uploaded from the user s computer
| 0
|
25,473
| 2,683,810,059
|
IssuesEvent
|
2015-03-28 10:30:59
|
ConEmu/old-issues
|
https://api.github.com/repos/ConEmu/old-issues
|
closed
|
Emenu + Mouse RB
|
2–5 stars bug imported Priority-Medium
|
_From [dreggy...@gmail.com](https://code.google.com/u/116951980404761225403/) on July 01, 2009 05:53:45_
OS version: Win XP SP2 (2002)
FAR version: Far 2.0 (build 1013)
ConEmu version: ConEmu .Maximus5.090628b.7z
Emenu version: drkns 19.06.2009 16:28:22 +0200 - build 31
When using the macro
[reg]
REGEDIT4
[HKEY_CURRENT_USER\Software\Far2\KeyMacros\Shell\MsRClick]
"Sequence"="MsLClick $MMode 1 F11 x Enter $MMode 1"
[/reg]
to display the graphical menu, the menu appears but cannot be
dismissed by any other means (Esc, clicking an empty area) except
by selecting some menu item.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=37_
|
1.0
|
Emenu + Mouse RB - _From [dreggy...@gmail.com](https://code.google.com/u/116951980404761225403/) on July 01, 2009 05:53:45_
OS version: Win XP SP2 (2002)
FAR version: Far 2.0 (build 1013)
ConEmu version: ConEmu .Maximus5.090628b.7z
Emenu version: drkns 19.06.2009 16:28:22 +0200 - build 31
When using the macro
[reg]
REGEDIT4
[HKEY_CURRENT_USER\Software\Far2\KeyMacros\Shell\MsRClick]
"Sequence"="MsLClick $MMode 1 F11 x Enter $MMode 1"
[/reg]
to display the graphical menu, the menu appears but cannot be
dismissed by any other means (Esc, clicking an empty area) except
by selecting some menu item.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=37_
|
non_defect
|
emenu mouse rb from on july os version win xp far version far build conemu version conemu emenu version drkns build when using the macro sequence mslclick mmode x enter mmode to display the graphical menu the menu appears but cannot be hidden by any other means esc mouse clicks in an empty area except selecting some menu item original issue
| 0
|
70,361
| 23,139,892,890
|
IssuesEvent
|
2022-07-28 17:24:21
|
NREL/EnergyPlus
|
https://api.github.com/repos/NREL/EnergyPlus
|
closed
|
Supply fan heat effect is incorrect in standard ratings calculation for two speed DX cooling coil
|
Defect
|
Issue overview
--------------
Supply fan heat effect used to determine standard net cooling capacity for standard ratings calculation in CalcTwoSpeedDXCoilStandardRating function is missing supply air mass flow rate as a multiplier in equation below:
```
FanHeatCorrection = state.dataLoopNodes->Node(FanOutletNode).Enthalpy - state.dataLoopNodes->Node(FanInletNode).Enthalpy;
NetCoolingCapRated = state.dataDXCoils->DXCoil(DXCoilNum).RatedTotCap(1) * TotCapTempModFac * TotCapFlowModFac - FanHeatCorrection;
```
### Details
Some additional details for this issue (if relevant):
- Platform (Operating system, any)
- Version of EnergyPlus (Develop, [f361c6a](https://github.com/NREL/EnergyPlus/commit/f361c6a4ac42b83f11ec832ce483465bbc2cf371))
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
|
1.0
|
Supply fan heat effect is incorrect in standard ratings calculation for two speed DX cooling coil - Issue overview
--------------
Supply fan heat effect used to determine standard net cooling capacity for standard ratings calculation in CalcTwoSpeedDXCoilStandardRating function is missing supply air mass flow rate as a multiplier in equation below:
```
FanHeatCorrection = state.dataLoopNodes->Node(FanOutletNode).Enthalpy - state.dataLoopNodes->Node(FanInletNode).Enthalpy;
NetCoolingCapRated = state.dataDXCoils->DXCoil(DXCoilNum).RatedTotCap(1) * TotCapTempModFac * TotCapFlowModFac - FanHeatCorrection;
```
### Details
Some additional details for this issue (if relevant):
- Platform (Operating system, any)
- Version of EnergyPlus (Develop, [f361c6a](https://github.com/NREL/EnergyPlus/commit/f361c6a4ac42b83f11ec832ce483465bbc2cf371))
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
|
defect
|
supply fan heat effect is incorrect in standard ratings calculation for two speed dx cooling coil issue overview supply fan heat effect used to determine standard net cooling capacity for standard ratings calculation in calctwospeeddxcoilstandardrating function is missing supply air mass flow rate as a multiplier in equation below fanheatcorrection state dataloopnodes node fanoutletnode enthalpy state dataloopnodes node faninletnode enthalpy netcoolingcaprated state datadxcoils dxcoil dxcoilnum ratedtotcap totcaptempmodfac totcapflowmodfac fanheatcorrection details some additional details for this issue if relevant platform operating system any version of energyplus develop checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect
| 1
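The defect in the EnergyPlus record above (a fan heat term missing the supply air mass flow rate) can be sketched in Python. All names below are hypothetical illustrations, not EnergyPlus source code:

```python
def net_cooling_cap_rated(rated_tot_cap, tot_cap_temp_mod, tot_cap_flow_mod,
                          m_dot_air, h_fan_outlet, h_fan_inlet):
    """Net cooling capacity [W] with the fan heat term scaled by mass flow.

    fan heat [W] = m_dot [kg/s] * enthalpy rise [J/kg]; omitting the
    m_dot_air multiplier is the defect described in the record above.
    """
    fan_heat_correction = m_dot_air * (h_fan_outlet - h_fan_inlet)
    return rated_tot_cap * tot_cap_temp_mod * tot_cap_flow_mod - fan_heat_correction
```

With an enthalpy rise of 1000 J/kg at 2 kg/s, the correction is 2000 W, which only falls out correctly when the mass flow rate is included.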
|
79,777
| 29,046,999,324
|
IssuesEvent
|
2023-05-13 17:42:27
|
zed-industries/community
|
https://api.github.com/repos/zed-industries/community
|
closed
|
Changes aren't tracked when git repo is created with Zed open
|
defect git
|
### Check for existing issues
- [X] Completed
### Describe the bug
1. Create a directory
2. Open zed in this directory
3. Add a file to this directory and open it
4. Initialize a git repo and commit the file from step 3
5. Add changes to the file -> notice that changes aren't tracked in the git gutter
If you close zed and re-open it, the changes are now tracked in the gutter
### To reproduce
-
### Expected behavior
-
### Environment
Zed 0.59.0 – /Applications/Zed.app
macOS 12.6
architecture x86_64
### If applicable, add mockups / screenshots to help present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue
_No response_
|
1.0
|
Changes aren't tracked when git repo is created with Zed open - ### Check for existing issues
- [X] Completed
### Describe the bug
1. Create a directory
2. Open zed in this directory
3. Add a file to this directory and open it
4. Initialize a git repo and commit the file from step 3
5. Add changes to the file -> notice that changes aren't tracked in the git gutter
If you close zed and re-open it, the changes are now tracked in the gutter
### To reproduce
-
### Expected behavior
-
### Environment
Zed 0.59.0 – /Applications/Zed.app
macOS 12.6
architecture x86_64
### If applicable, add mockups / screenshots to help present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue
_No response_
|
defect
|
changes aren t tracked when git repo is created with zed open check for existing issues completed describe the bug create a directory open zed in this directory add a file to this directory and open it initialize a git repo and commit the file from step add changes to the file notice that changes aren t tracked in the git gutter if you close zed and re open it the changes are now tracked in the gutter to reproduce expected behavior environment zed – applications zed app macos architecture if applicable add mockups screenshots to help present your vision of the feature no response if applicable attach your library logs zed zed log file to this issue no response
| 1
|
15,675
| 2,868,980,707
|
IssuesEvent
|
2015-06-05 22:21:10
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
closed
|
"Please use --trace" message is busted on Windows
|
Area-Pub Priority-Unassigned Triaged Type-Defect
|
The message pub prints on an unexpected error requesting that users include the --trace results in the bug report uses single quotes for its arguments. The Windows command line interprets these as literal quotes, causing the suggested command not to work.
We should have some more advanced shell-escaping logic here.
|
1.0
|
"Please use --trace" message is busted on Windows - The message pub prints on an unexpected error requesting that users include the --trace results in the bug report uses single quotes for its arguments. The Windows command line interprets these as literal quotes, causing the suggested command not to work.
We should have some more advanced shell-escaping logic here.
|
defect
|
please use trace message is busted on windows the message pub prints on an unexpected error requesting that users include the trace results in the bug report uses single quotes for its arguments the windows command line interprets these as literal quotes causing the suggested command not to work we should have some more advanced shell escaping logic here
| 1
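The shell-escaping problem in the pub record above can be sketched with Python's standard library (the function name `quote_command` is made up for illustration): POSIX shells understand single quotes, while cmd.exe treats them as literal characters and needs the Windows argument-quoting convention instead.

```python
import shlex
import subprocess

def quote_command(args, windows=False):
    """Return a copy-pasteable command line for the given argv list."""
    if windows:
        # list2cmdline implements the MSVCRT/Windows quoting rules
        return subprocess.list2cmdline(args)
    # POSIX: single-quote any argument containing unsafe characters
    return " ".join(shlex.quote(a) for a in args)
```

A message suggesting a re-run with `--trace` could then pick the quoting style based on the host platform rather than always emitting single quotes.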
|
143,539
| 19,185,917,937
|
IssuesEvent
|
2021-12-05 07:18:22
|
Maagan-Michael/invitease
|
https://api.github.com/repos/Maagan-Michael/invitease
|
closed
|
Replace Docker base image
|
enhancement security
|
The Docker base image is rather insecure when compared to the Alpine Docker for Python, better replace it.
|
True
|
Replace Docker base image - The Docker base image is rather insecure when compared to the Alpine Docker for Python, better replace it.
|
non_defect
|
replace docker base image the docker base image is rather insecure when compared to the alpine docker for python better replace it
| 0
|
72,420
| 24,109,757,108
|
IssuesEvent
|
2022-09-20 10:20:36
|
matrix-org/synapse
|
https://api.github.com/repos/matrix-org/synapse
|
closed
|
memory leak since 1.53.0
|
A-Presence S-Minor T-Defect O-Occasional A-Memory-Usage
|
### Description
Since upgrading to matrix-synapse-py3==1.53.0+focal1 from 1.49.2+bionic1 I observe a memory leak on my instance.
The upgrade is concomitant to OS upgrade from Ubuntu bionic => focal / Python 3.6 to 3.8
We didn't change homeserver.yaml during upgrade
Our machine had 3 GB memory for 2 years and now 10G isn't enough.
### Steps to reproduce
root@srv-matrix1:~# systemctl status matrix-synapse.service
● matrix-synapse.service - Synapse Matrix homeserver
Loaded: loaded (/lib/systemd/system/matrix-synapse.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2022-03-04 10:21:05 CET; 4h 45min ago
Process: 171067 ExecStartPre=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homese>
Main PID: 171075 (python)
Tasks: 30 (limit: 11811)
Memory: 6.1G
CGroup: /system.slice/matrix-synapse.service
└─171075 /opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml ->

I tried to change this config without success
expiry_time: 30m
syslogs says oom killer killed synapse:
Mar 4 10:20:54 XXXX kernel: [174841.111273] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/matrix-synapse.service,task=python,pid=143210,uid=114
Mar 4 10:20:54 srv-matrix1 kernel: [174841.111339] Out of memory: Killed process 143210 (python) total-vm:12564520kB, anon-rss:9073668kB, file-rss:0kB, shmem-rss:0kB, UID:114 pgtables:21244kB oom_score_adj:0
no further useful information in homeserver.log
### Version information
$ curl http://localhost:8008/_synapse/admin/v1/server_version
{"server_version":"1.53.0","python_version":"3.8.10"}
- **Version**: 1.53.0
- **Install method**:
Ubuntu apt repo
- **Platform**:
VMWare
I would be happy to help by getting a Python stacktrace to debug this, if I have any lead on how to do so.
(sorry for my english)
|
1.0
|
memory leak since 1.53.0 - ### Description
Since upgrading to matrix-synapse-py3==1.53.0+focal1 from 1.49.2+bionic1 I observe a memory leak on my instance.
The upgrade is concomitant to OS upgrade from Ubuntu bionic => focal / Python 3.6 to 3.8
We didn't change homeserver.yaml during upgrade
Our machine had 3 GB memory for 2 years and now 10G isn't enough.
### Steps to reproduce
root@srv-matrix1:~# systemctl status matrix-synapse.service
● matrix-synapse.service - Synapse Matrix homeserver
Loaded: loaded (/lib/systemd/system/matrix-synapse.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2022-03-04 10:21:05 CET; 4h 45min ago
Process: 171067 ExecStartPre=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homese>
Main PID: 171075 (python)
Tasks: 30 (limit: 11811)
Memory: 6.1G
CGroup: /system.slice/matrix-synapse.service
└─171075 /opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml ->

I tried to change this config without success
expiry_time: 30m
syslogs says oom killer killed synapse:
Mar 4 10:20:54 XXXX kernel: [174841.111273] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/matrix-synapse.service,task=python,pid=143210,uid=114
Mar 4 10:20:54 srv-matrix1 kernel: [174841.111339] Out of memory: Killed process 143210 (python) total-vm:12564520kB, anon-rss:9073668kB, file-rss:0kB, shmem-rss:0kB, UID:114 pgtables:21244kB oom_score_adj:0
no further useful information in homeserver.log
### Version information
$ curl http://localhost:8008/_synapse/admin/v1/server_version
{"server_version":"1.53.0","python_version":"3.8.10"}
- **Version**: 1.53.0
- **Install method**:
Ubuntu apt repo
- **Platform**:
VMWare
I would be happy to help by getting a Python stacktrace to debug this, if I have any lead on how to do so.
(sorry for my english)
|
defect
|
memory leak since description since upgrade to matrix synapse from i observe memory leak on my instance the upgrade is concomitant to os upgrade from ubuntu bionic focal python to we didn t change homeserver yaml during upgrade our machine had gb memory for years and now isn t enough steps to reproduce root srv systemctl status matrix synapse service ● matrix synapse service synapse matrix homeserver loaded loaded lib systemd system matrix synapse service enabled vendor preset enabled active active running since fri cet ago process execstartpre opt venvs matrix synapse bin python m synapse app homeserver config path etc matrix synapse homese main pid python tasks limit memory cgroup system slice matrix synapse service └─ opt venvs matrix synapse bin python m synapse app homeserver config path etc matrix synapse homeserver yaml i tried to change this config without success expiry time syslogs says oom killer killed synapse mar xxxx kernel oom kill constraint constraint none nodemask null cpuset mems allowed global oom task memcg system slice matrix synapse service task python pid uid mar srv kernel out of memory killed process python total vm anon rss file rss shmem rss uid pgtables oom score adj no further useful information in homeserver log version information curl server version python version version install method ubuntu apt repo platform vmware i could be happy to help getting python stacktrace to debug this if i have any lead how to do so sorry for my english
| 1
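One low-tech way to confirm a leak like the Synapse one above is to sample the process's resident set size over time. A minimal sketch, assuming the Linux /proc status format (the helper name is hypothetical; on Linux you would pass the contents of /proc/&lt;pid&gt;/status):

```python
def vm_rss_kib(status_text):
    """Extract resident set size (KiB) from /proc/<pid>/status text.

    Returns None when no VmRSS line is present (e.g. kernel threads).
    """
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            # Line format: 'VmRSS:\t  123456 kB'
            return int(line.split()[1])
    return None
```

Logging this value every few minutes makes it easy to see whether memory grows without bound or plateaus after cache warm-up.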
|
57,488
| 15,812,960,871
|
IssuesEvent
|
2021-04-05 06:46:47
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Password reset has no complexity requirements
|
A-Password-Reset A-User-Settings P1 T-Defect
|
So you can change your password to be a single character if you like :(
|
1.0
|
Password reset has no complexity requirements - So you can change your password to be a single character if you like :(
|
defect
|
password reset has no complexity requirements so you can change your password to be a single character if you like
| 1
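A minimal sketch of the kind of complexity check the Element record above asks for; this is an illustrative policy, not the rule set Element or Synapse actually enforces:

```python
def password_ok(pw, min_len=8):
    """Tiny complexity check: minimum length plus three character classes."""
    classes = [
        any(c.islower() for c in pw),  # at least one lowercase letter
        any(c.isupper() for c in pw),  # at least one uppercase letter
        any(c.isdigit() for c in pw),  # at least one digit
    ]
    return len(pw) >= min_len and all(classes)
```

Under such a check, a single-character password is rejected at reset time instead of being silently accepted.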
|
42,397
| 11,013,696,577
|
IssuesEvent
|
2019-12-04 21:02:33
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
sparse.linalg.expm of an empty matrix
|
defect scipy.linalg
|
Computing the matrix exponential of an empty matrix raises a ValueError exception.
#### Reproducing code example:
```
scipy.linalg.expm(np.zeros((0, 0)))
```
#### Error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/linalg/matfuncs.py", line 256, in expm
return scipy.sparse.linalg.expm(A)
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 606, in expm
return _expm(A, use_exact_onenorm='auto')
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 647, in _expm
eta_1 = max(h.d4_loose, h.d6_loose)
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 458, in d4_loose
return self.d4_tight
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 434, in d4_tight
self._d4_exact = _onenorm(self.A4)**(1/4.)
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 407, in A4
self.A2, self.A2, structure=self.structure)
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 400, in A2
self.A, self.A, structure=self.structure)
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 178, in _smart_matrix_product
out = f(alpha, A, B)
ValueError: On entry to DTRMM parameter number 9 had an illegal value
```
#### Scipy/Numpy/Python version information:
1.3.1 1.17.2 sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
|
1.0
|
sparse.linalg.expm of an empty matrix - Computing the matrix exponential of an empty matrix raises a ValueError exception.
#### Reproducing code example:
```
scipy.linalg.expm(np.zeros((0, 0)))
```
#### Error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/linalg/matfuncs.py", line 256, in expm
return scipy.sparse.linalg.expm(A)
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 606, in expm
return _expm(A, use_exact_onenorm='auto')
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 647, in _expm
eta_1 = max(h.d4_loose, h.d6_loose)
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 458, in d4_loose
return self.d4_tight
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 434, in d4_tight
self._d4_exact = _onenorm(self.A4)**(1/4.)
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 407, in A4
self.A2, self.A2, structure=self.structure)
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 400, in A2
self.A, self.A, structure=self.structure)
File "/home/runner/.local/share/virtualenvs/python3/lib/python3.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 178, in _smart_matrix_product
out = f(alpha, A, B)
ValueError: On entry to DTRMM parameter number 9 had an illegal value
```
#### Scipy/Numpy/Python version information:
1.3.1 1.17.2 sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
|
defect
|
sparse linalg expm of an empty matrix computing the matrix exponential of an empty matrix raises a valueerror exception reproducing code example scipy linalg expm np zeros error message traceback most recent call last file line in file home runner local share virtualenvs lib site packages scipy linalg matfuncs py line in expm return scipy sparse linalg expm a file home runner local share virtualenvs lib site packages scipy sparse linalg matfuncs py line in expm return expm a use exact onenorm auto file home runner local share virtualenvs lib site packages scipy sparse linalg matfuncs py line in expm eta max h loose h loose file home runner local share virtualenvs lib site packages scipy sparse linalg matfuncs py line in loose return self tight file home runner local share virtualenvs lib site packages scipy sparse linalg matfuncs py line in tight self exact onenorm self file home runner local share virtualenvs lib site packages scipy sparse linalg matfuncs py line in self self structure self structure file home runner local share virtualenvs lib site packages scipy sparse linalg matfuncs py line in self a self a structure self structure file home runner local share virtualenvs lib site packages scipy sparse linalg matfuncs py line in smart matrix product out f alpha a b valueerror on entry to dtrmm parameter number had an illegal value scipy numpy python version information sys version info major minor micro releaselevel final serial
| 1
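The SciPy record above shows expm failing on a 0x0 input. A self-contained sketch that handles the empty case explicitly (a plain Taylor series on lists, for illustration only; this is not how SciPy computes expm):

```python
def expm_taylor(A, terms=25):
    """Matrix exponential via a truncated Taylor series, list-of-lists input.

    The 0x0 case is handled up front: exp of an empty matrix is the
    empty matrix, rather than an error from the BLAS layer.
    """
    n = len(A)
    if n == 0:
        return []
    # result = I; term = I; accumulate term <- (term @ A) / k each step
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][m] * A[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result
```

The same early-return guard could sit at the top of a library expm before any BLAS product is attempted.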
|
83,610
| 16,240,904,667
|
IssuesEvent
|
2021-05-07 09:25:02
|
fac21/Week7--Server-Side-App-AANS
|
https://api.github.com/repos/fac21/Week7--Server-Side-App-AANS
|
opened
|
Fantastic work !
|
code review
|
Great job everyone ! I love the idea (ngl got a tad caught up in your game and ended up playing it a fair few number of times). I see you made a nice kanban board and worked through the issues there methodically :)
Would have liked to see your actual story points to understand whether your estimates were accurate or not. Story points are really useful to plan your sprints (for example if you know the total number of points you completed for the past 2 projects, then you can get an idea of what's feasible and make sure you don't give yourself more work than you can handle).
Minimalistic design is great, but don't forget to do a liiiittle bit of css if possible. I know it's annoying to mess around with at first, but you can actually get quite creative with it and learning to do front-end is just as important as the back-end stuff.
You could also try to balance your commits a little more (if you go into the insights tab you'll see a quick overview of how many commits everyone made). If you're pairing, you can add a ['Co-authored-by'](https://docs.github.com/en/github/committing-changes-to-your-project/creating-a-commit-with-multiple-authors) to your commits so that it appears on both your accounts.
Well done adding the middleware to handle errors !
|
1.0
|
Fantastic work ! - Great job everyone ! I love the idea (ngl got a tad caught up in your game and ended up playing it a fair few number of times). I see you made a nice kanban board and worked through the issues there methodically :)
Would have liked to see your actual story points to understand whether your estimates were accurate or not. Story points are really useful to plan your sprints (for example if you know the total number of points you completed for the past 2 projects, then you can get an idea of what's feasible and make sure you don't give yourself more work than you can handle).
Minimalistic design is great, but don't forget to do a liiiittle bit of css if possible. I know it's annoying to mess around with at first, but you can actually get quite creative with it and learning to do front-end is just as important as the back-end stuff.
You could also try to balance your commits a little more (if you go into the insights tab you'll see a quick overview of how many commits everyone made). If you're pairing, you can add a ['Co-authored-by'](https://docs.github.com/en/github/committing-changes-to-your-project/creating-a-commit-with-multiple-authors) to your commits so that it appears on both your accounts.
Well done adding the middleware to handle errors !
|
non_defect
|
fantastic work great job everyone i love the idea ngl got a tad caught up in your game and ended up playing it a fair few number of times i see you made a nice kanban board and worked through the issues there methodically would have liked to see your actual story points to understand whether your estimates were accurate or not story points are really useful to plan your sprints for example if you know the total number of points you completed for the past projects then you can get an idea of what s feasible and make sure you don t give yourself more work than you can handle minimalistic design is great but don t forget to do a liiiittle bit of css if possible i know it s annoying to mess around with at first but you can actually get quite creative with it and learning to do front end is just as important as the back end stuff you could also try to balance your commits a little more if you go into the insights tab you ll see a quick overview of how many commits everyone made if you re pairing you can add a to your commits so that it appears on both your accounts well done adding the middleware to handle errors
| 0
|
367,748
| 10,861,459,216
|
IssuesEvent
|
2019-11-14 11:07:48
|
MarcTowler/ItsLit-RPG-Tracker
|
https://api.github.com/repos/MarcTowler/ItsLit-RPG-Tracker
|
opened
|
Introduce abilities
|
GAPI Low Priority enhancement
|
Abilities can be used once a fight by clicking an extra symbol in monsterFight. Abilities need AP which need to be replenished with items...
e.g.
Magic (various magic types, i.e. firebolt, blizzard etc)
|
1.0
|
Introduce abilities - Abilities can be used once a fight by clicking an extra symbol in monsterFight. Abilities need AP which need to be replenished with items...
e.g.
Magic (various magic types, i.e. firebolt, blizzard etc)
|
non_defect
|
introduce abilities abilities can be used once a fight by clicking an extra symbol in monsterfight abilities need ap which need to be replenished with items e g magic various magic types i e firebolt blizzard etc
| 0
|
60,972
| 8,481,780,901
|
IssuesEvent
|
2018-10-25 16:37:43
|
arangodb/arangodb
|
https://api.github.com/repos/arangodb/arangodb
|
closed
|
Support-Links should link to Documents corresponding the current version of Admin i.e. 3.4 ...
|
1 Bug 2 Fixed 3 Documentation 3 UI
|
ArangoDb 3.4 RC1 - Admin

... instead of latest stable version i.e. 3.3
|
1.0
|
Support-Links should link to Documents corresponding the current version of Admin i.e. 3.4 ... - ArangoDb 3.4 RC1 - Admin

... instead of latest stable version i.e. 3.3
|
non_defect
|
support links should link to documents corresponding the current version of admin i e arangodb admin instead of latest stable version i e
| 0
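Version-matched documentation links like the ones the ArangoDB record above asks for can be built from the server version string. The URL pattern below is an assumption for illustration; the real docs layout may differ:

```python
def docs_url(server_version):
    """Build a docs link matching the running server's major.minor version."""
    major, minor = server_version.split(".")[:2]
    return f"https://docs.arangodb.com/{major}.{minor}/"
```

An admin UI running 3.4 would then link to the 3.4 documents instead of the latest stable 3.3 ones.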
|
160,653
| 20,115,477,986
|
IssuesEvent
|
2022-02-07 19:02:59
|
dotnet/aspnetcore
|
https://api.github.com/repos/dotnet/aspnetcore
|
closed
|
Receiving 'The feature is not supported' in Microsoft.AspNetCore.Authentication.Negotiate
|
area-security
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
I created a simple application to test authentication in a linux container.
```c#
using Microsoft.AspNetCore.Authentication.Negotiate;
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddAuthentication(NegotiateDefaults.AuthenticationScheme)
.AddNegotiate(options =>
{
if (RuntimeInformation.IsOSPlatform(OSPlatform.Linux))
{
options.EnableLdap(settings =>
{
settings.Domain = "<domain_name>";
settings.MachineAccountName = "<windows_host_name>";
});
}
});
var app = builder.Build();
// Configure the HTTP request pipeline.
app.UseSwagger();
app.UseSwaggerUI();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers().RequireAuthorization();
app.Run();
```
UserController
```c#
namespace TestNegotiate.Controllers
{
[Route("api/[controller]")]
[ApiController]
public class UsersController : ControllerBase
{
[HttpGet]
public IActionResult GetUser()
{
var claims = User.Claims.Select(c => new { c.Value, c.Type });
return Ok(
new
{
User.Identity?.Name,
User.Identity?.IsAuthenticated,
claims
});
}
}
}
```
Dockerfile:
```
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
ENV KRB5_KTNAME=/app/srv-dms-k8s.keytab
RUN apt-get update \
&& apt-get install -y --no-install-recommends krb5-config krb5-user realmd adcli packagekit sssd sssd-tools
COPY TestNegotiate/krb5.conf /etc/krb5.conf
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["TestNegotiate/TestNegotiate.csproj", "TestNegotiate/"]
RUN dotnet restore "TestNegotiate/TestNegotiate.csproj"
COPY . .
WORKDIR "/src/TestNegotiate"
RUN dotnet build "TestNegotiate.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "TestNegotiate.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "TestNegotiate.dll"]
```
During application start I receive an error:
```
System.DirectoryServices.Protocols.LdapException: The feature is not supported.
at System.DirectoryServices.Protocols.LdapConnection.BindHelper(NetworkCredential newCredential, Boolean needSetCredential)
at System.DirectoryServices.Protocols.LdapConnection.Bind()
at Microsoft.AspNetCore.Authentication.Negotiate.PostConfigureNegotiateOptions.PostConfigure(String name, NegotiateOptions options)
at Microsoft.Extensions.Options.OptionsFactory`1.Create(String name)
at Microsoft.Extensions.Options.OptionsMonitor`1.<>c__DisplayClass10_0.<Get>b__0()
at System.Lazy`1.ViaFactory(LazyThreadSafetyMode mode)
at System.Lazy`1.ExecutionAndPublication(LazyHelper executionAndPublication, Boolean useDefaultConstructor)
at System.Lazy`1.CreateValue()
at System.Lazy`1.get_Value()
at Microsoft.Extensions.Options.OptionsCache`1.GetOrAdd(String name, Func`1 createOptions)
at Microsoft.Extensions.Options.OptionsMonitor`1.Get(String name)
at Microsoft.AspNetCore.Authentication.Negotiate.Internal.NegotiateOptionsValidationStartupFilter.<>c__DisplayClass2_0.<Configure>b__0(IApplicationBuilder builder)
at Microsoft.AspNetCore.Mvc.Filters.MiddlewareFilterBuilderStartupFilter.<>c__DisplayClass0_0.<Configure>g__MiddlewareFilterBuilder|0(IApplicationBuilder builder)
at Microsoft.AspNetCore.HostFilteringStartupFilter.<>c__DisplayClass0_0.<Configure>b__0(IApplicationBuilder app)
at Microsoft.AspNetCore.Hosting.GenericWebHostService.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run(IHost host)
at Microsoft.AspNetCore.Builder.WebApplication.Run(String url)
at Program.<Main>$(String[] args) in C:\Users\ruazed1\source\repos\TestNegotiate\TestNegotiate\Program.cs:line 34
```
Why does this happen? It is written here that it should work under Linux: https://docs.microsoft.com/en-us/aspnet/core/security/authentication/windowsauth?view=aspnetcore-6.0&tabs=visual-studio#kerberos-authentication-and-role-based-access-control-rbac.
### Expected Behavior
Authentication should work and resolve groups using LDAP.
### Steps To Reproduce
_No response_
### Exceptions (if any)
_No response_
### .NET Version
On local machine (where application is build):
```
❯ dotnet --info
.NET SDK (reflecting any global.json):
Version: 6.0.101
Commit: ef49f6213a
Runtime Environment:
OS Name: Windows
OS Version: 10.0.19043
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\6.0.101\
Host (useful for support):
Version: 6.0.1
Commit: 3a25a7f1cc
.NET SDKs installed:
3.1.414 [C:\Program Files\dotnet\sdk]
5.0.202 [C:\Program Files\dotnet\sdk]
5.0.404 [C:\Program Files\dotnet\sdk]
6.0.100 [C:\Program Files\dotnet\sdk]
6.0.101 [C:\Program Files\dotnet\sdk]
```
On a container
```
root@25b23ba7824f:/app# dotnet --info
Host (useful for support):
Version: 6.0.1
Commit: 3a25a7f1cc
.NET SDKs installed:
No SDKs were found.
.NET runtimes installed:
Microsoft.AspNetCore.App 6.0.1 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 6.0.1 [/usr/share/dotnet/shared/Microsoft.NETCore.App]
To install additional .NET runtimes or SDKs:
https://aka.ms/dotnet-download
```
### Anything else?
_No response_
|
True
|
Receiving 'The feature is not supported' in Microsoft.AspNetCore.Authentication.Negotiate - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
I created a simple application to test authentication in a linux container.
```c#
using Microsoft.AspNetCore.Authentication.Negotiate;
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddAuthentication(NegotiateDefaults.AuthenticationScheme)
.AddNegotiate(options =>
{
if (RuntimeInformation.IsOSPlatform(OSPlatform.Linux))
{
options.EnableLdap(settings =>
{
settings.Domain = "<domain_name>";
settings.MachineAccountName = "<windows_host_name>";
});
}
});
var app = builder.Build();
// Configure the HTTP request pipeline.
app.UseSwagger();
app.UseSwaggerUI();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers().RequireAuthorization();
app.Run();
```
UserController
```c#
namespace TestNegotiate.Controllers
{
[Route("api/[controller]")]
[ApiController]
public class UsersController : ControllerBase
{
[HttpGet]
public IActionResult GetUser()
{
var claims = User.Claims.Select(c => new { c.Value, c.Type });
return Ok(
new
{
User.Identity?.Name,
User.Identity?.IsAuthenticated,
claims
});
}
}
}
```
Dockerfile:
```
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
ENV KRB5_KTNAME=/app/srv-dms-k8s.keytab
RUN apt-get update \
&& apt-get install -y --no-install-recommends krb5-config krb5-user realmd adcli packagekit sssd sssd-tools
COPY TestNegotiate/krb5.conf /etc/krb5.conf
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["TestNegotiate/TestNegotiate.csproj", "TestNegotiate/"]
RUN dotnet restore "TestNegotiate/TestNegotiate.csproj"
COPY . .
WORKDIR "/src/TestNegotiate"
RUN dotnet build "TestNegotiate.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "TestNegotiate.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "TestNegotiate.dll"]
```
During application start I receive an error:
```
System.DirectoryServices.Protocols.LdapException: The feature is not supported.
at System.DirectoryServices.Protocols.LdapConnection.BindHelper(NetworkCredential newCredential, Boolean needSetCredential)
at System.DirectoryServices.Protocols.LdapConnection.Bind()
at Microsoft.AspNetCore.Authentication.Negotiate.PostConfigureNegotiateOptions.PostConfigure(String name, NegotiateOptions options)
at Microsoft.Extensions.Options.OptionsFactory`1.Create(String name)
at Microsoft.Extensions.Options.OptionsMonitor`1.<>c__DisplayClass10_0.<Get>b__0()
at System.Lazy`1.ViaFactory(LazyThreadSafetyMode mode)
at System.Lazy`1.ExecutionAndPublication(LazyHelper executionAndPublication, Boolean useDefaultConstructor)
at System.Lazy`1.CreateValue()
at System.Lazy`1.get_Value()
at Microsoft.Extensions.Options.OptionsCache`1.GetOrAdd(String name, Func`1 createOptions)
at Microsoft.Extensions.Options.OptionsMonitor`1.Get(String name)
at Microsoft.AspNetCore.Authentication.Negotiate.Internal.NegotiateOptionsValidationStartupFilter.<>c__DisplayClass2_0.<Configure>b__0(IApplicationBuilder builder)
at Microsoft.AspNetCore.Mvc.Filters.MiddlewareFilterBuilderStartupFilter.<>c__DisplayClass0_0.<Configure>g__MiddlewareFilterBuilder|0(IApplicationBuilder builder)
at Microsoft.AspNetCore.HostFilteringStartupFilter.<>c__DisplayClass0_0.<Configure>b__0(IApplicationBuilder app)
at Microsoft.AspNetCore.Hosting.GenericWebHostService.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run(IHost host)
at Microsoft.AspNetCore.Builder.WebApplication.Run(String url)
at Program.<Main>$(String[] args) in C:\Users\ruazed1\source\repos\TestNegotiate\TestNegotiate\Program.cs:line 34
```
Why does this happen? The documentation says it should work under Linux: https://docs.microsoft.com/en-us/aspnet/core/security/authentication/windowsauth?view=aspnetcore-6.0&tabs=visual-studio#kerberos-authentication-and-role-based-access-control-rbac.
### Expected Behavior
Authentication should work and resolve groups using LDAP.
### Steps To Reproduce
_No response_
### Exceptions (if any)
_No response_
### .NET Version
On local machine (where application is build):
```
❯ dotnet --info
.NET SDK (reflecting any global.json):
Version: 6.0.101
Commit: ef49f6213a
Runtime Environment:
OS Name: Windows
OS Version: 10.0.19043
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\6.0.101\
Host (useful for support):
Version: 6.0.1
Commit: 3a25a7f1cc
.NET SDKs installed:
3.1.414 [C:\Program Files\dotnet\sdk]
5.0.202 [C:\Program Files\dotnet\sdk]
5.0.404 [C:\Program Files\dotnet\sdk]
6.0.100 [C:\Program Files\dotnet\sdk]
6.0.101 [C:\Program Files\dotnet\sdk]
```
On a container
```
root@25b23ba7824f:/app# dotnet --info
Host (useful for support):
Version: 6.0.1
Commit: 3a25a7f1cc
.NET SDKs installed:
No SDKs were found.
.NET runtimes installed:
Microsoft.AspNetCore.App 6.0.1 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 6.0.1 [/usr/share/dotnet/shared/Microsoft.NETCore.App]
To install additional .NET runtimes or SDKs:
https://aka.ms/dotnet-download
```
### Anything else?
_No response_
|
non_defect
|
receiving the feature is not supported in microsoft aspnetcore authentication negotiate is there an existing issue for this i have searched the existing issues describe the bug i created a simple application to test authentication in a linux container c using microsoft aspnetcore authentication negotiate var builder webapplication createbuilder args add services to the container builder services addcontrollers learn more about configuring swagger openapi at builder services addendpointsapiexplorer builder services addswaggergen builder services addauthentication negotiatedefaults authenticationscheme addnegotiate options if runtimeinformation isosplatform osplatform linux options enableldap settings settings domain settings machineaccountname var app builder build configure the http request pipeline app useswagger app useswaggerui app useauthentication app useauthorization app mapcontrollers requireauthorization app run usercontroller c namespace testnegotiate controllers public class userscontroller controllerbase public iactionresult getuser var claims user claims select c new c value c type return ok new user identity name user identity isauthenticated claims dockerfile from mcr microsoft com dotnet aspnet as base workdir app env ktname app srv dms keytab run apt get update apt get install y no install recommends config user realmd adcli packagekit sssd sssd tools copy testnegotiate conf etc conf expose from mcr microsoft com dotnet sdk as build workdir src copy run dotnet restore testnegotiate testnegotiate csproj copy workdir src testnegotiate run dotnet build testnegotiate csproj c release o app build from build as publish run dotnet publish testnegotiate csproj c release o app publish from base as final workdir app copy from publish app publish entrypoint during application start i receive an error system directoryservices protocols ldapexception the feature is not supported at system directoryservices protocols ldapconnection bindhelper networkcredential 
newcredential boolean needsetcredential at system directoryservices protocols ldapconnection bind at microsoft aspnetcore authentication negotiate postconfigurenegotiateoptions postconfigure string name negotiateoptions options at microsoft extensions options optionsfactory create string name at microsoft extensions options optionsmonitor c b at system lazy viafactory lazythreadsafetymode mode at system lazy executionandpublication lazyhelper executionandpublication boolean usedefaultconstructor at system lazy createvalue at system lazy get value at microsoft extensions options optionscache getoradd string name func createoptions at microsoft extensions options optionsmonitor get string name at microsoft aspnetcore authentication negotiate internal negotiateoptionsvalidationstartupfilter c b iapplicationbuilder builder at microsoft aspnetcore mvc filters middlewarefilterbuilderstartupfilter c g middlewarefilterbuilder iapplicationbuilder builder at microsoft aspnetcore hostfilteringstartupfilter c b iapplicationbuilder app at microsoft aspnetcore hosting genericwebhostservice startasync cancellationtoken cancellationtoken at microsoft extensions hosting internal host startasync cancellationtoken cancellationtoken at microsoft extensions hosting hostingabstractionshostextensions runasync ihost host cancellationtoken token at microsoft extensions hosting hostingabstractionshostextensions runasync ihost host cancellationtoken token at microsoft extensions hosting hostingabstractionshostextensions run ihost host at microsoft aspnetcore builder webapplication run string url at program string args in c users source repos testnegotiate testnegotiate program cs line why this happens here is written it should work under linux expected behavior authentication should work and resolve groups using ldap steps to reproduce no response exceptions if any no response net version on local machine where application is build ❯ dotnet info net sdk reflecting any global json version 
commit runtime environment os name windows os version os platform windows rid base path c program files dotnet sdk host useful for support version commit net sdks installed on a container root app dotnet info host useful for support version commit net sdks installed no sdks were found net runtimes installed microsoft aspnetcore app microsoft netcore app to install additional net runtimes or sdks anything else no response
| 0
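The record above fails before any request is served, inside `PostConfigureNegotiateOptions`, when the Negotiate handler attempts its initial LDAP bind. One cheap pre-flight check in a container built from the Dockerfile above is verifying that `KRB5_KTNAME` actually points at a real, non-empty keytab before enabling LDAP. A minimal sketch in Python (hypothetical helper; only the env-var name and keytab path come from the Dockerfile, everything else is illustrative, and passing this check does not by itself prove the Kerberos setup is valid):

```python
import os

def keytab_preflight(env=os.environ):
    """Return (ok, message) describing whether the Kerberos keytab
    referenced by KRB5_KTNAME looks usable."""
    path = env.get("KRB5_KTNAME")
    if not path:
        return False, "KRB5_KTNAME is not set"
    # Strip the optional FILE: prefix accepted by MIT Kerberos.
    if path.startswith("FILE:"):
        path = path[len("FILE:"):]
    if not os.path.isfile(path):
        return False, f"keytab {path} does not exist"
    if os.path.getsize(path) == 0:
        return False, f"keytab {path} is empty"
    return True, "keytab looks usable"
```

Running this as a container entrypoint guard rules out the most common container mistake, a missing or empty keytab file, before the .NET app ever starts.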
|
38,149
| 8,674,831,462
|
IssuesEvent
|
2018-11-30 09:04:58
|
luigirizzo/netmap-ipfw
|
https://api.github.com/repos/luigirizzo/netmap-ipfw
|
closed
|
Compile error in Linux
|
Priority-Medium Type-Defect auto-migrated
|
```
What steps will reproduce the problem?
1. git clone https://code.google.com/p/netmap-ipfw/
2. make NETMAP_INC=/home/user/netmap-ipfw/sys/
3.
What is the expected output? What do you see instead?
Successful compile. Instead error:
*make: *** No rule to make target `pkt-gen.o', needed by `pkt-gen'. Stop.*
What version of the product are you using? On what operating system?
o netmap-ipfw (14 Feb 2014);
o Linux networklat 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC
2014 x86_64 x86_64 x86_64 GNU/Linux
Please provide any additional information below.
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-32-generic x86_64)
user@networklat:~/netmap-ipfw$ ll
total 60
drwxrwxr-x 7 user user 4096 Mar 25 13:53 ./
drwxr-xr-x 22 user user 4096 Mar 26 09:49 ../
-rw-rw-r-- 1 user user 100 Feb 19 14:23 BSDmakefile
drwxrwxr-x 3 user user 4096 Feb 19 14:23 extra/
drwxrwxr-x 8 user user 4096 Mar 18 09:28 .git/
-rw-r--r-- 1 root root 1123 Mar 18 12:18 GNUmakefile
-rw------- 1 root root 1193 Mar 18 12:52 GNUmakefile.save
drwxrwxr-x 2 user user 4096 Mar 25 13:39 ipfw/
-rw-rw-r-- 1 user user 804 Feb 19 14:23 Makefile
-rw-rw-r-- 1 user user 592 Feb 19 14:23 Makefile.inc
-rw-rw-r-- 1 user user 5378 Feb 19 14:23 Makefile.kipfw
drwxrwxr-x 3 user user 4096 Mar 18 09:55 objs/
-rw-rw-r-- 1 user user 2392 Feb 19 14:23 README
drwxrwxr-x 6 user user 4096 Feb 19 14:23 sys/
user@networklat:~/netmap-ipfw$ ll ipfw
total 544
drwxrwxr-x 2 user user 4096 Mar 25 13:39 ./
drwxrwxr-x 7 user user 4096 Mar 25 13:53 ../
-rw-rw-r-- 1 user user 3325 Feb 19 14:23 altq.c
-rw-rw-r-- 1 user user 4304 Mar 18 09:55 altq.o
-rw-rw-r-- 1 user user 35542 Feb 19 14:23 dummynet.c
-rw-rw-r-- 1 user user 30472 Mar 18 09:55 dummynet.o
-rw-rw-r-- 1 user user 1904 Mar 18 09:55 expand_number.o
-rw-rw-r-- 1 user user 8808 Mar 18 09:55 glue.o
-rw-rw-r-- 1 user user 3992 Mar 18 09:55 humanize_number.o
-rwxrwxr-x 1 user user 113133 Mar 18 09:55 ipfw*
-rw-rw-r-- 1 user user 105330 Feb 19 14:23 ipfw2.c
-rw-rw-r-- 1 user user 7101 Feb 19 14:23 ipfw2.h
-rw-rw-r-- 1 user user 108144 Mar 18 09:55 ipfw2.o
-rw-rw-r-- 1 user user 13285 Feb 19 14:23 ipv6.c
-rw-rw-r-- 1 user user 10600 Mar 18 09:55 ipv6.o
-rw-rw-r-- 1 user user 15902 Feb 19 14:23 main.c
-rw-rw-r-- 1 user user 18120 Mar 18 09:55 main.o
-rw-rw-r-- 1 user user 1319 Feb 19 14:23 Makefile
-rw-rw-r-- 1 user user 23721 Feb 19 14:23 nat.c
-rw-rw-r-- 1 user user 534 Mar 19 13:58 net-config.sh
-rw-rw-r-- 1 user user 1244 Mar 18 18:59 netmap_conf.c
-rw-rw-r-- 1 user user 1293 Mar 18 15:00 netmap_test.sh
user@networklat:~/netmap-ipfw$
user@networklat:~/netmap-ipfw$
user@networklat:~/netmap-ipfw$
user@networklat:~/netmap-ipfw$ make NETMAP_INC=/home/user/netmap-ipfw/sys/
make: *** No rule to make target `pkt-gen.o', needed by `pkt-gen'. Stop.
user@networklat:~/netmap-ipfw$
user@networklat:~/netmap-ipfw$
user@networklat:~/netmap-ipfw$
user@networklat:~/netmap-ipfw$ sudo find . -type f -exec grep -li "pkt-gen.o"
{} \;
[sudo] password for user:
./GNUmakefile.save
./GNUmakefile
user@networklat:~/netmap-ipfw$
```
Original issue reported on code.google.com by `avit...@gmail.com` on 26 Mar 2015 at 5:56
|
1.0
|
Compile error in Linux - ```
What steps will reproduce the problem?
1. git clone https://code.google.com/p/netmap-ipfw/
2. make NETMAP_INC=/home/user/netmap-ipfw/sys/
3.
What is the expected output? What do you see instead?
Successful compile. Instead error:
*make: *** No rule to make target `pkt-gen.o', needed by `pkt-gen'. Stop.*
What version of the product are you using? On what operating system?
o netmap-ipfw (14 Feb 2014);
o Linux networklat 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC
2014 x86_64 x86_64 x86_64 GNU/Linux
Please provide any additional information below.
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-32-generic x86_64)
user@networklat:~/netmap-ipfw$ ll
total 60
drwxrwxr-x 7 user user 4096 Mar 25 13:53 ./
drwxr-xr-x 22 user user 4096 Mar 26 09:49 ../
-rw-rw-r-- 1 user user 100 Feb 19 14:23 BSDmakefile
drwxrwxr-x 3 user user 4096 Feb 19 14:23 extra/
drwxrwxr-x 8 user user 4096 Mar 18 09:28 .git/
-rw-r--r-- 1 root root 1123 Mar 18 12:18 GNUmakefile
-rw------- 1 root root 1193 Mar 18 12:52 GNUmakefile.save
drwxrwxr-x 2 user user 4096 Mar 25 13:39 ipfw/
-rw-rw-r-- 1 user user 804 Feb 19 14:23 Makefile
-rw-rw-r-- 1 user user 592 Feb 19 14:23 Makefile.inc
-rw-rw-r-- 1 user user 5378 Feb 19 14:23 Makefile.kipfw
drwxrwxr-x 3 user user 4096 Mar 18 09:55 objs/
-rw-rw-r-- 1 user user 2392 Feb 19 14:23 README
drwxrwxr-x 6 user user 4096 Feb 19 14:23 sys/
user@networklat:~/netmap-ipfw$ ll ipfw
total 544
drwxrwxr-x 2 user user 4096 Mar 25 13:39 ./
drwxrwxr-x 7 user user 4096 Mar 25 13:53 ../
-rw-rw-r-- 1 user user 3325 Feb 19 14:23 altq.c
-rw-rw-r-- 1 user user 4304 Mar 18 09:55 altq.o
-rw-rw-r-- 1 user user 35542 Feb 19 14:23 dummynet.c
-rw-rw-r-- 1 user user 30472 Mar 18 09:55 dummynet.o
-rw-rw-r-- 1 user user 1904 Mar 18 09:55 expand_number.o
-rw-rw-r-- 1 user user 8808 Mar 18 09:55 glue.o
-rw-rw-r-- 1 user user 3992 Mar 18 09:55 humanize_number.o
-rwxrwxr-x 1 user user 113133 Mar 18 09:55 ipfw*
-rw-rw-r-- 1 user user 105330 Feb 19 14:23 ipfw2.c
-rw-rw-r-- 1 user user 7101 Feb 19 14:23 ipfw2.h
-rw-rw-r-- 1 user user 108144 Mar 18 09:55 ipfw2.o
-rw-rw-r-- 1 user user 13285 Feb 19 14:23 ipv6.c
-rw-rw-r-- 1 user user 10600 Mar 18 09:55 ipv6.o
-rw-rw-r-- 1 user user 15902 Feb 19 14:23 main.c
-rw-rw-r-- 1 user user 18120 Mar 18 09:55 main.o
-rw-rw-r-- 1 user user 1319 Feb 19 14:23 Makefile
-rw-rw-r-- 1 user user 23721 Feb 19 14:23 nat.c
-rw-rw-r-- 1 user user 534 Mar 19 13:58 net-config.sh
-rw-rw-r-- 1 user user 1244 Mar 18 18:59 netmap_conf.c
-rw-rw-r-- 1 user user 1293 Mar 18 15:00 netmap_test.sh
user@networklat:~/netmap-ipfw$
user@networklat:~/netmap-ipfw$
user@networklat:~/netmap-ipfw$
user@networklat:~/netmap-ipfw$ make NETMAP_INC=/home/user/netmap-ipfw/sys/
make: *** No rule to make target `pkt-gen.o', needed by `pkt-gen'. Stop.
user@networklat:~/netmap-ipfw$
user@networklat:~/netmap-ipfw$
user@networklat:~/netmap-ipfw$
user@networklat:~/netmap-ipfw$ sudo find . -type f -exec grep -li "pkt-gen.o"
{} \;
[sudo] password for user:
./GNUmakefile.save
./GNUmakefile
user@networklat:~/netmap-ipfw$
```
Original issue reported on code.google.com by `avit...@gmail.com` on 26 Mar 2015 at 5:56
|
defect
|
compile error in linux what steps will reproduce the problem git clone make netmap inc home user netmap ipfw sys what is the expected output what do you see instead successful compile instead error make no rule to make target pkt gen o needed by pkt gen stop what version of the product are you using on what operating system o netmap ipfw feb o linux networklat generic ubuntu smp tue jul utc gnu linux please provide any additional information below welcome to ubuntu lts gnu linux generic user networklat netmap ipfw ll total drwxrwxr x user user mar drwxr xr x user user mar rw rw r user user feb bsdmakefile drwxrwxr x user user feb extra drwxrwxr x user user mar git rw r r root root mar gnumakefile rw root root mar gnumakefile save drwxrwxr x user user mar ipfw rw rw r user user feb makefile rw rw r user user feb makefile inc rw rw r user user feb makefile kipfw drwxrwxr x user user mar objs rw rw r user user feb readme drwxrwxr x user user feb sys user networklat netmap ipfw ll ipfw total drwxrwxr x user user mar drwxrwxr x user user mar rw rw r user user feb altq c rw rw r user user mar altq o rw rw r user user feb dummynet c rw rw r user user mar dummynet o rw rw r user user mar expand number o rw rw r user user mar glue o rw rw r user user mar humanize number o rwxrwxr x user user mar ipfw rw rw r user user feb c rw rw r user user feb h rw rw r user user mar o rw rw r user user feb c rw rw r user user mar o rw rw r user user feb main c rw rw r user user mar main o rw rw r user user feb makefile rw rw r user user feb nat c rw rw r user user mar net config sh rw rw r user user mar netmap conf c rw rw r user user mar netmap test sh user networklat netmap ipfw user networklat netmap ipfw user networklat netmap ipfw user networklat netmap ipfw make netmap inc home user netmap ipfw sys make no rule to make target pkt gen o needed by pkt gen stop user networklat netmap ipfw user networklat netmap ipfw user networklat netmap ipfw user networklat netmap ipfw sudo find 
type f exec grep li pkt gen o password for user gnumakefile save gnumakefile user networklat netmap ipfw original issue reported on code google com by avit gmail com on mar at
| 1
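The `make: *** No rule to make target 'pkt-gen.o'` error in the record above means the (locally modified) GNUmakefile lists an object whose source file is not in the tree; `pkt-gen.c` ships with the separate netmap distribution, not with netmap-ipfw. A quick way to spot such mismatches before running make is to check each `*.o` named in a makefile against the source directory (hypothetical Python helper using a simple `foo.o -> foo.c` heuristic; real makefiles can generate objects from other languages or directories):

```python
import os
import re

def missing_sources(makefile_text, srcdir="."):
    """Return object files mentioned in a makefile whose matching
    .c source is not present in srcdir (heuristic: foo.o -> foo.c)."""
    objs = set(re.findall(r"\b([\w.-]+)\.o\b", makefile_text))
    missing = []
    for stem in sorted(objs):
        if not os.path.isfile(os.path.join(srcdir, stem + ".c")):
            missing.append(stem + ".o")
    return missing
```

For this report it would immediately flag `pkt-gen.o`, pointing at the GNUmakefile edit rather than at the netmap-ipfw sources themselves.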
|
66,034
| 19,905,095,628
|
IssuesEvent
|
2022-01-25 11:57:55
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
Nginx in docker on zfs must now use 'sendfile off;'
|
Type: Defect
|
Hi folks. I have an ArchLinux box with zfs running a dokuwiki using this docker image: ghcr.io/linuxserver/dokuwiki:version-2020-07-29
It worked very well for a long time.
But a few days ago it stopped working and started filling the nginx log (in docker) with this kind of message:
```
[alert] 386#386: *310 sendfile() failed (22: Invalid argument) while sending response to client,
```
when serving static files.
After a long search, I simply changed an option in nginx (in docker), 'sendfile off;', and the problem was gone.
I believe this could be linked to something bad happening between my Linux version and zfs. I believe it broke when upgrading from 2.1.1 to 2.1.2.
I hope this report can be useful.
```
[root@orion ~]# zfs version
zfs-2.1.2-1
zfs-kmod-2.1.2-1
[root@orion ~]# uname -a
Linux orion 5.16.2-arch1-1 #1 SMP PREEMPT Thu, 20 Jan 2022 16:18:29 +0000 x86_64 GNU/Linux
```
|
1.0
|
Nginx in docker on zfs must now use 'sendfile off;' - Hi folks. I have an ArchLinux box with zfs running a dokuwiki using this docker image: ghcr.io/linuxserver/dokuwiki:version-2020-07-29
It worked very well for a long time.
But a few days ago it stopped working and started filling the nginx log (in docker) with this kind of message:
```
[alert] 386#386: *310 sendfile() failed (22: Invalid argument) while sending response to client,
```
when serving static files.
After a long search, I simply changed an option in nginx (in docker), 'sendfile off;', and the problem was gone.
I believe this could be linked to something bad happening between my Linux version and zfs. I believe it broke when upgrading from 2.1.1 to 2.1.2.
I hope this report can be useful.
```
[root@orion ~]# zfs version
zfs-2.1.2-1
zfs-kmod-2.1.2-1
[root@orion ~]# uname -a
Linux orion 5.16.2-arch1-1 #1 SMP PREEMPT Thu, 20 Jan 2022 16:18:29 +0000 x86_64 GNU/Linux
```
|
defect
|
nginx in docker on zfs must now use senfile off i folks i have an archlinux with zfs running a dokuwiki using this docker image ghcr io linuxserver dokuwiki version it worked very well since a long time but a few days ago it stopped working and started filling nginx log in docker with this kind of message sendfile failed invalid argument while sending response to client when serving static files after a long time searching i simply changed an option in nginx in docker senfile off and the problem was gone i believe this could be linked to something bad happening between my linux version and zfs i believe it brokes when upgrading from to hope this report can be usefull zfs version zfs zfs kmod uname a linux orion smp preempt thu jan gnu linux
| 1
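The alert in the record above is the kernel's `sendfile()` returning `EINVAL` on the zfs-backed file; `sendfile off;` makes nginx fall back to an ordinary buffered read/write copy. That fallback logic is easy to sketch (hypothetical Python model, not nginx's actual C implementation):

```python
import errno

def copy_with_fallback(sendfile_impl, src_read, dst_write, chunk=64 * 1024):
    """Try a zero-copy sendfile-style transfer; on EINVAL fall back
    to a plain buffered read/write loop (what `sendfile off;` forces)."""
    try:
        return sendfile_impl()
    except OSError as e:
        if e.errno != errno.EINVAL:
            raise
    total = 0
    while True:
        buf = src_read(chunk)
        if not buf:
            return total
        dst_write(buf)
        total += len(buf)
```

The buffered path is slower but works on any filesystem, which is why disabling `sendfile` masks the underlying zfs regression rather than fixing it.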
|
309,627
| 9,477,692,265
|
IssuesEvent
|
2019-04-19 19:35:03
|
abpframework/abp
|
https://api.github.com/repos/abpframework/abp
|
opened
|
Align AbpValidationActionFilter with MethodInvocationValidator
|
enhancement framework priority:high
|
MethodInvocationValidator works with IObjectMapper and it naturally supports data annotations and fluent validation. It also handles attributes like EnableValidationAttribute and DisableValidationAttribute.
However, AspNet Core side uses IModelStateValidator and it only validates model state errors using Microsoft's implementation. It also disables MethodInvocationValidator if an app service is used as API controller.
So, we should align implementations and allow all validation features to be available to API controllers too.
|
1.0
|
Align AbpValidationActionFilter with MethodInvocationValidator - MethodInvocationValidator works with IObjectMapper and it naturally supports data annotations and fluent validation. It also handles attributes like EnableValidationAttribute and DisableValidationAttribute.
However, AspNet Core side uses IModelStateValidator and it only validates model state errors using Microsoft's implementation. It also disables MethodInvocationValidator if an app service is used as API controller.
So, we should align implementations and allow all validation features to be available to API controllers too.
|
non_defect
|
align abpvalidationactionfilter with methodinvocationvalidator methodinvocationvalidator works with iobjectmapper and it natually supports data annotations and fluent validation it also handles attributes like enablevalidationattribute and disablevalidationattribute however aspnet core side uses imodelstatevalidator and it only validates model state errors using microsoft s implementation it also disables methodinvocationvalidator if an app service is used as api controller so we should align implementations and allow all validation features to be available to api controllers too
| 0
|
89,079
| 11,195,438,148
|
IssuesEvent
|
2020-01-03 06:27:14
|
jackfirth/rebellion
|
https://api.github.com/repos/jackfirth/rebellion
|
closed
|
Support pattern matching on type instances
|
enhancement needs api design pattern matching
|
The `rebellion/type` libraries should cooperate with the `racket/match` library so that pattern matching on instances of custom types is easy.
|
1.0
|
Support pattern matching on type instances - The `rebellion/type` libraries should cooperate with the `racket/match` library so that pattern matching on instances of custom types is easy.
|
non_defect
|
support pattern matching on type instances the rebellion type libraries should cooperate with the racket match library so that pattern matching on instances of custom types is easy
| 0
|
10,629
| 2,622,177,928
|
IssuesEvent
|
2015-03-04 00:17:30
|
byzhang/leveldb
|
https://api.github.com/repos/byzhang/leveldb
|
opened
|
Build issue on Solaris
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Untar 1.4.0 sources on Solaris 10, run gmake
The environment (partly set by the OpenCSW build system):
HOME="/home/maciej"
PATH="/opt/csw/gnu:/home/maciej/src/opencsw/pkg/leveldb/trunk/work/solaris10-i38
6/install-isa-amd64/opt/csw/bin/amd64:/home/maciej/src/opencsw/pkg/leveldb/trunk
/work/solaris10-i386/install-isa-amd64/opt/csw/bin:/home/maciej/src/opencsw/pkg/
leveldb/trunk/work/solaris10-i386/install-isa-amd64/opt/csw/sbin/amd64:/home/mac
iej/src/opencsw/pkg/leveldb/trunk/work/solaris10-i386/install-isa-amd64/opt/csw/
sbin:/opt/csw/bin/amd64:/opt/csw/bin:/opt/csw/sbin/amd64:/opt/csw/sbin:/opt/csw/
bin:/home/maciej/src/opencsw/pkg/.buildsys/v2/gar/bin:/usr/bin:/usr/sbin:/usr/ja
va/bin:/usr/ccs/bin:/usr/openwin/bin"
LC_ALL="C"
prefix="/opt/csw/gxx"
exec_prefix="/opt/csw/gxx"
bindir="/opt/csw/gxx/bin/amd64"
sbindir="/opt/csw/gxx/sbin/amd64" libexecdir="/opt/csw/gxx/libexec/amd64"
datadir="/opt/csw/gxx/share"
sysconfdir="/etc/opt/csw/gxx"
sharedstatedir="/opt/csw/gxx/share"
localstatedir="/var/opt/csw/gxx"
libdir="/opt/csw/gxx/lib/64"
infodir="/opt/csw/gxx/share/info" lispdir="/opt/csw/gxx/share/emacs/site-lisp"
includedir="/opt/csw/gxx/include"
mandir="/opt/csw/gxx/share/man"
docdir="/opt/csw/gxx/share/doc"
sourcedir="/opt/csw/src"
CPPFLAGS="-I/opt/csw/gxx/include -I/opt/csw/include"
CFLAGS="-O2 -pipe -m64 -march=opteron"
CXXFLAGS="-O2 -pipe -m64 -march=opteron"
LDFLAGS="-m64 -march=opteron -L/opt/csw/gxx/lib/64 -L/opt/csw/lib/64"
FFLAGS="-O2 -pipe -m64 -march=opteron"
FCFLAGS="-O2 -pipe -m64 -march=opteron"
F77="/opt/csw/bin/gfortran-4.6"
FC="/opt/csw/bin/gfortran-4.6"
ASFLAGS="" OPTFLAGS="-O2 -pipe -m64 -march=opteron" CC="/opt/csw/bin/gcc-4.6"
CXX="/opt/csw/bin/g++-4.6"
CC_HOME="/opt/csw"
CC_VERSION="gcc version 4.6.3 (GCC) "
CXX_VERSION="gcc version 4.6.3 (GCC) "
GARCH="i386"
GAROSREL="5.10"
GARPACKAGE="trunk"
LD_OPTIONS="-R/opt/csw/gxx/lib/\$ISALIST -R/opt/csw/gxx/lib/64
-R/opt/csw/lib/\$ISALIST -R/opt/csw/lib/64"
2.
3.
What is the expected output? What do you see instead?
I'm getting an error message:
rm -f libleveldb.a
ar -rs libleveldb.a db/builder.o db/c.o db/db_impl.o db/db_iter.o db/dbformat.o
db/filename.o db/log_reader.o db/log_writer.o db/memtable.o db/repair.o
db/table_cache.o db/version_edit.o db/version_set.o db/write_batch.o
table/block.o table/block_builder.o table/filter_block.o table/format.o
table/iterator.o table/merger.o table/table.o table/table_builder.o
table/two_level_iterator.o util/arena.o util/bloom.o util/cache.o util/coding.o
util/comparator.o util/crc32c.o util/env.o util/env_posix.o
util/filter_policy.o util/hash.o util/histogram.o util/logging.o util/options.o
util/status.o port/port_posix.o
ar: creating libleveldb.a
ld: warning: option -o appears more than once, first setting taken
ld: fatal: file
/home/maciej/src/opencsw/pkg/leveldb/trunk/work/solaris10-i386/build-isa-amd64/l
eveldb-1.4.0/libleveldb.so.1: open failed: No such file or directory
ld: fatal: File processing errors. No output written to libleveldb.so.1.4
gmake: *** [libleveldb.so.1.4] Error 1
What version of the product are you using? On what operating system?
leveldb-1.4.0 on Solaris 10 x86
Please provide any additional information below.
The build recipe:
http://sourceforge.net/apps/trac/gar/browser/csw/mgar/pkg/leveldb/trunk/Makefile
The C++ libraries such as snappy, are under the /opt/csw/gxx prefix.
```
Original issue reported on code.google.com by `blizin...@google.com` on 8 May 2012 at 6:21
|
1.0
|
Build issue on Solaris - ```
What steps will reproduce the problem?
1. Untar 1.4.0 sources on Solaris 10, run gmake
The environment (partly set by the OpenCSW build system):
HOME="/home/maciej"
PATH="/opt/csw/gnu:/home/maciej/src/opencsw/pkg/leveldb/trunk/work/solaris10-i38
6/install-isa-amd64/opt/csw/bin/amd64:/home/maciej/src/opencsw/pkg/leveldb/trunk
/work/solaris10-i386/install-isa-amd64/opt/csw/bin:/home/maciej/src/opencsw/pkg/
leveldb/trunk/work/solaris10-i386/install-isa-amd64/opt/csw/sbin/amd64:/home/mac
iej/src/opencsw/pkg/leveldb/trunk/work/solaris10-i386/install-isa-amd64/opt/csw/
sbin:/opt/csw/bin/amd64:/opt/csw/bin:/opt/csw/sbin/amd64:/opt/csw/sbin:/opt/csw/
bin:/home/maciej/src/opencsw/pkg/.buildsys/v2/gar/bin:/usr/bin:/usr/sbin:/usr/ja
va/bin:/usr/ccs/bin:/usr/openwin/bin"
LC_ALL="C"
prefix="/opt/csw/gxx"
exec_prefix="/opt/csw/gxx"
bindir="/opt/csw/gxx/bin/amd64"
sbindir="/opt/csw/gxx/sbin/amd64" libexecdir="/opt/csw/gxx/libexec/amd64"
datadir="/opt/csw/gxx/share"
sysconfdir="/etc/opt/csw/gxx"
sharedstatedir="/opt/csw/gxx/share"
localstatedir="/var/opt/csw/gxx"
libdir="/opt/csw/gxx/lib/64"
infodir="/opt/csw/gxx/share/info" lispdir="/opt/csw/gxx/share/emacs/site-lisp"
includedir="/opt/csw/gxx/include"
mandir="/opt/csw/gxx/share/man"
docdir="/opt/csw/gxx/share/doc"
sourcedir="/opt/csw/src"
CPPFLAGS="-I/opt/csw/gxx/include -I/opt/csw/include"
CFLAGS="-O2 -pipe -m64 -march=opteron"
CXXFLAGS="-O2 -pipe -m64 -march=opteron"
LDFLAGS="-m64 -march=opteron -L/opt/csw/gxx/lib/64 -L/opt/csw/lib/64"
FFLAGS="-O2 -pipe -m64 -march=opteron"
FCFLAGS="-O2 -pipe -m64 -march=opteron"
F77="/opt/csw/bin/gfortran-4.6"
FC="/opt/csw/bin/gfortran-4.6"
ASFLAGS="" OPTFLAGS="-O2 -pipe -m64 -march=opteron" CC="/opt/csw/bin/gcc-4.6"
CXX="/opt/csw/bin/g++-4.6"
CC_HOME="/opt/csw"
CC_VERSION="gcc version 4.6.3 (GCC) "
CXX_VERSION="gcc version 4.6.3 (GCC) "
GARCH="i386"
GAROSREL="5.10"
GARPACKAGE="trunk"
LD_OPTIONS="-R/opt/csw/gxx/lib/\$ISALIST -R/opt/csw/gxx/lib/64
-R/opt/csw/lib/\$ISALIST -R/opt/csw/lib/64"
2.
3.
What is the expected output? What do you see instead?
I'm getting an error message:
rm -f libleveldb.a
ar -rs libleveldb.a db/builder.o db/c.o db/db_impl.o db/db_iter.o db/dbformat.o
db/filename.o db/log_reader.o db/log_writer.o db/memtable.o db/repair.o
db/table_cache.o db/version_edit.o db/version_set.o db/write_batch.o
table/block.o table/block_builder.o table/filter_block.o table/format.o
table/iterator.o table/merger.o table/table.o table/table_builder.o
table/two_level_iterator.o util/arena.o util/bloom.o util/cache.o util/coding.o
util/comparator.o util/crc32c.o util/env.o util/env_posix.o
util/filter_policy.o util/hash.o util/histogram.o util/logging.o util/options.o
util/status.o port/port_posix.o
ar: creating libleveldb.a
ld: warning: option -o appears more than once, first setting taken
ld: fatal: file
/home/maciej/src/opencsw/pkg/leveldb/trunk/work/solaris10-i386/build-isa-amd64/l
eveldb-1.4.0/libleveldb.so.1: open failed: No such file or directory
ld: fatal: File processing errors. No output written to libleveldb.so.1.4
gmake: *** [libleveldb.so.1.4] Error 1
What version of the product are you using? On what operating system?
leveldb-1.4.0 on Solaris 10 x86
Please provide any additional information below.
The build recipe:
http://sourceforge.net/apps/trac/gar/browser/csw/mgar/pkg/leveldb/trunk/Makefile
The C++ libraries such as snappy, are under the /opt/csw/gxx prefix.
```
Original issue reported on code.google.com by `blizin...@google.com` on 8 May 2012 at 6:21
|
defect
|
build issue on solaris what steps will reproduce the problem untar sources on solaris run gmake the environment partly set by the opencsw build system home home maciej path opt csw gnu home maciej src opencsw pkg leveldb trunk work install isa opt csw bin home maciej src opencsw pkg leveldb trunk work install isa opt csw bin home maciej src opencsw pkg leveldb trunk work install isa opt csw sbin home mac iej src opencsw pkg leveldb trunk work install isa opt csw sbin opt csw bin opt csw bin opt csw sbin opt csw sbin opt csw bin home maciej src opencsw pkg buildsys gar bin usr bin usr sbin usr ja va bin usr ccs bin usr openwin bin lc all c prefix opt csw gxx exec prefix opt csw gxx bindir opt csw gxx bin sbindir opt csw gxx sbin libexecdir opt csw gxx libexec datadir opt csw gxx share sysconfdir etc opt csw gxx sharedstatedir opt csw gxx share localstatedir var opt csw gxx libdir opt csw gxx lib infodir opt csw gxx share info lispdir opt csw gxx share emacs site lisp includedir opt csw gxx include mandir opt csw gxx share man docdir opt csw gxx share doc sourcedir opt csw src cppflags i opt csw gxx include i opt csw include cflags pipe march opteron cxxflags pipe march opteron ldflags march opteron l opt csw gxx lib l opt csw lib fflags pipe march opteron fcflags pipe march opteron opt csw bin gfortran fc opt csw bin gfortran asflags optflags pipe march opteron cc opt csw bin gcc cxx opt csw bin g cc home opt csw cc version gcc version gcc cxx version gcc version gcc garch garosrel garpackage trunk ld options r opt csw gxx lib isalist r opt csw gxx lib r opt csw lib isalist r opt csw lib what is the expected output what do you see instead i m getting an error message rm f libleveldb a ar rs libleveldb a db builder o db c o db db impl o db db iter o db dbformat o db filename o db log reader o db log writer o db memtable o db repair o db table cache o db version edit o db version set o db write batch o table block o table block builder o table filter block o table 
format o table iterator o table merger o table table o table table builder o table two level iterator o util arena o util bloom o util cache o util coding o util comparator o util o util env o util env posix o util filter policy o util hash o util histogram o util logging o util options o util status o port port posix o ar creating libleveldb a ld warning option o appears more than once first setting taken ld fatal file home maciej src opencsw pkg leveldb trunk work build isa l eveldb libleveldb so open failed no such file or directory ld fatal file processing errors no output written to libleveldb so gmake error what version of the product are you using on what operating system leveldb on solaris please provide any additional information below the build recipe the c libraries such as snappy are under the opt csw gxx prefix original issue reported on code google com by blizin google com on may at
| 1
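The `ld: fatal: file .../libleveldb.so.1: open failed` error in the record above is the usual versioned-shared-library ordering problem: the link step expects the `libleveldb.so.1` compatibility symlink to exist before `libleveldb.so.1.4` has been installed alongside it. The conventional symlink chain can be created explicitly (hypothetical Python helper; the library name and version come from the log, the layout is the standard ELF convention):

```python
import os

def make_solib_links(directory, name, version):
    """Create lib<name>.so.<major> and lib<name>.so symlinks pointing at
    the real lib<name>.so.<version> file (e.g. version 1.4 -> major 1)."""
    real = f"lib{name}.so.{version}"
    major = version.split(".")[0]
    for link in (f"lib{name}.so.{major}", f"lib{name}.so"):
        path = os.path.join(directory, link)
        if os.path.islink(path) or os.path.exists(path):
            os.remove(path)
        os.symlink(real, path)  # relative target, as ldconfig would create
```

On the build in the report, creating `libleveldb.so.1 -> libleveldb.so.1.4` before the final link step would satisfy the Solaris linker's lookup.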
|
4,883
| 3,897,418,257
|
IssuesEvent
|
2016-04-16 12:01:59
|
lionheart/openradar-mirror
|
https://api.github.com/repos/lionheart/openradar-mirror
|
opened
|
15615517: Application Loader: wrong tooltip regarding the minimum size of the Preview
|
classification:ui/usability reproducible:always status:open
|
#### Description
When I create a new In-App Purchase in Application Loader, the tooltip on the "Browse..." button says the preview should be "minimum 320x460 in 72 dpi". When I submit a file at 352x460, however, I get this message:
ERROR ITMS-9000: "Image dimensions '352x460' of image file 'preview5.6.7.jpg' do not match the supported dimensions. Supported dimensions are: 1280x800, 1440x900, 2880x1800, 2560x1600" at Software/SoftwareMetadata/SoftwareInAppPurchase (MZItmspSoftwareInAppPurchasePackage)
So who has the truth? :)
-
Product Version: Application Loader 2.9 (439)
Created: 2013-12-09T14:01:38.990608
Originated: 2013-12-09T00:00:00
Open Radar Link: http://www.openradar.me/15615517
|
True
|
15615517: Application Loader: wrong tooltip regarding the minimum size of the Preview - #### Description
When I create new In-App Purchase in Application Loader the tooltip on the "Browse..." says the preview should be "minimum 320x460 in 72 dpi". When I submit a file in 352x460, however, I got this message
ERROR ITMS-9000: "Image dimensions '352x460' of image file 'preview5.6.7.jpg' do not match the supported dimensions. Supported dimensions are: 1280x800, 1440x900, 2880x1800, 2560x1600" at Software/SoftwareMetadata/SoftwareInAppPurchase (MZItmspSoftwareInAppPurchasePackage)
So who has the truth? :)
-
Product Version: Application Loader 2.9 (439)
Created: 2013-12-09T14:01:38.990608
Originated: 2013-12-09T00:00:00
Open Radar Link: http://www.openradar.me/15615517
|
non_defect
|
application loader wrong tooltip regarding the minimum size of the preview description when i create new in app purchase in application loader the tooltip on the browse says the preview should be minimum in dpi when i submit a file in however i got this message error itms image dimensions of image file jpg do not match the supported dimensions supported dimensions are at software softwaremetadata softwareinapppurchase mzitmspsoftwareinapppurchasepackage so who has the truth product version application loaader created originated open radar link
| 0
|
7,552
| 2,610,405,155
|
IssuesEvent
|
2015-02-26 20:11:41
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
opened
|
Mechis III Super Battle Droids
|
auto-migrated Priority-Medium Type-Defect
|
```
On Mechis III, Super Battle Droids have the following errors in their text:
1. Their names are "B1 Battledroid", instead of "B2 Super Battle Droid"
(battledroid should be spelled separately, checked it)
2. In their descriptions, it is written (36 units per squad), when they are not
actually in a squad
```
-----
Original issue reported on code.google.com by `jkouzman...@gmail.com` on 8 Jul 2011 at 7:06
|
1.0
|
Mechis III Super Battle Droids - ```
On Mechis III, Super Battle Droids have the following errors in their text:
1. Their names are "B1 Battledroid", instead of "B2 Super Battle Droid"
(battledroid should be spelled separately, checked it)
2. In their descriptions, it is written (36 units per squad), when they are not
actually in a squad
```
-----
Original issue reported on code.google.com by `jkouzman...@gmail.com` on 8 Jul 2011 at 7:06
|
defect
|
mechis iii super battle droids on mechis iii super battle droids have the following errors in their text their names are battledroid instead of super battle droid battledroid should be spelled separately checked it in their descriptions it is written units per squad when they are not actually in a squad original issue reported on code google com by jkouzman gmail com on jul at
| 1
|
40,878
| 10,209,573,646
|
IssuesEvent
|
2019-08-14 13:02:20
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
SelectCheckboxMenu: Disabling a SelectItem does nothing
|
defect
|
## 1) Environment
- PrimeFaces version: `7.0`
- Does it work on the newest released PrimeFaces version? Version?
- Does it work on the newest sources in GitHub? (Build by source -> https://github.com/primefaces/primefaces/wiki/Building-From-Source) `No`
- Application server + version: `Wildfly 13`
- Affected browsers: `Firefox`, `Chrome`, `Edge`
## 2) Expected behavior
The checkbox gets disabled when the Bean sets the disabled attribute of the SelectItem to `true`.
## 3) Actual behavior
Disabling the SelectItem has no visible effect.
When I add the `ui-state-disabled` class to the checkbox the checkbox changes as expected.
It seems like the change in the bean does not trigger the css class to be added to the element.
## 4) Steps to reproduce
Test project can be found at https://github.com/theobisproject/primefaces-test on the `selectItem-disable` branch
|
1.0
|
SelectCheckboxMenu: Disabling a SelectItem does nothing - ## 1) Environment
- PrimeFaces version: `7.0`
- Does it work on the newest released PrimeFaces version? Version?
- Does it work on the newest sources in GitHub? (Build by source -> https://github.com/primefaces/primefaces/wiki/Building-From-Source) `No`
- Application server + version: `Wildfly 13`
- Affected browsers: `Firefox`, `Chrome`, `Edge`
## 2) Expected behavior
The checkbox gets disabled when the Bean sets the disabled attribute of the SelectItem to `true`.
## 3) Actual behavior
Disabling the SelectItem has no visible effect.
When I add the `ui-state-disabled` class to the checkbox the checkbox changes as expected.
It seems like the change in the bean does not trigger the css class to be added to the element.
## 4) Steps to reproduce
Test project can be found at https://github.com/theobisproject/primefaces-test on the `selectItem-disable` branch
|
defect
|
selectcheckboxmenu disabling a selectitem does nothing environment primefaces version does it work on the newest released primefaces version version does it work on the newest sources in github build by source no application server version wildfly affected browsers firefox chrome edge expected behavior the checkbox get s disabled when the bean sets the disabled attribute of the selectitem to true actual behavior disabling the selectitem has no visible effect when i add the ui state disabled class to the checkbox the checkbox changes as expected it seems like the change in the bean does not trigger the css class to be added to the element steps to reproduce test project can be found at on the selectitem disable branch
| 1
|
53,568
| 13,261,922,076
|
IssuesEvent
|
2020-08-20 20:46:46
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
mysql DB I3OmDb at SPS does not have RDE values for Scintillator channels (Trac #1693)
|
Migrated from Trac defect other
|
The I3OmDb mysql DB does not have RDE values set for the 4 scinitillator channels. This, combined with #1684, caused fatal crashes of filter clients during the start of the 24 hr test run last night. good times were had by all trying to work around this.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1693">https://code.icecube.wisc.edu/projects/icecube/ticket/1693</a>, reported by blaufussand owned by kohnen</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:58",
"_ts": "1550067178841456",
"description": "The I3OmDb mysql DB does not have RDE values set for the 4 scinitillator channels. This, combined with #1692, caused fatal crashes of filter clients during the start of the 24 hr test run last night. good times were had by all trying to work around this.\n\n",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2016-05-05T14:30:49",
"component": "other",
"summary": "mysql DB I3OmDb at SPS does not have RDE values for Scintillator channels",
"priority": "normal",
"keywords": "I3OmDb tables",
"milestone": "",
"owner": "kohnen",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
mysql DB I3OmDb at SPS does not have RDE values for Scintillator channels (Trac #1693) - The I3OmDb mysql DB does not have RDE values set for the 4 scinitillator channels. This, combined with #1684, caused fatal crashes of filter clients during the start of the 24 hr test run last night. good times were had by all trying to work around this.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1693">https://code.icecube.wisc.edu/projects/icecube/ticket/1693</a>, reported by blaufussand owned by kohnen</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:58",
"_ts": "1550067178841456",
"description": "The I3OmDb mysql DB does not have RDE values set for the 4 scinitillator channels. This, combined with #1692, caused fatal crashes of filter clients during the start of the 24 hr test run last night. good times were had by all trying to work around this.\n\n",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2016-05-05T14:30:49",
"component": "other",
"summary": "mysql DB I3OmDb at SPS does not have RDE values for Scintillator channels",
"priority": "normal",
"keywords": "I3OmDb tables",
"milestone": "",
"owner": "kohnen",
"type": "defect"
}
```
</p>
</details>
|
defect
|
mysql db at sps does not have rde values for scintillator channels trac the mysql db does not have rde values set for the scinitillator channels this combined with caused fatal crashes of filter clients during the start of the hr test run last night good times were had by all trying to work around this migrated from json status closed changetime ts description the mysql db does not have rde values set for the scinitillator channels this combined with caused fatal crashes of filter clients during the start of the hr test run last night good times were had by all trying to work around this n n reporter blaufuss cc resolution fixed time component other summary mysql db at sps does not have rde values for scintillator channels priority normal keywords tables milestone owner kohnen type defect
| 1
|
2,911
| 3,249,373,482
|
IssuesEvent
|
2015-10-18 03:52:44
|
stedolan/jq
|
https://api.github.com/repos/stedolan/jq
|
closed
|
from_entries does not support "name"
|
usability
|
I had an array containing:
```
[{"name": "foo", "value":1}]
```
According to the docs, from_entries accepts key, Key, Name as valid keys. It's a little surprising that it supports 'key' or 'Key' but not 'name' or 'Name'. The docs are accurate but I spent about 10 minutes wondering why my filter didn't work as I saw the 2 cases for Key and assumed Name would work the same way :)
Any chance of making it consistent?
|
True
|
from_entries does not support "name" - I had an array containing:
```
[{"name": "foo", "value":1}]
```
According to the docs, from_entries accepts key, Key, Name as valid keys. It's a little surprising that it supports 'key' or 'Key' but not 'name' or 'Name'. The docs are accurate but I spent about 10 minutes wondering why my filter didn't work as I saw the 2 cases for Key and assumed Name would work the same way :)
Any chance of making it consistent?
|
non_defect
|
from entries does not support name i had an array containing according to the docs from entries accepts key key name as valid keys it s a little surprising that it supports key or key but not name or name the docs are accurate but i spent about minutes wondering why my filter didn t work as i saw the cases for key and assumed name would work the same way any chance of making it consistent
| 0
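An illustrative note on the record above: jq's `from_entries` documentation lists several accepted spellings for the key and value fields. The behavior the reporter expected can be sketched as a minimal Python analogue (a hypothetical helper, not jq source; the exact set of spellings accepted varies by jq version, so this field list is an assumption for illustration):

```python
def from_entries(entries):
    """Mimic jq's from_entries, accepting several key/value field spellings."""
    key_fields = ("key", "k", "name", "Key", "K", "Name", "N")
    value_fields = ("value", "v", "Value", "V")
    out = {}
    for entry in entries:
        # first matching key-field spelling wins, as in jq's documented behavior
        key = next(entry[f] for f in key_fields if f in entry)
        value = next((entry[f] for f in value_fields if f in entry), None)
        out[key] = value
    return out
```

With this, the reporter's input `[{"name": "foo", "value": 1}]` maps to `{"foo": 1}` instead of failing.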
|
78,930
| 27,825,168,895
|
IssuesEvent
|
2023-03-19 17:27:56
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
BUG: scipy 1.5.4 and 1.10.0 return different p-values for same data
|
defect scipy.stats
|
### Describe your issue.
Hello, I was calculating p-value for my evaluation results and I found that different version of scipy can return different p-value for the same set of data.
The scipy 1.5.4 and 1.10.0, which is the default version for ubuntu 18.04 and 20.04, return different mannwhitneyu p-value. I'm wondering which version is correct or both are wrong?
### Reproducing Code Example
```python
## demo code
import sys
from scipy.stats import mannwhitneyu
from scipy import mean
print(sys.version)
tor_flvmeta_cov = [994, 994, 994, 994, 994, 995, 994, 995, 994, 994]
ff_flvmeta_cov = [996, 996, 996, 996, 996, 996, 996, 996, 996, 996]
stats, pval = mannwhitneyu(ff_flvmeta_cov, tor_flvmeta_cov)
print(pval)
print(mean(tor_flvmeta_cov))
print(mean(ff_flvmeta_cov))
ff_xmllint_cov = [8109, 8040, 7874, 8256, 8103, 8067, 7887, 7909, 7915, 8133]
afl_xmllint_cov = [8210, 7513, 8205, 7836, 8208, 8201, 8383, 7279, 7563, 8004]
stats, pval = mannwhitneyu(ff_xmllint_cov, afl_xmllint_cov)
print(pval)
print(mean(afl_xmllint_cov))
print(mean(ff_xmllint_cov))
```
### Error message
```shell
version1:
ubuntu 18.04 numpy-1.19.5 scipy-1.5.4 python3.6.9
results: 1.6449684029324486e-05, 994.2, 996.0
0.4849249884965778, 7940.2, 8029.3
version2:
ubuntu 20.04 numpy-1.24.1 scipy-1.10.0 python3.8.10
results: 3.28993680586491e-05, 994.2, 996.0
1, 7940.2, 8029.3
```
### SciPy/NumPy/Python version information
numpy-1.19.5 scipy-1.5.4 python3.6.9 && numpy-1.24.1 scipy-1.10.0 python3.8.10
|
1.0
|
BUG: scipy 1.5.4 and 1.10.0 return different p-values for same data - ### Describe your issue.
Hello, I was calculating p-value for my evaluation results and I found that different version of scipy can return different p-value for the same set of data.
The scipy 1.5.4 and 1.10.0, which is the default version for ubuntu 18.04 and 20.04, return different mannwhitneyu p-value. I'm wondering which version is correct or both are wrong?
### Reproducing Code Example
```python
## demo code
import sys
from scipy.stats import mannwhitneyu
from scipy import mean
print(sys.version)
tor_flvmeta_cov = [994, 994, 994, 994, 994, 995, 994, 995, 994, 994]
ff_flvmeta_cov = [996, 996, 996, 996, 996, 996, 996, 996, 996, 996]
stats, pval = mannwhitneyu(ff_flvmeta_cov, tor_flvmeta_cov)
print(pval)
print(mean(tor_flvmeta_cov))
print(mean(ff_flvmeta_cov))
ff_xmllint_cov = [8109, 8040, 7874, 8256, 8103, 8067, 7887, 7909, 7915, 8133]
afl_xmllint_cov = [8210, 7513, 8205, 7836, 8208, 8201, 8383, 7279, 7563, 8004]
stats, pval = mannwhitneyu(ff_xmllint_cov, afl_xmllint_cov)
print(pval)
print(mean(afl_xmllint_cov))
print(mean(ff_xmllint_cov))
```
### Error message
```shell
version1:
ubuntu 18.04 numpy-1.19.5 scipy-1.5.4 python3.6.9
results: 1.6449684029324486e-05, 994.2, 996.0
0.4849249884965778, 7940.2, 8029.3
version2:
ubuntu 20.04 numpy-1.24.1 scipy-1.10.0 python3.8.10
results: 3.28993680586491e-05, 994.2, 996.0
1, 7940.2, 8029.3
```
### SciPy/NumPy/Python version information
numpy-1.19.5 scipy-1.5.4 python3.6.9 && numpy-1.24.1 scipy-1.10.0 python3.8.10
|
defect
|
bug scipy and return different p values for same data describe your issue hello i was calculating p value for my evaluation results and i found that different version of scipy can return different p value for the same set of data the scipy and which is the default version for ubuntu and return different mannwhitneyu p value i m wondering which version is correct or both are wrong reproducing code example python demo code import sys from scipy stats import mannwhitneyu from scipy import mean print sys version tor flvmeta cov ff flvmeta cov stats pval mannwhitneyu ff flvmeta cov tor flvmeta cov print pval print mean tor flvmeta cov print mean ff flvmeta cov ff xmllint cov afl xmllint cov stats pval mannwhitneyu ff xmllint cov afl xmllint cov print pval print mean afl xmllint cov print mean ff xmllint cov error message shell ubuntu numpy scipy results ubuntu numpy scipy results scipy numpy python version information numpy scipy numpy scipy
| 1
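A note on the record above: the first pair of reported p-values differs by exactly a factor of two (1.645e-05 vs 3.290e-05), which is consistent with SciPy's default `alternative` changing from one-sided to two-sided between 1.5.4 and 1.10.0. A pure-Python sketch of the normal approximation with tie and continuity corrections reproduces the magnitude; this is an illustrative reimplementation, not SciPy's code, and the second sample pair in the record differs for a separate reason (automatic method selection) that this sketch does not model:

```python
import math

def _ranks(values):
    # midranks: tied values share the average of the ranks they occupy (1-based)
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mannwhitneyu_sketch(x, y):
    # normal approximation with tie correction and continuity correction
    n1, n2 = len(x), len(y)
    n = n1 + n2
    pooled = list(x) + list(y)
    ranks = _ranks(pooled)
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2   # U statistic for x
    u2 = n1 * n2 - u1
    mu = n1 * n2 / 2
    counts = {}
    for v in pooled:
        counts[v] = counts.get(v, 0) + 1
    tie = sum(t ** 3 - t for t in counts.values())
    sigma = math.sqrt(n1 * n2 / 12 * ((n + 1) - tie / (n * (n - 1))))
    z = (max(u1, u2) - mu - 0.5) / sigma        # 0.5 is the continuity correction
    p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))
    p_two_sided = min(1.0, 2 * p_one_sided)
    return u1, p_one_sided, p_two_sided
```

For the record's first sample pair, every value in one group exceeds every value in the other, so U is the maximum (100), and the one-sided p lands near 1.64e-05 while the two-sided p is exactly twice that, matching the two reported values.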
|
71,241
| 23,501,552,210
|
IssuesEvent
|
2022-08-18 08:52:19
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Wild failure on Clear Cache and Reload
|
T-Defect
|
### Steps to reproduce
My Element Nightly Desktop Mac wasn't showing all of the avatars, so I tried to clear cache and reload. Instead of clearing the cache and reloading, everything within the Element window was replaced with this business:
<img width="1440" alt="Screenshot 2022-08-18 at 09 47 54" src="https://user-images.githubusercontent.com/1922197/185353300-92705ec6-772c-4a67-8b9e-21f30498781a.png">
Element Nightly Version 2022081601 (2022081601)
### Outcome
#### What did you expect?
Clearing cache and reloading
#### What happened instead?
Total nonsense
### Operating system
macOS 12.5
### Application version
Element Nightly Version 2022081601 (2022081601)
### How did you install the app?
Website
### Homeserver
lant.uk
### Will you send logs?
No
|
1.0
|
Wild failure on Clear Cache and Reload - ### Steps to reproduce
My Element Nightly Desktop Mac wasn't showing all of the avatars, so I tried to clear cache and reload. Instead of clearing the cache and reloading, everything within the Element window was replaced with this business:
<img width="1440" alt="Screenshot 2022-08-18 at 09 47 54" src="https://user-images.githubusercontent.com/1922197/185353300-92705ec6-772c-4a67-8b9e-21f30498781a.png">
Element Nightly Version 2022081601 (2022081601)
### Outcome
#### What did you expect?
Clearing cache and reloading
#### What happened instead?
Total nonsense
### Operating system
macOS 12.5
### Application version
Element Nightly Version 2022081601 (2022081601)
### How did you install the app?
Website
### Homeserver
lant.uk
### Will you send logs?
No
|
defect
|
wild failure on clear cache and reload steps to reproduce my element nightly desktop mac wasn t showing all of the avatars so i tried to clear cache and reload instead of clearing the cache and reloading everything within the element window was replaced with this business img width alt screenshot at src element nightly version outcome what did you expect clearing cache and reloading what happened instead total nonsense operating system macos application version element nightly version how did you install the app website homeserver lant uk will you send logs no
| 1
|
2,371
| 2,607,898,560
|
IssuesEvent
|
2015-02-26 00:12:31
|
chrsmithdemos/zen-coding
|
https://api.github.com/repos/chrsmithdemos/zen-coding
|
closed
|
Cursor position in base html elements
|
auto-migrated Milestone-0.6 Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. expand div
What is the expected output? What do you see instead?
Expected <div>|</div>
Got <div></div>
What version of the product are you using? On what operating system?
version 0.5 on Python 2.6 under Windows 7
Please provide any additional information below.
In version 0.3 cursor is positioned inside expanded base elements.
The same behavior expected. Do we need to provide explicitly cursor
position for all html base elements in settings?
```
-----
Original issue reported on code.google.com by `derigel` on 24 Nov 2009 at 1:53
|
1.0
|
Cursor position in base html elements - ```
What steps will reproduce the problem?
1. expand div
What is the expected output? What do you see instead?
Expected <div>|</div>
Got <div></div>
What version of the product are you using? On what operating system?
version 0.5 on Python 2.6 under Windows 7
Please provide any additional information below.
In version 0.3 cursor is positioned inside expanded base elements.
The same behavior expected. Do we need to provide explicitly cursor
position for all html base elements in settings?
```
-----
Original issue reported on code.google.com by `derigel` on 24 Nov 2009 at 1:53
|
defect
|
cursor position in base html elements what steps will reproduce the problem expand div what is the expected output what do you see instead expected got what version of the product are you using on what operating system version on python under windows please provide any additional information below in version cursor is positioned inside expanded base elements the same behavior expected do we need to provide explicitly cursor position for all html base elements in settings original issue reported on code google com by derigel on nov at
| 1
|
4,222
| 3,003,080,503
|
IssuesEvent
|
2015-07-24 21:04:06
|
brian-team/brian2
|
https://api.github.com/repos/brian-team/brian2
|
opened
|
Add option for language-specific semantics
|
component: codegen enhancement
|
For example, mod is different in Python and C++ but for many applications this may not matter. Similarly division, and perhaps others.
|
1.0
|
Add option for language-specific semantics - For example, mod is different in Python and C++ but for many applications this may not matter. Similarly division, and perhaps others.
|
non_defect
|
add option for language specific semantics for example mod is different in python and c but for many applications this may not matter similarly division and perhaps others
| 0
|
23,413
| 3,813,882,296
|
IssuesEvent
|
2016-03-28 09:12:42
|
night-ghost/minimosd-extra
|
https://api.github.com/repos/night-ghost/minimosd-extra
|
closed
|
Ezuhf RSSI, RSSI extra character
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Using minimosd found here:
http://api.ning.com/files/JH0C62scBfBAtFDqmGCv60XlaSBO2A3xoBrqtHWq-nP5g0wWTfQm9p
q-RHel2NpW7Dk5TClfOgSx7Qd-HqbgThVbUHT5aU60/MinimOSDExtra_Plane_Prerelease_2.4_r7
95.hex
EzUHF rssi finally works (outputting on channel 8 through CPPM)
However the EzUHF RSSI is inverted so the RSSI level goes from 0% at full
signal to 100% when the radio is off.
If the min and max values are entered in reverse it breaks the OSD rssi value.
What is the expected output? What do you see instead?
It would be good if the rssi raw values could be feed in high or low in either
position so reversed signals can work. Or if there was an invert button next to
the channel select for RSSI in the minimosd setup.
What version of the product are you using? On what operating system?
R795 on Arduplane 3.2 with APM 2.5
Please provide any additional information below.
Also on the OSD the rssi generates extra characters to the right of the % sign.
Sometimes its another percentage sign, sometimes its a random number.
If these have been fixed already sorry!
```
Original issue reported on code.google.com by `netwimbe...@gmail.com` on 31 Jan 2015 at 4:10
|
1.0
|
Ezuhf RSSI, RSSI extra character - ```
What steps will reproduce the problem?
1. Using minimosd found here:
http://api.ning.com/files/JH0C62scBfBAtFDqmGCv60XlaSBO2A3xoBrqtHWq-nP5g0wWTfQm9p
q-RHel2NpW7Dk5TClfOgSx7Qd-HqbgThVbUHT5aU60/MinimOSDExtra_Plane_Prerelease_2.4_r7
95.hex
EzUHF rssi finally works (outputting on channel 8 through CPPM)
However the EzUHF RSSI is inverted so the RSSI level goes from 0% at full
signal to 100% when the radio is off.
If the min and max values are entered in reverse it breaks the OSD rssi value.
What is the expected output? What do you see instead?
It would be good if the rssi raw values could be feed in high or low in either
position so reversed signals can work. Or if there was an invert button next to
the channel select for RSSI in the minimosd setup.
What version of the product are you using? On what operating system?
R795 on Arduplane 3.2 with APM 2.5
Please provide any additional information below.
Also on the OSD the rssi generates extra characters to the right of the % sign.
Sometimes its another percentage sign, sometimes its a random number.
If these have been fixed already sorry!
```
Original issue reported on code.google.com by `netwimbe...@gmail.com` on 31 Jan 2015 at 4:10
|
defect
|
ezuhf rssi rssi extra character what steps will reproduce the problem using minimosd found here q minimosdextra plane prerelease hex ezuhf rssi finally works outputting on channel through cppm however the ezuhf rssi is inverted so the rssi level goes from at full signal to when the radio is off if the min and max values are entered in reverse it breaks the osd rssi value what is the expected output what do you see instead it would be good if the rssi raw values could be feed in high or low in either position so reversed signals can work or if there was an invert button next to the channel select for rssi in the minimosd setup what version of the product are you using on what operating system on arduplane with apm please provide any additional information below also on the osd the rssi generates extra characters to the right of the sign sometimes its another percentage sign sometimes its a random number if these have been fixed already sorry original issue reported on code google com by netwimbe gmail com on jan at
| 1
|
457,111
| 13,151,867,449
|
IssuesEvent
|
2020-08-09 19:00:23
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
HTTPS proxy support
|
disposition/stale kind/enhancement lang/other priority/P2
|
<!--
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers here:
- grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
- StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### Is your feature request related to a problem? Please describe.
While there is proxy support for GRPC, only HTTP proxies are currently supported. I cannot use GRPC if I'm behind an HTTPS proxy (require SSL handshake).
If I try I get this error message:
```
'https' scheme not supported in proxy URI
```
### Describe the solution you'd like
Would be nice to extend the proxy support to include HTTPS proxies. Are there any plans to do so? I couldn't find any related issues.
|
1.0
|
HTTPS proxy support - <!--
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers here:
- grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
- StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### Is your feature request related to a problem? Please describe.
While there is proxy support for GRPC, only HTTP proxies are currently supported. I cannot use GRPC if I'm behind an HTTPS proxy (require SSL handshake).
If I try I get this error message:
```
'https' scheme not supported in proxy URI
```
### Describe the solution you'd like
Would be nice to extend the proxy support to include HTTPS proxies. Are there any plans to do so? I couldn't find any related issues.
|
non_defect
|
https proxy support this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers here grpc io mailing list stackoverflow with grpc tag issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g is your feature request related to a problem please describe while there is proxy support for grpc only http proxies are currently supported i cannot use grpc if i m behind an https proxy require ssl handshake if i try i get this error message https scheme not supported in proxy uri describe the solution you d like would be nice to extend the proxy support to include https proxies are there any plans to do so i couldn t find any related issues
| 0
|
166,148
| 20,718,133,849
|
IssuesEvent
|
2022-03-13 00:25:49
|
vincenzodistasio97/home-cloud
|
https://api.github.com/repos/vincenzodistasio97/home-cloud
|
closed
|
CVE-2021-23434 (High) detected in object-path-0.11.4.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-23434 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>object-path-0.11.4.tgz</b></p></summary>
<p>Access deep object properties using a path</p>
<p>Library home page: <a href="https://registry.npmjs.org/object-path/-/object-path-0.11.4.tgz">https://registry.npmjs.org/object-path/-/object-path-0.11.4.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/object-path/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.1.tgz (Root Library)
- resolve-url-loader-3.1.1.tgz
- adjust-sourcemap-loader-2.0.0.tgz
- :x: **object-path-0.11.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/home-cloud/commit/0eb270221557ac4df481974af8dfb9ea1288bc9b">0eb270221557ac4df481974af8dfb9ea1288bc9b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package object-path before 0.11.6. A type confusion vulnerability can lead to a bypass of CVE-2020-15256 when the path components used in the path parameter are arrays. In particular, the condition currentPath === '__proto__' returns false if currentPath is ['__proto__']. This is because the === operator returns always false when the type of the operands is different.
<p>Publish Date: 2021-08-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23434>CVE-2021-23434</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23434">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23434</a></p>
<p>Release Date: 2021-08-27</p>
<p>Fix Resolution (object-path): 0.11.6</p>
<p>Direct dependency fix Resolution (react-scripts): 3.4.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23434 (High) detected in object-path-0.11.4.tgz - autoclosed - ## CVE-2021-23434 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>object-path-0.11.4.tgz</b></p></summary>
<p>Access deep object properties using a path</p>
<p>Library home page: <a href="https://registry.npmjs.org/object-path/-/object-path-0.11.4.tgz">https://registry.npmjs.org/object-path/-/object-path-0.11.4.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/object-path/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.1.tgz (Root Library)
- resolve-url-loader-3.1.1.tgz
- adjust-sourcemap-loader-2.0.0.tgz
- :x: **object-path-0.11.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/home-cloud/commit/0eb270221557ac4df481974af8dfb9ea1288bc9b">0eb270221557ac4df481974af8dfb9ea1288bc9b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package object-path before 0.11.6. A type confusion vulnerability can lead to a bypass of CVE-2020-15256 when the path components used in the path parameter are arrays. In particular, the condition currentPath === '__proto__' returns false if currentPath is ['__proto__']. This is because the === operator returns always false when the type of the operands is different.
<p>Publish Date: 2021-08-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23434>CVE-2021-23434</a></p>
</p>
</details>
<p></p>
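The vulnerability description above hinges on a JavaScript type confusion: a strict-equality guard against the string `'__proto__'` never matches when the path component arrives as an array. A standalone sketch of that behavior (it does not use the object-path library itself; `component` is an illustrative name):

```javascript
// Sketch of the type confusion behind CVE-2021-23434: a guard written as
//   currentPath === '__proto__'
// is bypassed when the path component is the array ['__proto__'].
const component = ['__proto__'];

// Strict equality between an array and a string is always false,
// so a string-only check lets the component through...
console.log(component === '__proto__'); // false

// ...but when the component is later used as a property key it is
// coerced to the string '__proto__', reaching the prototype after all.
console.log(String(component)); // '__proto__'
```

This is why the fix in 0.11.6 must normalize or reject non-string path components rather than compare them directly.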
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23434">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23434</a></p>
<p>Release Date: 2021-08-27</p>
<p>Fix Resolution (object-path): 0.11.6</p>
<p>Direct dependency fix Resolution (react-scripts): 3.4.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in object path tgz autoclosed cve high severity vulnerability vulnerable library object path tgz access deep object properties using a path library home page a href path to dependency file client package json path to vulnerable library client node modules object path package json dependency hierarchy react scripts tgz root library resolve url loader tgz adjust sourcemap loader tgz x object path tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package object path before a type confusion vulnerability can lead to a bypass of cve when the path components used in the path parameter are arrays in particular the condition currentpath proto returns false if currentpath is this is because the operator returns always false when the type of the operands is different publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution object path direct dependency fix resolution react scripts step up your open source security game with whitesource
| 0
|
65,595
| 19,588,310,928
|
IssuesEvent
|
2022-01-05 09:54:39
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Unable to add stickers
|
T-Defect S-Minor A-Scalar A-Stickers O-Uncommon
|
### Steps to reproduce
1. click on show stickers
2. click on add some now
3. try to add any sticker
### Outcome
#### What did you expect?
Sticker pack gets added
#### What happened instead?
got an error in the web console, no error in UI
scalar-384bfbefc6406a22e76a.bundle.js:1 Unhandled rejection Error: Missing room_id in request at https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:93370 at l (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:86094) at T._settlePromiseFromHandler (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:59604) at T._settlePromise (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:60404) at T._settlePromise0 (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:61103) at T._settlePromises (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:62430)From previous event: at T.j [as _captureStackTrace] (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:30887) at T._then (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:55190) at T.then (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:53576) at a.value (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:63:140460) at a.value (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:63:139814) at a.value (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:63:139590) at Object.o (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:46493) at s (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:44694) at Object.executeDispatchesInOrder (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:45423) at p (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:12996) at d (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:13122) at Array.forEach (<anonymous>) at e.exports (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:37:18164) at Object.processEventQueue (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:14514) at https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:69:114561 at Object.handleTopLevel [as _handleTopLevel] 
(https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:69:114583) at d (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:77:32559) at c.perform (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:20596) at Object.batchedUpdates (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:77:32041) at Object.batchedUpdates (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:18:35126) at dispatchEvent (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:77:33343)
### Operating system
macos
### Browser information
chrome 94.0.4606.81
### URL for webapp
app.element.io
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Unable to add stickers - ### Steps to reproduce
1. click on show stickers
2. click on add some now
3. try to add any sticker
### Outcome
#### What did you expect?
Sticker pack gets added
#### What happened instead?
got an error in the web console, no error in UI
scalar-384bfbefc6406a22e76a.bundle.js:1 Unhandled rejection Error: Missing room_id in request at https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:93370 at l (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:86094) at T._settlePromiseFromHandler (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:59604) at T._settlePromise (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:60404) at T._settlePromise0 (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:61103) at T._settlePromises (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:62430)From previous event: at T.j [as _captureStackTrace] (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:30887) at T._then (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:55190) at T.then (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:1:53576) at a.value (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:63:140460) at a.value (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:63:139814) at a.value (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:63:139590) at Object.o (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:46493) at s (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:44694) at Object.executeDispatchesInOrder (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:45423) at p (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:12996) at d (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:13122) at Array.forEach (<anonymous>) at e.exports (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:37:18164) at Object.processEventQueue (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:14514) at https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:69:114561 at Object.handleTopLevel [as _handleTopLevel] 
(https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:69:114583) at d (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:77:32559) at c.perform (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:23:20596) at Object.batchedUpdates (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:77:32041) at Object.batchedUpdates (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:18:35126) at dispatchEvent (https://scalar.vector.im/scalar-384bfbefc6406a22e76a.bundle.js:77:33343)
### Operating system
macos
### Browser information
chrome 94.0.4606.81
### URL for webapp
app.element.io
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
unable to add stickers steps to reproduce click on show stickers click on add some now try to add any sticker outcome what did you expect sticker pack gets added what happened instead got an error in the web console no error in ui scalar bundle js unhandled rejection error missing room id in request at at l at t settlepromisefromhandler at t settlepromise at t at t settlepromises previous event at t j at t then at t then at a value at a value at a value at object o at s at object executedispatchesinorder at p at d at array foreach at e exports at object processeventqueue at at object handletoplevel at d at c perform at object batchedupdates at object batchedupdates at dispatchevent operating system macos browser information chrome url for webapp app element io homeserver no response will you send logs no
| 1
|
63,490
| 17,689,273,875
|
IssuesEvent
|
2021-08-24 07:56:40
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Microcopy on community dashboard says "Drag a community to the left side" even when the tag panel is disabled
|
T-Defect X-Needs-Info S-Minor
|
### Description
It's somewhat difficult to drag the group to something that doesn't exist :(
### Steps to reproduce
- Disable the tag panel in your settings
- Click on the communities icon
- Read the help text
### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
- **Platform**: web (in-browser)
- **Browser**: Chrome 65
- **OS**: Windows 10
- **URL**: tang.ents.ca (0.14 - based on /app)
|
1.0
|
Microcopy on community dashboard says "Drag a community to the left side" even when the tag panel is disabled - ### Description
It's somewhat difficult to drag the group to something that doesn't exist :(
### Steps to reproduce
- Disable the tag panel in your settings
- Click on the communities icon
- Read the help text
### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
- **Platform**: web (in-browser)
- **Browser**: Chrome 65
- **OS**: Windows 10
- **URL**: tang.ents.ca (0.14 - based on /app)
|
defect
|
microcopy on community dashboard says drag a community to the left side even when the tag panel is disabled description it s somewhat difficult to drag the group to something that doesn t exist steps to reproduce disable the tag panel in your settings click on the communities icon read the help text version information platform web in browser browser chrome os windows url tang ents ca based on app
| 1
|
396,277
| 27,110,011,527
|
IssuesEvent
|
2023-02-15 14:44:14
|
ferdlestier/actions
|
https://api.github.com/repos/ferdlestier/actions
|
closed
|
The first Issue
|
documentation
|
## This issue is a guideline on how we'll work with Actions
Ideas:
1. Idea 1
2. Idea 2
3. Idea 3
## Deliverables
- [x] Reflection 1
- [x] Reflection 2
- [x] Reflection 3
DRI | Deliverable | Due Date
--- | --- | ---
User1|Propose changes|Jan 23
User2|Merge Pull Request|Feb 23
|
1.0
|
The first Issue - ## This issue is a guideline on how we'll work with Actions
Ideas:
1. Idea 1
2. Idea 2
3. Idea 3
## Deliverables
- [x] Reflection 1
- [x] Reflection 2
- [x] Reflection 3
DRI | Deliverable | Due Date
--- | --- | ---
User1|Propose changes|Jan 23
User2|Merge Pull Request|Feb 23
|
non_defect
|
the first issue this issue is a guideline on how we ll work with actions ideas idea idea idea deliverables reflection reflection reflection dri deliverable due date propose changes jan merge pull request feb
| 0
|
74,249
| 14,224,286,890
|
IssuesEvent
|
2020-11-17 19:26:33
|
nhcarrigan/BeccaBot
|
https://api.github.com/repos/nhcarrigan/BeccaBot
|
closed
|
[BUG] - Space Command not responding
|
help wanted 💻 aspect: code 🛠 goal: fix 🟧 priority: high 🧹 status: ticket work required
|
# Bug Report
The space command doesn't work.
## Describe the bug
When you use the `|space` command, Becca bot types for a bit but then doesn't give any reply.
<!--A clear and concise description of what the bug is.-->
## Expected behavior
<!--A clear and concise description of what you expected to happen.-->
## Screenshots
<!--If applicable, add screenshots to help explain your problem.-->
## Additional information
<!--Add any other context about the problem here.-->
|
1.0
|
[BUG] - Space Command not responding - # Bug Report
The space command doesn't work.
## Describe the bug
When you use the `|space` command, Becca bot types for a bit but then doesn't give any reply.
<!--A clear and concise description of what the bug is.-->
## Expected behavior
<!--A clear and concise description of what you expected to happen.-->
## Screenshots
<!--If applicable, add screenshots to help explain your problem.-->
## Additional information
<!--Add any other context about the problem here.-->
|
non_defect
|
space command not responding bug report the space command doesn t work describe the bug when u use the space becca bot types for a bit but doesn t then give any reply expected behavior screenshots additional information
| 0
|
310,927
| 26,754,660,765
|
IssuesEvent
|
2023-01-30 22:47:00
|
ChainSafe/lodestar
|
https://api.github.com/repos/ChainSafe/lodestar
|
closed
|
Update github workflows to avoid `::set-output`
|
prio-high scope-testing
|
We need to update our github workflows to not use `::set-output`. We have by May 31 2023 to do so.
See https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
|
1.0
|
Update github workflows to avoid `::set-output` - We need to update our github workflows to not use `::set-output`. We have by May 31 2023 to do so.
See https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
|
non_defect
|
update github workflows to avoid set output we need to update our github workflows to not use set output we have by may to do so see
| 0
|
381,531
| 11,276,553,676
|
IssuesEvent
|
2020-01-14 23:36:12
|
googleapis/google-api-java-client-services
|
https://api.github.com/repos/googleapis/google-api-java-client-services
|
closed
|
Synthesis failed for tagmanager
|
autosynth failure priority: p1 type: bug
|
Hello! Autosynth couldn't regenerate tagmanager. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Checking out files: 25% (16351/65361)
Checking out files: 26% (16994/65361)
Checking out files: 27% (17648/65361)
Checking out files: 28% (18302/65361)
Checking out files: 29% (18955/65361)
Checking out files: 30% (19609/65361)
Checking out files: 31% (20262/65361)
Checking out files: 32% (20916/65361)
Checking out files: 33% (21570/65361)
Checking out files: 34% (22223/65361)
Checking out files: 35% (22877/65361)
Checking out files: 36% (23530/65361)
Checking out files: 37% (24184/65361)
Checking out files: 38% (24838/65361)
Checking out files: 39% (25491/65361)
Checking out files: 40% (26145/65361)
Checking out files: 41% (26799/65361)
Checking out files: 42% (27452/65361)
Checking out files: 43% (28106/65361)
Checking out files: 44% (28759/65361)
Checking out files: 45% (29413/65361)
Checking out files: 46% (30067/65361)
Checking out files: 47% (30720/65361)
Checking out files: 48% (31374/65361)
Checking out files: 49% (32027/65361)
Checking out files: 50% (32681/65361)
Checking out files: 51% (33335/65361)
Checking out files: 52% (33988/65361)
Checking out files: 52% (34477/65361)
Checking out files: 53% (34642/65361)
Checking out files: 54% (35295/65361)
Checking out files: 55% (35949/65361)
Checking out files: 56% (36603/65361)
Checking out files: 57% (37256/65361)
Checking out files: 58% (37910/65361)
Checking out files: 59% (38563/65361)
Checking out files: 60% (39217/65361)
Checking out files: 61% (39871/65361)
Checking out files: 62% (40524/65361)
Checking out files: 63% (41178/65361)
Checking out files: 64% (41832/65361)
Checking out files: 65% (42485/65361)
Checking out files: 66% (43139/65361)
Checking out files: 67% (43792/65361)
Checking out files: 68% (44446/65361)
Checking out files: 69% (45100/65361)
Checking out files: 70% (45753/65361)
Checking out files: 71% (46407/65361)
Checking out files: 72% (47060/65361)
Checking out files: 73% (47714/65361)
Checking out files: 74% (48368/65361)
Checking out files: 75% (49021/65361)
Checking out files: 76% (49675/65361)
Checking out files: 77% (50328/65361)
Checking out files: 78% (50982/65361)
Checking out files: 79% (51636/65361)
Checking out files: 79% (51685/65361)
Checking out files: 80% (52289/65361)
Checking out files: 81% (52943/65361)
Checking out files: 82% (53597/65361)
Checking out files: 83% (54250/65361)
Checking out files: 84% (54904/65361)
Checking out files: 85% (55557/65361)
Checking out files: 86% (56211/65361)
Checking out files: 87% (56865/65361)
Checking out files: 88% (57518/65361)
Checking out files: 89% (58172/65361)
Checking out files: 90% (58825/65361)
Checking out files: 91% (59479/65361)
Checking out files: 92% (60133/65361)
Checking out files: 93% (60786/65361)
Checking out files: 94% (61440/65361)
Checking out files: 95% (62093/65361)
Checking out files: 96% (62747/65361)
Checking out files: 97% (63401/65361)
Checking out files: 98% (64054/65361)
Checking out files: 99% (64708/65361)
Checking out files: 100% (65361/65361)
Checking out files: 100% (65361/65361), done.
Switched to branch 'autosynth-tagmanager'
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 256, in <module>
main()
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 196, in main
last_synth_commit_hash = get_last_metadata_commit(args.metadata_path)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 149, in get_last_metadata_commit
text=True,
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 403, in run
with Popen(*popenargs, **kwargs) as process:
TypeError: __init__() got an unexpected keyword argument 'text'
```
Google internal developers can see the full log [here](https://sponge/40f694d4-43de-41f0-b993-f4694e4a45de).
|
1.0
|
Synthesis failed for tagmanager - Hello! Autosynth couldn't regenerate tagmanager. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Checking out files: 25% (16351/65361)
Checking out files: 26% (16994/65361)
Checking out files: 27% (17648/65361)
Checking out files: 28% (18302/65361)
Checking out files: 29% (18955/65361)
Checking out files: 30% (19609/65361)
Checking out files: 31% (20262/65361)
Checking out files: 32% (20916/65361)
Checking out files: 33% (21570/65361)
Checking out files: 34% (22223/65361)
Checking out files: 35% (22877/65361)
Checking out files: 36% (23530/65361)
Checking out files: 37% (24184/65361)
Checking out files: 38% (24838/65361)
Checking out files: 39% (25491/65361)
Checking out files: 40% (26145/65361)
Checking out files: 41% (26799/65361)
Checking out files: 42% (27452/65361)
Checking out files: 43% (28106/65361)
Checking out files: 44% (28759/65361)
Checking out files: 45% (29413/65361)
Checking out files: 46% (30067/65361)
Checking out files: 47% (30720/65361)
Checking out files: 48% (31374/65361)
Checking out files: 49% (32027/65361)
Checking out files: 50% (32681/65361)
Checking out files: 51% (33335/65361)
Checking out files: 52% (33988/65361)
Checking out files: 52% (34477/65361)
Checking out files: 53% (34642/65361)
Checking out files: 54% (35295/65361)
Checking out files: 55% (35949/65361)
Checking out files: 56% (36603/65361)
Checking out files: 57% (37256/65361)
Checking out files: 58% (37910/65361)
Checking out files: 59% (38563/65361)
Checking out files: 60% (39217/65361)
Checking out files: 61% (39871/65361)
Checking out files: 62% (40524/65361)
Checking out files: 63% (41178/65361)
Checking out files: 64% (41832/65361)
Checking out files: 65% (42485/65361)
Checking out files: 66% (43139/65361)
Checking out files: 67% (43792/65361)
Checking out files: 68% (44446/65361)
Checking out files: 69% (45100/65361)
Checking out files: 70% (45753/65361)
Checking out files: 71% (46407/65361)
Checking out files: 72% (47060/65361)
Checking out files: 73% (47714/65361)
Checking out files: 74% (48368/65361)
Checking out files: 75% (49021/65361)
Checking out files: 76% (49675/65361)
Checking out files: 77% (50328/65361)
Checking out files: 78% (50982/65361)
Checking out files: 79% (51636/65361)
Checking out files: 79% (51685/65361)
Checking out files: 80% (52289/65361)
Checking out files: 81% (52943/65361)
Checking out files: 82% (53597/65361)
Checking out files: 83% (54250/65361)
Checking out files: 84% (54904/65361)
Checking out files: 85% (55557/65361)
Checking out files: 86% (56211/65361)
Checking out files: 87% (56865/65361)
Checking out files: 88% (57518/65361)
Checking out files: 89% (58172/65361)
Checking out files: 90% (58825/65361)
Checking out files: 91% (59479/65361)
Checking out files: 92% (60133/65361)
Checking out files: 93% (60786/65361)
Checking out files: 94% (61440/65361)
Checking out files: 95% (62093/65361)
Checking out files: 96% (62747/65361)
Checking out files: 97% (63401/65361)
Checking out files: 98% (64054/65361)
Checking out files: 99% (64708/65361)
Checking out files: 100% (65361/65361)
Checking out files: 100% (65361/65361), done.
Switched to branch 'autosynth-tagmanager'
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 256, in <module>
main()
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 196, in main
last_synth_commit_hash = get_last_metadata_commit(args.metadata_path)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 149, in get_last_metadata_commit
text=True,
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 403, in run
with Popen(*popenargs, **kwargs) as process:
TypeError: __init__() got an unexpected keyword argument 'text'
```
Google internal developers can see the full log [here](https://sponge/40f694d4-43de-41f0-b993-f4694e4a45de).
|
non_defect
|
synthesis failed for tagmanager hello autosynth couldn t regenerate tagmanager broken heart here s the output from running synth py cloning into working repo checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files checking out files done switched to branch autosynth tagmanager traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth autosynth synth py line in main file tmpfs src git autosynth 
autosynth synth py line in main last synth commit hash get last metadata commit args metadata path file tmpfs src git autosynth autosynth synth py line in get last metadata commit text true file home kbuilder pyenv versions lib subprocess py line in run with popen popenargs kwargs as process typeerror init got an unexpected keyword argument text google internal developers can see the full log
| 0
|
67,801
| 21,161,538,021
|
IssuesEvent
|
2022-04-07 09:47:59
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
DSL.noCondition() isn't applied correctly to aggregate FILTER WHERE clause
|
T: Defect C: Functionality P: Medium E: All Editions
|
This:
```java
count().filterWhere(noCondition())
```
Should generate:
```sql
COUNT(*)
```
But it produces this, instead:
```java
COUNT(*) FILTER (WHERE TRUE)
```
|
1.0
|
DSL.noCondition() isn't applied correctly to aggregate FILTER WHERE clause - This:
```java
count().filterWhere(noCondition())
```
Should generate:
```sql
COUNT(*)
```
But it produces this, instead:
```java
COUNT(*) FILTER (WHERE TRUE)
```
|
defect
|
dsl nocondition isn t applied correctly to aggregate filter where clause this java count filterwhere nocondition should generate sql count but it produces this instead java count filter where true
| 1
|
80,958
| 30,628,960,817
|
IssuesEvent
|
2023-07-24 13:24:51
|
vector-im/element-meta
|
https://api.github.com/repos/vector-im/element-meta
|
closed
|
Preview of uploaded BMP image doesn't work (image opens fine)
|
O-Uncommon S-Major T-Defect A-Timeline A-Media A-File-Upload
|
### Steps to reproduce
1. Upload an image to Element-Web in the BMP image format.
2. Look at the timeline after the upload is done.
3. Open the image.
4. Go back to the timeline.
### Outcome
#### What did you expect?
To see a preview of the image in the timeline.
#### What happened instead?
There is just a grey-filled box. Opening the image works fine. I can see what I've uploaded. But the timeline doesn't show a preview/thumbnail of the image.
### Operating system
_No response_
### Browser information
Firefox 98
### URL for webapp
_No response_
### Application version
Element-Web 1.10.7
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Preview of uploaded BMP image doesn't work (image opens fine) - ### Steps to reproduce
1. Upload an image to Element-Web in the BMP image format.
2. Look at the timeline after the upload is done.
3. Open the image.
4. Go back to the timeline.
### Outcome
#### What did you expect?
To see a preview of the image in the timeline.
#### What happened instead?
There is just a grey-filled box. Opening the image works fine. I can see what I've uploaded. But the timeline doesn't show a preview/thumbnail of the image.
### Operating system
_No response_
### Browser information
Firefox 98
### URL for webapp
_No response_
### Application version
Element-Web 1.10.7
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
preview of uploaded bmp image doesn t work image opens fine steps to reproduce upload an image to element web in the bmp image format look at the timeline after the upload is done open the image go back to the timeline outcome what did you expect to see a preview of the image in the timeline what happened instead there is just a grey filled box opening the image works fine i can see what i ve uploaded but the timeline doesn t show a preview thumbnail of the image operating system no response browser information firefox url for webapp no response application version element web homeserver no response will you send logs no
| 1
|
74,005
| 24,900,998,125
|
IssuesEvent
|
2022-10-28 20:53:04
|
damen-dotcms/issue-test
|
https://api.github.com/repos/damen-dotcms/issue-test
|
opened
|
adslfalsdjkf
|
Type - Defect
|
[](https://mrkr.io/s/635c412f358f583487f656e9/0)
---
**Reported by:** Marker Test (damen.gilland+markertest@dotcms.com) - [Contact via Marker.io](https://app.marker.io/i/635c412f358f583487f656ec_0748406a30cece39?advanced=1&comments=1)
**Source URL:** [https://github.com/users/damen-dotcms/projects/1/views/1](https://github.com/users/damen-dotcms/projects/1/views/1)
**Issue details:** [Open in Marker.io](https://app.marker.io/i/635c412f358f583487f656ec_0748406a30cece39?advanced=1)
**Console:** [📃 0 logs](https://app.marker.io/i/635c412f358f583487f656ec_0748406a30cece39?advanced=1&activePane=console)
<table><tr><td><strong>Device type</strong></td><td>desktop</td></tr><tr><td><strong>Browser</strong></td><td>Firefox 106.0</td></tr><tr><td><strong>Screen Size</strong></td><td>2056 x 1329</td></tr><tr><td><strong>OS</strong></td><td>OS X 10.15</td></tr><tr><td><strong>Viewport Size</strong></td><td>2056 x 1172</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@2x</td></tr></table>
|
1.0
|
adslfalsdjkf - [](https://mrkr.io/s/635c412f358f583487f656e9/0)
---
**Reported by:** Marker Test (damen.gilland+markertest@dotcms.com) - [Contact via Marker.io](https://app.marker.io/i/635c412f358f583487f656ec_0748406a30cece39?advanced=1&comments=1)
**Source URL:** [https://github.com/users/damen-dotcms/projects/1/views/1](https://github.com/users/damen-dotcms/projects/1/views/1)
**Issue details:** [Open in Marker.io](https://app.marker.io/i/635c412f358f583487f656ec_0748406a30cece39?advanced=1)
**Console:** [📃 0 logs](https://app.marker.io/i/635c412f358f583487f656ec_0748406a30cece39?advanced=1&activePane=console)
<table><tr><td><strong>Device type</strong></td><td>desktop</td></tr><tr><td><strong>Browser</strong></td><td>Firefox 106.0</td></tr><tr><td><strong>Screen Size</strong></td><td>2056 x 1329</td></tr><tr><td><strong>OS</strong></td><td>OS X 10.15</td></tr><tr><td><strong>Viewport Size</strong></td><td>2056 x 1172</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@2x</td></tr></table>
|
defect
|
adslfalsdjkf reported by marker test damen gilland markertest dotcms com source url issue details console device type desktop browser firefox screen size x os os x viewport size x zoom level pixel ratio
| 1
|
78,591
| 27,614,605,114
|
IssuesEvent
|
2023-03-09 18:19:08
|
dotCMS/core
|
https://api.github.com/repos/dotCMS/core
|
closed
|
Unable to invalidate Script API response cache.
|
Type : Defect OKR : Customer Support
|
### Describe the bug
Unable to invalidate Script API response cache.
Related Ticket: https://dotcms.zendesk.com/agent/tickets/109335.
I implemented the cache through this documentation: https://www.dotcms.com/docs/latest/scripting-api#ResponseCaching.
### To Reproduce
Steps to reproduce the behavior:
1. Go to the demo site.
2. Create a new content type test with a title field.
3. Create a new script API GET endpoint.
4. Deploy the API with the file I attached.
5. Try to invalidate the cache with https://www.dotcms.com/docs/latest/tag-based-caching-block-cache#dotCacheInvalidate.
### Expected behavior
The cache should be invalidated.
### Additional context
When a script API response is cached the key becomes` anonymous_1_/api/vtl/test`
[get.zip](https://github.com/dotCMS/core/files/10302147/get.zip)
|
1.0
|
Unable to invalidate Script API response cache. - ### Describe the bug
Unable to invalidate Script API response cache.
Related Ticket: https://dotcms.zendesk.com/agent/tickets/109335.
I implemented the cache through this documentation: https://www.dotcms.com/docs/latest/scripting-api#ResponseCaching.
### To Reproduce
Steps to reproduce the behavior:
1. Go to the demo site.
2. Create a new content type test with a title field.
3. Create a new script API GET endpoint.
4. Deploy the API with the file I attached.
5. Try to invalidate the cache with https://www.dotcms.com/docs/latest/tag-based-caching-block-cache#dotCacheInvalidate.
### Expected behavior
The cache should be invalidated.
### Additional context
When a script API response is cached the key becomes` anonymous_1_/api/vtl/test`
[get.zip](https://github.com/dotCMS/core/files/10302147/get.zip)
|
defect
|
unable to invalidate script api response cache describe the bug unable to invalidate script api response cache related ticket i implemented the cache through this documentation to reproduce steps to reproduce the behavior go to the demo site create a new content type test with a title field create a new script api get endpoint deploy the api with the file i attached try to invalidate the cache with expected behavior the cache should be invalidated additional context when a script api response is cached the key becomes anonymous api vtl test
| 1
|
290,958
| 21,912,129,159
|
IssuesEvent
|
2022-05-21 08:01:33
|
Yamato-Security/hayabusa
|
https://api.github.com/repos/Yamato-Security/hayabusa
|
closed
|
一部ルールの読み込みができない
|
documentation Priority:Low
|
**Describe the issue**
WIndows環境で一部ルールがアンチウィルスに検出され、動作することができない
**Step to Reproduce**
Steps to reproduce the behavior:
1. execute `hayabusa.exe -d ./hayabusa-sample-evtx/`
2. outputed `Errors were generated. Please check ./logs/errorlog-....log for details.`
3. See errorlog
**Expected behavior**
ドキュメントでそのようなエラー文が出たらアンチウィルスの除外に加えるようにドキュメントに記載したほうがよい
**Environment (please complete the following information):**
- OS: Windows10
- hayabusa version v1.2.2
|
1.0
|
一部ルールの読み込みができない - **Describe the issue**
WIndows環境で一部ルールがアンチウィルスに検出され、動作することができない
**Step to Reproduce**
Steps to reproduce the behavior:
1. execute `hayabusa.exe -d ./hayabusa-sample-evtx/`
2. outputed `Errors were generated. Please check ./logs/errorlog-....log for details.`
3. See errorlog
**Expected behavior**
ドキュメントでそのようなエラー文が出たらアンチウィルスの除外に加えるようにドキュメントに記載したほうがよい
**Environment (please complete the following information):**
- OS: Windows10
- hayabusa version v1.2.2
|
non_defect
|
一部ルールの読み込みができない describe the issue windows環境で一部ルールがアンチウィルスに検出され、動作することができない step to reproduce steps to reproduce the behavior execute hayabusa exe d hayabusa sample evtx outputed errors were generated please check logs errorlog log for details see errorlog expected behavior ドキュメントでそのようなエラー文が出たらアンチウィルスの除外に加えるようにドキュメントに記載したほうがよい environment please complete the following information os hayabusa version
| 0
|
129,179
| 10,566,066,471
|
IssuesEvent
|
2019-10-05 16:08:08
|
moodlebox/moodle-tool_moodlebox
|
https://api.github.com/repos/moodlebox/moodle-tool_moodlebox
|
closed
|
Password validation prevent uppercase characters to be used
|
Status: tests passed Type: bug
|
This is a regression caused by cd3165e9157263be710220ea667920f5def7f5dd, see #22.
For context, see: https://discuss.moodlebox.net/d/108-moodlebox-mit-apple-ger-ten
|
1.0
|
Password validation prevent uppercase characters to be used - This is a regression caused by cd3165e9157263be710220ea667920f5def7f5dd, see #22.
For context, see: https://discuss.moodlebox.net/d/108-moodlebox-mit-apple-ger-ten
|
non_defect
|
password validation prevent uppercase characters to be used this is a regression caused by see for context see
| 0
|
2,588
| 2,698,421,485
|
IssuesEvent
|
2015-04-03 06:29:37
|
anticoders/gagarin
|
https://api.github.com/repos/anticoders/gagarin
|
closed
|
False positives on "Gagarin is not there"
|
bug documentation
|
Fro time to time the tests fail on CircleCI due to the following reason:
https://circleci.com/gh/anticoders/gagarin/274


which is obviously false positive, and it only means that the Gagarin callout was not received by the testing framework. But the reason this is happening is completely unknown.
Should anyone have some insight, please report it here.
|
1.0
|
False positives on "Gagarin is not there" - Fro time to time the tests fail on CircleCI due to the following reason:
https://circleci.com/gh/anticoders/gagarin/274


which is obviously false positive, and it only means that the Gagarin callout was not received by the testing framework. But the reason this is happening is completely unknown.
Should anyone have some insight, please report it here.
|
non_defect
|
false positives on gagarin is not there fro time to time the tests fail on circleci due to the following reason which is obviously false positive and it only means that the gagarin callout was not received by the testing framework but the reason this is happening is completely unknown should anyone have some insight please report it here
| 0
|
75,278
| 25,742,865,942
|
IssuesEvent
|
2022-12-08 07:41:08
|
cython/cython
|
https://api.github.com/repos/cython/cython
|
closed
|
[BUG] int annotation now sets the type to exact Python int
|
defect Python Semantics
|
**Describe the bug**
Since 31d40c8c62acef9509675155fe5b5bb8e48dba5a an annotation of `int` now corresponds to an exact Python `int`. This matters on Python 2 because it rejects `long`. Was this intended?
**To Reproduce**
Code to reproduce the behaviour:
```cython
def f(x: int):
return x+1
```
then from python 2
```python
import testmod
testmod.f(1) # works
testmod.f(long(1)) # fails
```
**Environment (please complete the following information):**
- Python version: 2.7
- Cython version: current master branch
**Additional context**
Obviously this is the long-term plan, but I'm not sure we meant to do it now?
Also, whether or not it's supported we probably need some test coverage
|
1.0
|
[BUG] int annotation now sets the type to exact Python int - **Describe the bug**
Since 31d40c8c62acef9509675155fe5b5bb8e48dba5a an annotation of `int` now corresponds to an exact Python `int`. This matters on Python 2 because it rejects `long`. Was this intended?
**To Reproduce**
Code to reproduce the behaviour:
```cython
def f(x: int):
return x+1
```
then from python 2
```python
import testmod
testmod.f(1) # works
testmod.f(long(1)) # fails
```
**Environment (please complete the following information):**
- Python version: 2.7
- Cython version: current master branch
**Additional context**
Obviously this is the long-term plan, but I'm not sure we meant to do it now?
Also, whether or not it's supported we probably need some test coverage
|
defect
|
int annotation now sets the type to exact python int describe the bug since an annotation of int now corresponds to an exact python int this matters on python because it rejects long was this intended to reproduce code to reproduce the behaviour cython def f x int return x then from python python import testmod testmod f works testmod f long fails environment please complete the following information python version cython version current master branch additional context obviously this is the long term plan but i m not sure we meant to do it now also whether or not it s supported we probably need some test coverage
| 1
|
64,819
| 18,923,619,587
|
IssuesEvent
|
2021-11-17 06:43:49
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
opened
|
BUG: test failure for win+x86_32 builds in wheel builder repo
|
defect
|
### Describe your issue.
a
### Reproducing Code Example
```python
a
```
### Error message
```shell
a
```
### SciPy/NumPy/Python version information
a
|
1.0
|
BUG: test failure for win+x86_32 builds in wheel builder repo - ### Describe your issue.
a
### Reproducing Code Example
```python
a
```
### Error message
```shell
a
```
### SciPy/NumPy/Python version information
a
|
defect
|
bug test failure for win builds in wheel builder repo describe your issue a reproducing code example python a error message shell a scipy numpy python version information a
| 1
|
132,914
| 12,521,692,646
|
IssuesEvent
|
2020-06-03 17:50:16
|
lcs1001/gestion-aulas-informatica
|
https://api.github.com/repos/lcs1001/gestion-aulas-informatica
|
closed
|
Modificar el documento de Especificación de Requisitos Software
|
documentation
|
Modificar la Especificación de Requisitos Software tras los cambios planteados en la reunión:
- Unificación de las ventanas de mantenimiento de centros y departamentos y de mantenimiento de responsables en una única ventana "Mantenimiento de Centros y Departamentos", ya que sólo puede haber un único responsable.
|
1.0
|
Modificar el documento de Especificación de Requisitos Software - Modificar la Especificación de Requisitos Software tras los cambios planteados en la reunión:
- Unificación de las ventanas de mantenimiento de centros y departamentos y de mantenimiento de responsables en una única ventana "Mantenimiento de Centros y Departamentos", ya que sólo puede haber un único responsable.
|
non_defect
|
modificar el documento de especificación de requisitos software modificar la especificación de requisitos software tras los cambios planteados en la reunión unificación de las ventanas de mantenimiento de centros y departamentos y de mantenimiento de responsables en una única ventana mantenimiento de centros y departamentos ya que sólo puede haber un único responsable
| 0
|
363,827
| 25,468,383,611
|
IssuesEvent
|
2022-11-25 07:49:26
|
appsmithorg/appsmith-docs
|
https://api.github.com/repos/appsmithorg/appsmith-docs
|
closed
|
[Docs]: information about hash in the URL context object
|
Documentation A-Force Ready for Doc Team User Education Pod
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Documentation Link
https://docs.appsmith.com/reference/appsmith-framework/context-object#url
### Discord/slack/intercom Link
_No response_
### GitBook Insights searchTerms or link
na
### Google Analytics - search terms or a link
na
### Describe the problem
Add info about what the hash string in the URL context object and also the different parts of the url
https://app.intercom.com/a/inbox/y10e7138/inbox/shared/all/conversation/164629100149082#part_id=initial-part-164629100149082-1380545718
https://app.intercom.com/a/apps/y10e7138/inbox/inbox/conversation/164629100116059#part_id=comment-164629100116059-15009745516
### Describe the improvement
na
### Why do you think this change is needed?
na
|
1.0
|
[Docs]: information about hash in the URL context object - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Documentation Link
https://docs.appsmith.com/reference/appsmith-framework/context-object#url
### Discord/slack/intercom Link
_No response_
### GitBook Insights searchTerms or link
na
### Google Analytics - search terms or a link
na
### Describe the problem
Add info about what the hash string in the URL context object and also the different parts of the url
https://app.intercom.com/a/inbox/y10e7138/inbox/shared/all/conversation/164629100149082#part_id=initial-part-164629100149082-1380545718
https://app.intercom.com/a/apps/y10e7138/inbox/inbox/conversation/164629100116059#part_id=comment-164629100116059-15009745516
### Describe the improvement
na
### Why do you think this change is needed?
na
|
non_defect
|
information about hash in the url context object is there an existing issue for this i have searched the existing issues documentation link discord slack intercom link no response gitbook insights searchterms or link na google analytics search terms or a link na describe the problem add info about what the hash string in the url context object and also the different parts of the url describe the improvement na why do you think this change is needed na
| 0
|
17,030
| 2,966,764,601
|
IssuesEvent
|
2015-07-12 07:22:51
|
sporritt/jsPlumb
|
https://api.github.com/repos/sporritt/jsPlumb
|
reopened
|
jsPlumbInstance.draggable('ObjId',{ containment:"dragDrop"}) don't calc offsets
|
defect
|
Hello Together,
I found a problem with jsPlumbInstance.draggable('ObjId',{ containment:"dragDrop"}). This don't calculate the offset. If the Div "dragDrop" have margin, float or something else.
Here a Live-Demo:
http://jsfiddle.net/cmuzp89u/
Here you see, that it work, the windowX Elements stops, but not in the blue Box. The stop box is displaced.
Best wishes
Björn
|
1.0
|
jsPlumbInstance.draggable('ObjId',{ containment:"dragDrop"}) don't calc offsets - Hello Together,
I found a problem with jsPlumbInstance.draggable('ObjId',{ containment:"dragDrop"}). This don't calculate the offset. If the Div "dragDrop" have margin, float or something else.
Here a Live-Demo:
http://jsfiddle.net/cmuzp89u/
Here you see, that it work, the windowX Elements stops, but not in the blue Box. The stop box is displaced.
Best wishes
Björn
|
defect
|
jsplumbinstance draggable objid containment dragdrop don t calc offsets hello together i found a problem with jsplumbinstance draggable objid containment dragdrop this don t calculate the offset if the div dragdrop have margin float or something else here a live demo here you see that it work the windowx elements stops but not in the blue box the stop box is displaced best wishes björn
| 1
|
728,919
| 25,099,833,153
|
IssuesEvent
|
2022-11-08 12:54:15
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
bacologia.wordpress.com - see bug description
|
browser-firefox priority-critical os-linux engine-gecko
|
<!-- @browser: Firefox 106.0.4 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; rv:106.0) Gecko/20100101 Firefox/106.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/113575 -->
**URL**: https://bacologia.wordpress.com/2022/11/02/szybkie-wiesci-64/
**Browser / Version**: Firefox 106.0.4
**Operating System**: Linux
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: The video or audio does not play
**Steps to Reproduce**:
- Very high RAM usage
- All videos are loading at the same time (that shouldn't happen) - auto play is turned off
Check here: https://bacologia.wordpress.com/2022/11/02/szybkie-wiesci-64/
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
bacologia.wordpress.com - see bug description - <!-- @browser: Firefox 106.0.4 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; rv:106.0) Gecko/20100101 Firefox/106.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/113575 -->
**URL**: https://bacologia.wordpress.com/2022/11/02/szybkie-wiesci-64/
**Browser / Version**: Firefox 106.0.4
**Operating System**: Linux
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: The video or audio does not play
**Steps to Reproduce**:
- Very high RAM usage
- All videos are loading at the same time (that shouldn't happen) - auto play is turned off
Check here: https://bacologia.wordpress.com/2022/11/02/szybkie-wiesci-64/
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
bacologia wordpress com see bug description url browser version firefox operating system linux tested another browser yes other problem type something else description the video or audio does not play steps to reproduce very high ram usage all videos are loading at the same time that shouldn t happen auto play is turned off check here browser configuration none from with ❤️
| 0
|
51,795
| 12,809,180,154
|
IssuesEvent
|
2020-07-03 15:05:18
|
pwa-builder/PWABuilder
|
https://api.github.com/repos/pwa-builder/PWABuilder
|
closed
|
[Keyboard Navigation - PWA Builder - PWA Builder Inking] : Heading hierarchy is not correct, h1 used after h3.
|
A11yCT A11yMAS A11yMediumImpact Accessibility HCL- PWABuilder MAS2.4.6 Severity3 bug :bug: fixed needs triage :mag:
|
**User Experience:**
If headings are not clear and descriptive, it would be difficult for screen reader dependent users to find the information they seek, and cannot understand the relationships between different parts of the content. Also if labels are not Descriptive users can not identify specific components within the content.
**Test Environment:**
OS: Windows 10 build 19608.1006
Browser: Edge - Anaheim - Version 85.0.545.0 (Official build) dev (64-bit)
URL: https://pwabuilderfeatures.z22.web.core.windows.net/component/inking
**Repro Steps:**
1. Open URL: https://pwabuilderfeatures.z22.web.core.windows.net/component/inking in Edge Anaheim dev browser.
2. Navigate to pwa-inking Heading.
3. Check for heading structure for the headings from the code.
4. Observe the issue.
**Actual Result:**
Heading level not correct, H3 used after H1 heading level. And H2 used after H3, i.e. randomly structured.
**Expected Result:**
Heading level should be in a correct structure, i.e. in a ascending or descending order
**MAS Reference:**
https://microsoft.sharepoint.com/:w:/r/teams/msenable/_layouts/15/WopiFrame.aspx?sourcedoc={0cf80872-800d-4643-90cf-0c1c3d3e6260}

|
1.0
|
[Keyboard Navigation - PWA Builder - PWA Builder Inking] : Heading hierarchy is not correct, h1 used after h3. - **User Experience:**
If headings are not clear and descriptive, it would be difficult for screen reader dependent users to find the information they seek, and cannot understand the relationships between different parts of the content. Also if labels are not Descriptive users can not identify specific components within the content.
**Test Environment:**
OS: Windows 10 build 19608.1006
Browser: Edge - Anaheim - Version 85.0.545.0 (Official build) dev (64-bit)
URL: https://pwabuilderfeatures.z22.web.core.windows.net/component/inking
**Repro Steps:**
1. Open URL: https://pwabuilderfeatures.z22.web.core.windows.net/component/inking in Edge Anaheim dev browser.
2. Navigate to pwa-inking Heading.
3. Check for heading structure for the headings from the code.
4. Observe the issue.
**Actual Result:**
Heading level not correct, H3 used after H1 heading level. And H2 used after H3, i.e. randomly structured.
**Expected Result:**
Heading level should be in a correct structure, i.e. in a ascending or descending order
**MAS Reference:**
https://microsoft.sharepoint.com/:w:/r/teams/msenable/_layouts/15/WopiFrame.aspx?sourcedoc={0cf80872-800d-4643-90cf-0c1c3d3e6260}

|
non_defect
|
heading hierarchy is not correct used after user experience if headings are not clear and descriptive it would be difficult for screen reader dependent users to find the information they seek and cannot understand the relationships between different parts of the content also if labels are not descriptive users can not identify specific components within the content test environment os windows build browser edge anaheim version official build dev bit url repro steps open url in edge anaheim dev browser navigate to pwa inking heading check for heading structure for the headings from the code observe the issue actual result heading level not correct used after heading level and used after i e randomly structured expected result heading level should be in a correct structure i e in a ascending or descending order mas reference
| 0
|
249,796
| 21,192,097,169
|
IssuesEvent
|
2022-04-08 18:40:43
|
mattimaier/bnote
|
https://api.github.com/repos/mattimaier/bnote
|
closed
|
4.0.0. alpha 3: Falsche Anzeige bei vergangenen Probenphasen
|
bug to test
|
Beim Menüpunkt "Probenphasen" werden zunächst vergangene und zukünftige Phasen in der Liste angezeigt. Beim Klick auf "Vergangene Probenphasen" werden nur die zukünftigen angezeigt.
Korrekt wäre m.E: Bei der Auswahl des Menüpunktes werden nur aktuelle und zukünftige Phasen angezeigt, beim Klick auf "Vergangene Probenphasen" nur die abgeschlossenen.
|
1.0
|
4.0.0. alpha 3: Falsche Anzeige bei vergangenen Probenphasen - Beim Menüpunkt "Probenphasen" werden zunächst vergangene und zukünftige Phasen in der Liste angezeigt. Beim Klick auf "Vergangene Probenphasen" werden nur die zukünftigen angezeigt.
Korrekt wäre m.E: Bei der Auswahl des Menüpunktes werden nur aktuelle und zukünftige Phasen angezeigt, beim Klick auf "Vergangene Probenphasen" nur die abgeschlossenen.
|
non_defect
|
alpha falsche anzeige bei vergangenen probenphasen beim menüpunkt probenphasen werden zunächst vergangene und zukünftige phasen in der liste angezeigt beim klick auf vergangene probenphasen werden nur die zukünftigen angezeigt korrekt wäre m e bei der auswahl des menüpunktes werden nur aktuelle und zukünftige phasen angezeigt beim klick auf vergangene probenphasen nur die abgeschlossenen
| 0
|
10,764
| 2,622,183,308
|
IssuesEvent
|
2015-03-04 00:19:46
|
byzhang/leveldb
|
https://api.github.com/repos/byzhang/leveldb
|
opened
|
Not worwing with make install and autoreconf
|
auto-migrated Priority-Medium Type-Defect
|
```
PCLinuxOS 32bit
+ cd leveldb-1.15.0
+ '[' 1 -eq 1 ']'
+ make install DESTDIR=/home/gg/src/rpm/BUILDROOT/leveldb-1.15.0-1.i386
make: *** No rule to make target 'install'. Stop.
error: Bad exit status from /home/gg/src/tmp/rpm-tmp.9GF1rG (%install)
Ok I can try use autoreconf, but not working ...
autoreconf -ivf
./configure
make %{?_smp_mflags}
+ cd leveldb-1.15.0
+ '[' 1 -eq 1 ']'
+ '[' 1 -eq 1 ']'
+ autoreconf -ivf
autoreconf: 'configure.ac' or 'configure.in' is required
error: Bad exit status from /home/gg/src/tmp/rpm-tmp.Oa2vFm (%build)
I see OpenSuse use debian path, I will try
```
Original issue reported on code.google.com by `swojskic...@wp.pl` on 15 Aug 2014 at 10:16
|
1.0
|
Not worwing with make install and autoreconf - ```
PCLinuxOS 32bit
+ cd leveldb-1.15.0
+ '[' 1 -eq 1 ']'
+ make install DESTDIR=/home/gg/src/rpm/BUILDROOT/leveldb-1.15.0-1.i386
make: *** No rule to make target 'install'. Stop.
error: Bad exit status from /home/gg/src/tmp/rpm-tmp.9GF1rG (%install)
Ok I can try use autoreconf, but not working ...
autoreconf -ivf
./configure
make %{?_smp_mflags}
+ cd leveldb-1.15.0
+ '[' 1 -eq 1 ']'
+ '[' 1 -eq 1 ']'
+ autoreconf -ivf
autoreconf: 'configure.ac' or 'configure.in' is required
error: Bad exit status from /home/gg/src/tmp/rpm-tmp.Oa2vFm (%build)
I see OpenSuse use debian path, I will try
```
Original issue reported on code.google.com by `swojskic...@wp.pl` on 15 Aug 2014 at 10:16
|
defect
|
not worwing with make install and autoreconf pclinuxos cd leveldb make install destdir home gg src rpm buildroot leveldb make no rule to make target install stop error bad exit status from home gg src tmp rpm tmp install ok i can try use autoreconf but not working autoreconf ivf configure make smp mflags cd leveldb autoreconf ivf autoreconf configure ac or configure in is required error bad exit status from home gg src tmp rpm tmp build i see opensuse use debian path i will try original issue reported on code google com by swojskic wp pl on aug at
| 1
|
36,370
| 7,919,417,260
|
IssuesEvent
|
2018-07-04 16:44:01
|
ONSdigital/sdc-global-design-patterns
|
https://api.github.com/repos/ONSdigital/sdc-global-design-patterns
|
closed
|
eQ Title bar visually too heavy
|
visual defect
|
### Expected behaviour
Font weight is visually the same as on other browsers when inverted (white/light on black/dark colours).
### Actual behaviour
Font weight is visually too heavy
### Steps to reproduce the behaviour
View header on Firefox and compare to Chrome/Safari
### Technical information
Requires the application of a browser specific fix:
`-webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale;`
when a light colour is used on a dark background
#### Browser
Firefox 59
#### Operating System
High Sierra
### Screenshot

|
1.0
|
eQ Title bar visually too heavy - ### Expected behaviour
Font weight is visually the same as on other browsers when inverted (white/light on black/dark colours).
### Actual behaviour
Font weight is visually too heavy
### Steps to reproduce the behaviour
View header on Firefox and compare to Chrome/Safari
### Technical information
Requires the application of a browser specific fix:
`-webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale;`
when a light colour is used on a dark background
#### Browser
Firefox 59
#### Operating System
High Sierra
### Screenshot

|
defect
|
eq title bar visually too heavy expected behaviour font weight is visually the same as on other browsers when inverted white light on black dark colours actual behaviour font weight is visually too heavy steps to reproduce the behaviour view header on firefox and compare to chrome safari technical information requires the application of a browser specific fix webkit font smoothing antialiased moz osx font smoothing grayscale when a light colour is used on a dark background browser firefox operating system high sierra screenshot
| 1