Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
15,938 | 2,869,100,656 | IssuesEvent | 2015-06-05 23:20:29 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | pkg/watcher/test/utils.dart closes sandbox before watches. | Area-Pkg Pkg-Watcher Priority-Unassigned Triaged Type-Defect | On Windows, removing the directory before closing the watcher can lead to unwanted behavior.
Sadly, I was not able to identify how to change the order of execution here. | 1.0 | pkg/watcher/test/utils.dart closes sandbox before watches. | defect | pkg watcher test utils dart closes sandbox before watches | 1 |
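The ordering problem in this report is the classic teardown-order hazard: cleanups must run in reverse (LIFO) order of setup, so the watcher is closed before the sandbox directory it watches is deleted. A minimal Python sketch of that pattern (a generic illustration, not the Dart test harness; the callback names are hypothetical stand-ins):

```python
from contextlib import ExitStack

def run_with_teardown(log):
    """Register cleanups LIFO so resources close in reverse setup order."""
    with ExitStack() as stack:
        # Setup order: sandbox first, then the watcher on top of it.
        stack.callback(log.append, "delete_sandbox")  # registered first -> runs last
        stack.callback(log.append, "close_watcher")   # registered last  -> runs first
        log.append("test_body")
    return log
```

`ExitStack` unwinds its callbacks last-registered-first, so registering the sandbox deletion before the watcher close guarantees the watcher is shut down while its directory still exists.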
69,350 | 22,320,666,040 | IssuesEvent | 2022-06-14 05:58:07 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | "Join meeting" button in pre-meeting screen not clickable. | T-Defect X-Cannot-Reproduce X-Regression S-Major A-Electron A-Jitsi O-Uncommon | ### Steps to reproduce
Element-desktop 1.9.8 and Element-nightly 2022011301 under Debian 11.2/stable
with KDE/Plasma, with a local private matrix-synapse 1.49.0-1~bpo11+2 server,
also under Debian 11.2 with backports.
Some of the symptoms are the same as described in
https://github.com/vector-im/element-web/issues/18506
like the disconnections every 30 seconds.
Description
On startup, the main window pops up and displays the text messages just fine.
When I click on the jitsi icon the video field displays "Jitsi Video
Conference" and a button to "Join Conference".
When I do that, the video screen shows the camera input and at the bottom
half of it the pre-meeting screen has a white field with my user name and
a blue button to "Join meeting". Clicking on this button has no effect. I
can never join the meeting.
Shortly after (30 seconds), a pop-up says "You have been disconnected, you
may want to check the network connection. Reconnecting in 30 seconds." and
then it counts down. Then it disconnects again and again. This is the
same symptom described in issue 18506.
This goes on until I exit the program or I disable the pre-meeting screen
from the settings gear on the pre-meeting screen. If I do the latter I
see a grey screen with my camera input in the corner greyed out until the
next disconnection. At that time, the corner mini-screen goes black (with
the AV circle on it and the microphone and camera icons slashed out).
Clicking on the microphone or camera icons that show up when you move the
mouse over the video field doesn't enable them. The only thing that works
is the red hang-up button, which brings me back to the "Jitsi Video
Conference" field with the green "Join Conference" button.
Reconnecting now goes directly to the grey screen (no pre-meeting screen, as
that has been disabled) until the next disconnection. The first time (before
the first disconnection), the field at the top (when the mouse is hovering over
the video field) had the name of the room; subsequent times it had "Jitsi Heb...".
At least I can access the config menu (...) and re-enable the pre-meeting
screen. When that is done, it's back to the unclickable blue "Join meeting"
button. However, now the microphone and camera icons in the pre-meeting
screen are usable. Clicking them re-enables the video in the top half of the
video screen.
So, it is totally impossible to make video calls from this system and the
sequences above are totally repeatable. What can I test or where can I look
to find more details to track down this issue? Some of this behavior has
been present for a few releases, before that Element-desktop worked
correctly on this computer, and most recently it did so a few days ago, until
apparently the latest upgrade. I do not remember upgrading element-desktop
manually but the pre-meeting screen is new, so I suspect this latest behavior
has been caused by the latest release.
On another computer (a laptop with a built-in camera) with the same Debian
stable OS, I see the same issue with the pre-meeting screen but once that is
disabled, the video part of the screen turns grey and no amount of hovering
or clicking can bring up the red hang-up button, so it is impossible to do
anything but exit the program.
I have seen on a few occasions a very quickly disappearing pop-up message at
the beginning, when I just started the program, but it disappeared in less
than a second, so I couldn't read what it said.
### Outcome
#### What did you expect?
That I could join the specified room via video call like I had done before.
#### What happened instead?
The program is totally unusable.
### Operating system
Debian 11.2/stable with KDE/Plasma, with a local private matrix-synapse 1.49.0-1~bpo11+2 server, also under Debian 11.2 with backports.
### Application version
Element-desktop 1.9.8 and Element-nightly 2022011301
### How did you install the app?
deb [signed-by=/usr/share/keyrings/riot-im-archive-keyring.gpg] https://packages.riot.im/debian/ bullseye main
### Homeserver
local private matrix-synapse 1.49.0-1~bpo11+2 server, also under Debian 11.2 with backports.
### Will you send logs?
Yes | 1.0 | "Join meeting" button in pre-meeting screen not clickable. | defect | join meeting button in pre meeting screen not clickable | 1 |
60,116 | 17,023,339,368 | IssuesEvent | 2021-07-03 01:30:48 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Zipped (.zip) GPXs don't upload | Component: website Priority: major Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 12.25am, Sunday, 4th January 2009]**
Zipped versions of
http://www.openstreetmap.org/user/Richard/traces/287243
http://www.openstreetmap.org/user/Richard/traces/287244
failed to parse, generating a failure e-mail with the error
Generic XML parse error
XML parser at line 1 column 2
The gzipped versions of the same GPX have uploaded fine (as per above).
I suppose I could alternatively file a trac ticket at apple.com to ask them to change their contextual-menu compression option to produce .gz rather than .zip... | 1.0 | Zipped (.zip) GPXs don't upload | defect | zipped zip gpxs don t upload | 1 |
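An upload handler that accepts both compression formats would sidestep the problem. A hedged Python sketch (stdlib only; the function name and the single-entry-archive assumption are mine, not from the OSM codebase):

```python
import gzip
import io
import zipfile

def read_gpx_bytes(data: bytes) -> bytes:
    """Return the raw GPX payload from gzip- or zip-compressed upload data."""
    if data[:2] == b"\x1f\x8b":            # gzip magic number
        return gzip.decompress(data)
    if data[:2] == b"PK":                  # zip magic number
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            # Assume the archive holds a single GPX file, as produced by
            # the macOS contextual-menu "Compress" option described above.
            return zf.read(zf.namelist()[0])
    return data                            # already uncompressed
```

Sniffing magic numbers rather than trusting the file extension also covers uploads that were renamed after compression.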
470,032 | 13,530,013,176 | IssuesEvent | 2020-09-15 19:13:00 | kubernetes/website | https://api.github.com/repos/kubernetes/website | closed | Incorrect version number in Limitations section | help wanted kind/cleanup language/en lifecycle/rotten priority/backlog | **This is a Bug Report**
<!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!-- See https://kubernetes.io/docs/contribute/start/ for guidance on writing an actionable issue description. -->
<!--Required Information-->
**Problem:**
Under the limitations section, it says: "In Kubernetes version 1.5"
**Proposed Solution:**
Correct version number
**Page to Update:**
https://kubernetes.io/docs/setup/best-practices/node-conformance/
<!--Optional Information (remove the comment tags around information you would like to include)-->
<!--Kubernetes Version:-->
<!--Additional Information:-->
 | 1.0 | Incorrect version number in Limitations section | non_defect | incorrect version number in limitations section | 0 |
41,667 | 10,563,182,873 | IssuesEvent | 2019-10-04 20:16:31 | networkx/networkx | https://api.github.com/repos/networkx/networkx | closed | `from_pandas_edgelist` creates empty graph if there are no attributes | Defect | Using `from_pandas_edgelist` with `edge_attr=True` results in an empty graph if there are no attribute columns. If a user passes in a dataframe which _may_ contain attribute columns, it will yield an empty graph if there are no non-source/target columns. | 1.0 | `from_pandas_edgelist` creates empty graph if there are no attributes | defect | from pandas edgelist creates empty graph if there are no attributes | 1 |
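Until that is fixed upstream, a caller can compute the attribute columns first and pass them explicitly instead of a blanket `edge_attr=True`. A stdlib-only sketch (the helper name is hypothetical; it mirrors the non-source/target column selection described in the report):

```python
def infer_edge_attrs(columns, source="source", target="target"):
    """Return the non-source/target columns, or None when there are none,
    so callers can pass the result instead of a blanket edge_attr=True."""
    attrs = [c for c in columns if c not in (source, target)]
    return attrs if attrs else None
```

Usage would look like `nx.from_pandas_edgelist(df, edge_attr=infer_edge_attrs(df.columns))`, assuming the default `source`/`target` column names; `edge_attr=None` is the library's "no attributes" default, so the empty-graph path is never triggered.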
72,277 | 24,031,689,762 | IssuesEvent | 2022-09-15 15:29:35 | NREL/EnergyPlus | https://api.github.com/repos/NREL/EnergyPlus | closed | TIMESTAMP Column in Custom Monthly Report Tables under both peak heating and peak cooling report has a trailing space | Defect | Issue overview
--------------
The following SQL query does not work; this is true for both the cooling peak and heating peak tables:
`SELECT Value FROM tabulardatawithstrings WHERE ReportName='BUILDING ENERGY PERFORMANCE - DISTRICT COOLING PEAK DEMAND' and ReportForString='Meter' and TableName='Custom Monthly Report' and RowName='December' and ColumnName='DISTRICTCOOLING:FACILITY {TIMESTAMP}' ;`
There is a trailing space in the column name so this query works
`SELECT Value FROM tabulardatawithstrings WHERE ReportName='BUILDING ENERGY PERFORMANCE - DISTRICT COOLING PEAK DEMAND' and ReportForString='Meter' and TableName='Custom Monthly Report' and RowName='December' and ColumnName='DISTRICTCOOLING:FACILITY {TIMESTAMP} ' ;`
But to make a more flexible workaround I'll use the code below, so it works now in its existing state but will still work once this bug is fixed, without me having to alter code: `ColumnName LIKE '%DISTRICTCOOLING:FACILITY {TIMESTAMP}%'`
As a note, this seems to be relatively isolated to this column in these two tables. In the same table `DISTRICTCOOLING:FACILITY {Maximum}` doesn't exhibit this issue and in another table `EnergyMeters / Annual and Peak Values - Other` a column with TIMESTAMP doesn't have the trailing space `Timestamp of Maximum {TIMESTAMP}`
### Details
Some additional details for this issue (if relevant):
- E+ Version 22.1
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [x] Defect file added (list location of defect file here)
- [x] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
 | 1.0 | TIMESTAMP Column in Custom Monthly Report Tables under both peak heating and peak cooling report has a trailing space | defect | timestamp column in custom monthly report tables under both peak heating and peak cooling report has a trailing space | 1 |
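The trailing-space behaviour is easy to reproduce in miniature with Python's stdlib `sqlite3` (EnergyPlus writes its tabular output to SQLite; the table below is a cut-down stand-in, not real EnergyPlus output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TabularDataWithStrings (ColumnName TEXT, Value TEXT)")
# Note the trailing space inside the string, as emitted by the peak-demand tables.
conn.execute("INSERT INTO TabularDataWithStrings VALUES "
             "('DISTRICTCOOLING:FACILITY {TIMESTAMP} ', '12-DEC-10:15')")

# Exact match misses the row because of the trailing space...
exact = conn.execute(
    "SELECT Value FROM TabularDataWithStrings "
    "WHERE ColumnName = 'DISTRICTCOOLING:FACILITY {TIMESTAMP}'").fetchall()

# ...while a LIKE pattern keeps working both before and after the fix.
fuzzy = conn.execute(
    "SELECT Value FROM TabularDataWithStrings "
    "WHERE ColumnName LIKE '%DISTRICTCOOLING:FACILITY {TIMESTAMP}%'").fetchall()
```

Another workaround under the same assumption is `WHERE TRIM(ColumnName) = 'DISTRICTCOOLING:FACILITY {TIMESTAMP}'`, which strips the stray whitespace before comparing.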
67,561 | 20,994,320,943 | IssuesEvent | 2022-03-29 12:15:45 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | CAST to PostgreSQL enum type lacks type qualification | T: Defect C: Functionality C: DB: PostgreSQL P: Medium E: All Editions | ### Expected behavior and actual behavior:
Using `DSL.cast()` with some database enum type, the generated SQL doesn't qualify the enum type, leading to errors (unless the schema is in the search_path; this is how we uncovered this bug actually: our app works OK for us developers and on our demo servers, but failed when deployed on our client's servers, because they use a different database user, whose name doesn't match that of the database schema).
jOOQ will generate `cast(?::"the_schema"."the_enum" as the_enum)` (or `cast(?::"the_schema"."the_enum"[] as the_enum[])` for an array, in our actual case), which will trigger an error `ERROR: type "the_enum" does not exist(..)`.
Note how the type in the cast is not qualified, whereas it's correctly qualified in typing the parameter.
Of course, the cast here is redundant (I'm almost certain it was necessary in an older version of jOOQ though, but at least we have an easy fix for our app), but interestingly we also have a domain type (`CREATE DOMAIN … AS text`) that in turn **requires** us to use a cast when used as an array, and in this case jOOQ correctly uses the qualified domain type: `cast(?::varchar[] as "the_schema"."the_domain"[])`.
### Steps to reproduce the problem (if possible, create an MCVE: https://github.com/jOOQ/jOOQ-mcve):
* have an enum type in your schema (`CREATE TYPE … AS ENUM …`) and a table using it for one of its columns
* use a `DSL.cast(…, MY_TABLE.MY_FIELD)` where the field is of the enum type
* have a search_path that doesn't include your schema
MCVE at https://github.com/atolcd-contrib/jOOQ-mcve/tree/issue-10277
### Versions:
- jOOQ: 3.12.4 and 3.13.2
- Java: OpenJDK 11
- Database (include vendor): Postgresql (reproduced on 11 and 12)
- OS: Linux (Arch Linux, and Docker's `openjdk:11`)
 - JDBC Driver (include name if unofficial driver): `org.postgresql:postgresql` versions 42.2.9 and 42.2.14
 | 1.0 | CAST to PostgreSQL enum type lacks type qualification | defect | cast to postgresql enum type lacks type qualification | 1 |
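The expected fix is for the cast's target type to carry the same schema qualification as the bind parameter's type. A toy Python sketch of that rendering rule (pure illustration; not jOOQ's actual renderer):

```python
def render_enum_cast(schema: str, enum_type: str, array: bool = False) -> str:
    """Render a PostgreSQL cast whose target type is schema-qualified,
    matching the qualification already applied to the bind parameter."""
    qualified = f'"{schema}"."{enum_type}"' + ("[]" if array else "")
    return f"cast(?::{qualified} as {qualified})"
```

Using the qualified name on both sides makes the SQL independent of the session's `search_path`, which is exactly the property the report says the unqualified form lacks.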
13,580 | 16,093,401,878 | IssuesEvent | 2021-04-26 19:40:16 | Creators-of-Create/Create | https://api.github.com/repos/Creators-of-Create/Create | closed | OreTweaker Ore Generation Incompatibility | compatibility needs input | I apologize for the length of this, but I couldn't upload config files directly here.
It appears that OreTweaker's and Create's ore generation are incompatible with one another. I kept Create's ore generation as is and only edited the vanilla generation using OreTweaker. Here are copies of the Create and OreTweaker configs:
**Create**
```
[worldgen]
#
#Modify Create's impact on your terrain
[worldgen.v2]
#
#Prevents all worldgen added by Create from taking effect
disableWorldGen = false
#
#Forward caught TileEntityExceptions to the log at debug level.
logTeErrors = false
[worldgen.v2.copper_ore]
#
#Range: > 0
clusterSize = 18
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 2.0
#
#Range: > 0
minHeight = 40
#
#Range: > 0
maxHeight = 85
[worldgen.v2.weathered_limestone]
#
#Range: > 0
clusterSize = 128
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 0.015625
#
#Range: > 0
minHeight = 10
#
#Range: > 0
maxHeight = 30
[worldgen.v2.zinc_ore]
#
#Range: > 0
clusterSize = 14
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 4.0
#
#Range: > 0
minHeight = 15
#
#Range: > 0
maxHeight = 70
[worldgen.v2.limestone]
#
#Range: > 0
clusterSize = 128
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 0.015625
#
#Range: > 0
minHeight = 30
#
#Range: > 0
maxHeight = 70
[worldgen.v2.dolomite]
#
#Range: > 0
clusterSize = 128
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 0.015625
#
#Range: > 0
minHeight = 20
#
#Range: > 0
maxHeight = 70
[worldgen.v2.gabbro]
#
#Range: > 0
clusterSize = 128
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 0.015625
#
#Range: > 0
minHeight = 20
#
#Range: > 0
maxHeight = 70
[worldgen.v2.scoria]
#
#Range: > 0
clusterSize = 128
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 0.03125
#
#Range: > 0
minHeight = 0
#
#Range: > 0
maxHeight = 10
```
**OreTweaker**
```
"Enable Debug Output" = false
"Disable Ores" = ["minecraft:coal_ore", "minecraft:iron_ore", "minecraft:gold_ore", "minecraft:diamond_ore", "minecraft:lapis_ore", "minecraft:redstone_ore", "minecraft:emerald_ore"]
[["Custom Ore"]]
"Ore Name" = "minecraft:lapis_ore"
"Max Vein Size" = 7
"Filler Name" = "minecraft:stone"
"Min Vein Level" = 1
"Max Vein Level" = 48
"Spawn Rate" = 8
[["Custom Ore"]]
"Ore Name" = "minecraft:redstone_ore"
"Max Vein Size" = 7
"Filler Name" = "minecraft:stone"
"Min Vein Level" = 1
"Max Vein Level" = 48
"Spawn Rate" = 8
[["Custom Ore"]]
"Ore Name" = "minecraft:emerald_ore"
"Max Vein Size" = 1
"Filler Name" = "minecraft:stone"
"Min Vein Level" = 1
"Max Vein Level" = 48
"Spawn Rate" = 1
```
With these configs I used World Stripper to uncover the terrain in new chunks and this is what I got:

You may notice that EvilCraft is also in this mod. OreTweaker does not interfere with its ore generation. I did a similar thing in The Nether as I have Netherrocks installed as well (the only other mod that affects ore generation) and it appears to work as intended:

So this seems to be specific to Create. As I have no way of telling whether this is a Create or an OreTweaker problem, I am simply reporting this to both. And for the sake of completion, here is my log file for this session:
[latest.log](https://github.com/Creators-of-Create/Create/files/6369536/latest.log)
| True | OreTweaker Ore Generation Incompatibility - I apologize for the length of this, but I couldn't upload config files directly into here.
It appears that OreTweaker and Create ore generation are incompatible with one another. I kept Create's ore generation as is and only edited the vanilla generation using OreTweaker. Here are copies of the Create and OreTweaker configs:
**Create**
```
[worldgen]
#
#Modify Create's impact on your terrain
[worldgen.v2]
#
#Prevents all worldgen added by Create from taking effect
disableWorldGen = false
#
#Forward caught TileEntityExceptions to the log at debug level.
logTeErrors = false
[worldgen.v2.copper_ore]
#
#Range: > 0
clusterSize = 18
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 2.0
#
#Range: > 0
minHeight = 40
#
#Range: > 0
maxHeight = 85
[worldgen.v2.weathered_limestone]
#
#Range: > 0
clusterSize = 128
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 0.015625
#
#Range: > 0
minHeight = 10
#
#Range: > 0
maxHeight = 30
[worldgen.v2.zinc_ore]
#
#Range: > 0
clusterSize = 14
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 4.0
#
#Range: > 0
minHeight = 15
#
#Range: > 0
maxHeight = 70
[worldgen.v2.limestone]
#
#Range: > 0
clusterSize = 128
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 0.015625
#
#Range: > 0
minHeight = 30
#
#Range: > 0
maxHeight = 70
[worldgen.v2.dolomite]
#
#Range: > 0
clusterSize = 128
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 0.015625
#
#Range: > 0
minHeight = 20
#
#Range: > 0
maxHeight = 70
[worldgen.v2.gabbro]
#
#Range: > 0
clusterSize = 128
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 0.015625
#
#Range: > 0
minHeight = 20
#
#Range: > 0
maxHeight = 70
[worldgen.v2.scoria]
#
#Range: > 0
clusterSize = 128
#
#Amount of clusters generated per Chunk.
# >1 to spawn multiple.
# <1 to make it a chance.
# 0 to disable.
#Range: 0.0 ~ 512.0
frequency = 0.03125
#
#Range: > 0
minHeight = 0
#
#Range: > 0
maxHeight = 10
```
**OreTweaker**
```
"Enable Debug Output" = false
"Disable Ores" = ["minecraft:coal_ore", "minecraft:iron_ore", "minecraft:gold_ore", "minecraft:diamond_ore", "minecraft:lapis_ore", "minecraft:redstone_ore", "minecraft:emerald_ore"]
[["Custom Ore"]]
"Ore Name" = "minecraft:lapis_ore"
"Max Vein Size" = 7
"Filler Name" = "minecraft:stone"
"Min Vein Level" = 1
"Max Vein Level" = 48
"Spawn Rate" = 8
[["Custom Ore"]]
"Ore Name" = "minecraft:redstone_ore"
"Max Vein Size" = 7
"Filler Name" = "minecraft:stone"
"Min Vein Level" = 1
"Max Vein Level" = 48
"Spawn Rate" = 8
[["Custom Ore"]]
"Ore Name" = "minecraft:emerald_ore"
"Max Vein Size" = 1
"Filler Name" = "minecraft:stone"
"Min Vein Level" = 1
"Max Vein Level" = 48
"Spawn Rate" = 1
```
With these configs I used World Stripper to uncover the terrain in new chunks and this is what I got:

You may notice that EvilCraft is also in this mod. OreTweaker does not interfere with its ore generation. I did a similar thing in The Nether as I have Netherrocks installed as well (the only other mod that affects ore generation) and it appears to work as intended:

So this seems to be specific to Create. As I have no way of telling whether this is a Create or an OreTweaker problem, I am simply reporting this to both. And for the sake of completion, here is my log file for this session:
[latest.log](https://github.com/Creators-of-Create/Create/files/6369536/latest.log)
| non_defect | oretweaker ore generation incompatibility i apologize for the length of this but i couldn t upload config files directly into here it appears that oretweaker and create ore generation are incompatible with one another i kept create s ore generation as is and only edited the vanilla generation using oretweaker here are copies of the create and oretweaker configs create modify create s impact on your terrain prevents all worldgen added by create from taking effect disableworldgen false forward caught tileentityexceptions to the log at debug level logteerrors false range clustersize amount of clusters generated per chunk to spawn multiple to make it a chance to disable range frequency range minheight range maxheight range clustersize amount of clusters generated per chunk to spawn multiple to make it a chance to disable range frequency range minheight range maxheight range clustersize amount of clusters generated per chunk to spawn multiple to make it a chance to disable range frequency range minheight range maxheight range clustersize amount of clusters generated per chunk to spawn multiple to make it a chance to disable range frequency range minheight range maxheight range clustersize amount of clusters generated per chunk to spawn multiple to make it a chance to disable range frequency range minheight range maxheight range clustersize amount of clusters generated per chunk to spawn multiple to make it a chance to disable range frequency range minheight range maxheight range clustersize amount of clusters generated per chunk to spawn multiple to make it a chance to disable range frequency range minheight range maxheight oretweaker enable debug output false disable ores ore name minecraft lapis ore max vein size filler name minecraft stone min vein level max vein level spawn rate ore name minecraft redstone ore max vein size filler name minecraft stone min vein level max vein level spawn rate ore name minecraft emerald ore max vein size filler name minecraft stone min vein level max vein level spawn rate with these configs i used world stripper to uncover the terrain in new chunks and this is what i got you may notice that evilcraft is also in this mod oretweaker does not interfere with its ore generation i did a similar thing in the nether as i have netherrocks installed as well the only other mod that affects ore generation and it appears to work as intended so this seems to be specific to create as i have no way of telling whether this is a create or an oretweaker problem i am simply reporting this to both and for the sake of completion here is my log file for this session | 0 |
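An editorial aside on the row above: the `frequency` semantics quoted in the Create config comments (">1 to spawn multiple, <1 to make it a chance, 0 to disable") can be sketched in a few lines. This is a hypothetical Python illustration of the documented rule, not Create's actual (Java) worldgen code:

```python
import random

def clusters_per_chunk(frequency, rng=random):
    """Interpret a Create-style `frequency` value for a single chunk.

    Per the config comments quoted above: values > 1 spawn multiple
    clusters, values < 1 act as a per-chunk chance, and 0 disables.
    """
    if frequency <= 0:
        return 0
    whole = int(frequency)             # guaranteed clusters per chunk
    fractional = frequency - whole     # chance of one extra cluster
    return whole + (1 if rng.random() < fractional else 0)

# frequency = 4.0 (zinc_ore above) always yields 4 clusters per chunk:
assert clusters_per_chunk(4.0) == 4
# frequency = 0.015625 (limestone above) yields a cluster in roughly 1 of 64 chunks:
random.seed(0)
sample = [clusters_per_chunk(0.015625) for _ in range(100_000)]
print(round(sum(sample) / len(sample), 3))  # close to 0.016
```

Under this reading, the report's limestone settings would leave most chunks with no cluster at all, which matches the sparse patches visible in the screenshot.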
47,211 | 10,054,193,727 | IssuesEvent | 2019-07-21 23:28:00 | EdenServer/community | https://api.github.com/repos/EdenServer/community | closed | Treasure and Tribulations no effect on enfeebling magic - We attempted this BCNM as nin rdm brd, our enfeebling got resisted but when we tried to enfeeble it again we got the no effect message as if the magic landed but it clearly did not land as his accuracy/attack speed did not lower and we never got the paralyzed message. | in-code-review | We attempted this BCNM as nin rdm brd, our enfeebling got resisted but when we tried to enfeeble it again we got the no effect message as if the magic landed but it clearly did not land as his accuracy/attack speed did not lower and we never got the paralyzed message. | 1.0 | Treasure and Tribulations no effect on enfeebling magic - We attempted this BCNM as nin rdm brd, our enfeebling got resisted but when we tried to enfeeble it again we got the no effect message as if the magic landed but it clearly did not land as his accuracy/attack speed did not lower and we never got the paralyzed message. | non_defect | treasure and tribulations no effect on enfeebling magic we attempted this bcnm as nin rdm brd our enfeebling got resisted but when we tried to enfeeble it again we got the no effect message as if the magic landed but it clearly did not land as his accuracy attack speed did not lower and we never got the paralyzed message | 0 |
471,322 | 13,564,971,086 | IssuesEvent | 2020-09-18 10:55:16 | inspireui/support | https://api.github.com/repos/inspireui/support | reopened | Firebase version solving failed | Fluxshopify ⭐️ priority-ticket |

Because firebase_admob 0.9.3+4 depends on firebase_core ^0.4.2+1 and no versions of firebase_admob match >0.9.3+4 <0.10.0, firebase_admob ^0.9.3+4 requires firebase_core ^0.4.2+1.
So, because fstore depends on both firebase_core ^0.5.0 and firebase_admob ^0.9.3+4, version solving failed.
pub get failed (1; So, because fstore depends on both firebase_core ^0.5.0 and firebase_admob ^0.9.3+4, version solving failed.)
I tried everything but it is not solved, so I'm not able to make an APK file. | 1.0 | Firebase version solving failed -

Because firebase_admob 0.9.3+4 depends on firebase_core ^0.4.2+1 and no versions of firebase_admob match >0.9.3+4 <0.10.0, firebase_admob ^0.9.3+4 requires firebase_core ^0.4.2+1.
So, because fstore depends on both firebase_core ^0.5.0 and firebase_admob ^0.9.3+4, version solving failed.
pub get failed (1; So, because fstore depends on both firebase_core ^0.5.0 and firebase_admob ^0.9.3+4, version solving failed.)
i try everything but not solved so i'm not able to make apk file.. | non_defect | firebase version solving failed because firebase admob depends on firebase core and no versions of firebase admob match firebase admob requires firebase core so because fstore depends on both firebase core and firebase admob version solving failed pub get failed so because fstore depends on both firebase core and firebase admob version solving failed i try everything but not solved so i m not able to make apk file | 0 |
170,235 | 13,179,386,072 | IssuesEvent | 2020-08-12 10:50:00 | Aalto-LeTech/intellij-plugin | https://api.github.com/repos/Aalto-LeTech/intellij-plugin | closed | "Modules" toolbar when removed must be easily returned | manual testing medium | it should be easy to return the "modules" to the UI:
+ it shows in A+ menu (or ToolsMenu)
+ it has an attached key combination
+ `ctrl + shift + a` | 1.0 | "Modules" toolbar when removed must be easily returned - it should be easy to return the "modules" to the UI:
+ it shows in A+ menu (or ToolsMenu)
+ it has an attached key combination
+ `ctrl + shift + a` | non_defect | modules toolbar when removed must be easily returned it should be easy to return the modules to the ui it shows in a menu or toolsmenu it has an attached key combination ctrl shift a | 0 |
91,823 | 26,493,467,865 | IssuesEvent | 2023-01-18 02:00:40 | docker/docs | https://api.github.com/repos/docker/docs | closed | Add ** pattern to dockerignore examples in documentation | area/Build lifecycle/stale | File: [engine/reference/builder.md](https://docs.docker.com/engine/reference/builder/)
## Request
Add a simple case using the ** pattern to the example dockerfile section e.g. `**/temp*`.
## Reason
`.dockerignore` has different semantics from a `.gitignore` file which is often missed as highlighted by this SO question viewed 24k times https://stackoverflow.com/questions/40261164/docker-ignores-patterns-in-dockerignore/40261165#40261165
The **/ pattern matching for any depth matching of file patterns is obscured in a paragraph of text near the end of the .dockerignore section that is easily missed:
> Beyond Go's filepath.Match rules, Docker also supports a special
> wildcard string `**` that matches any number of directories (including
> zero). For example, `**/*.go` will exclude all files that end with `.go`
> that are found in all directories, including the root of the build context.
| 1.0 | Add ** pattern to dockerignore examples in documentation - File: [engine/reference/builder.md](https://docs.docker.com/engine/reference/builder/)
## Request
Add a simple case using the ** pattern to the example dockerfile section e.g. `**/temp*`.
## Reason
`.dockerignore` has different semantics from a `.gitignore` file which is often missed as highlighted by this SO question viewed 24k times https://stackoverflow.com/questions/40261164/docker-ignores-patterns-in-dockerignore/40261165#40261165
The **/ pattern matching for any depth matching of file patterns is obscured in a paragraph of text near the end of the .dockerignore section that is easily missed:
> Beyond Go's filepath.Match rules, Docker also supports a special
> wildcard string `**` that matches any number of directories (including
> zero). For example, `**/*.go` will exclude all files that end with `.go`
> that are found in all directories, including the root of the build context.
| non_defect | add pattern to dockerignore examples in documentation file request add a simple case using the pattern to the example dockerfile section e g temp reason dockerignore has different semantics from a gitignore file which is often missed as highlighted by this so question viewed times the pattern matching for any depth matching of file patterns is obscured in a paragraph of text near the end of the dockerignore section that is easily missed beyond go s filepath match rules docker also supports a special wildcard string that matches any number of directories including zero for example go will exclude all files that end with go that are found in all directories including the root of the build context | 0 |
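The `**` behavior described in that builder.md excerpt mirrors recursive globbing in other tools; Python's stdlib `glob` (with `recursive=True`) follows the same convention, so it can be used to sanity-check a pattern before putting it in `.dockerignore`. A small self-contained demonstration:

```python
import glob
import os
import tempfile

# Build a tiny tree to show that `**/*.go` matches at any depth,
# including the root -- the behavior the quoted passage describes.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "pkg", "util"))
for rel in ("main.go", "README.md",
            os.path.join("pkg", "a.go"),
            os.path.join("pkg", "util", "b.go")):
    open(os.path.join(root, rel), "w").close()

matches = sorted(
    os.path.relpath(path, root)
    for path in glob.glob(os.path.join(root, "**", "*.go"), recursive=True)
)
print(matches)  # on POSIX: ['main.go', 'pkg/a.go', 'pkg/util/b.go']
```

Note the zero-directory case: `**/` also matches nothing at all, which is why `main.go` at the root is included.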
28,263 | 5,231,350,537 | IssuesEvent | 2017-01-30 01:46:32 | prettydiff/prettydiff | https://api.github.com/repos/prettydiff/prettydiff | closed | Rust - breaking nested generic type | Defect Parsing Pending Release | **Rust is not yet supported**
Even though Rust is not officially supported yet I would still like to capture the parts of its syntax that appear similar to the languages that currently are supported.
```
#![feature(lookup_host)]
use std::collections;
struct DnsLookupCache {
cache : std::collections::HashMap<Box<String>, Vec<std::net::SocketAddr>>
}
impl DnsLookupCache {
fn new() -> DnsLookupCache {
DnsLookupCache {
cache: std::collections::HashMap::new()
}
}
fn lookup(&mut self, host: &String) -> std::io::Result<&Vec<std::net::SocketAddr>> {
match self.cache.get(host) {
Some(cached) => Ok(cached),
None => {
let mut hosts = Vec::new();
for result in try!(std::net::lookup_host(host)) {
hosts.push(result);
}
let owned_host = host.clone();
self.cache.insert(Box::new(owned_host), hosts);
return self.lookup(host);
}
}
}
}
```
* Broken pseudo-shebang on the first line
* Broken nested type generic `<Box<String>, Vec<std::net::SocketAddr>>`
* There are probably other improvements that can also be added. | 1.0 | Rust - breaking nested generic type - **Rust is not yet supported**
Even though Rust is not officially supported yet I would still like to capture the parts of its syntax that appear similar to the languages that currently are supported.
```
#![feature(lookup_host)]
use std::collections;
struct DnsLookupCache {
cache : std::collections::HashMap<Box<String>, Vec<std::net::SocketAddr>>
}
impl DnsLookupCache {
fn new() -> DnsLookupCache {
DnsLookupCache {
cache: std::collections::HashMap::new()
}
}
fn lookup(&mut self, host: &String) -> std::io::Result<&Vec<std::net::SocketAddr>> {
match self.cache.get(host) {
Some(cached) => Ok(cached),
None => {
let mut hosts = Vec::new();
for result in try!(std::net::lookup_host(host)) {
hosts.push(result);
}
let owned_host = host.clone();
self.cache.insert(Box::new(owned_host), hosts);
return self.lookup(host);
}
}
}
}
```
* Broken pseudo-shebang on the first line
* Broken nested type generic `<Box<String>, Vec<std::net::SocketAddr>>`
* There are probably other improvements that can also be added. | defect | rust breaking nested generic type rust is not yet supported even though rust is not officially supported yet i would still like to capture the parts of its syntax that appear similar to the languages that currently are supported use std collections struct dnslookupcache cache std collections hashmap vec impl dnslookupcache fn new dnslookupcache dnslookupcache cache std collections hashmap new fn lookup mut self host string std io result match self cache get host some cached ok cached none let mut hosts vec new for result in try std net lookup host host hosts push result let owned host host clone self cache insert box new owned host hosts return self lookup host broken pseudo shebang on the first line broken nested type generic vec there are probably other improvements that can also be added | 1 |
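For context on why the nested generic above is hard to format naively: a splitter that breaks on every comma will mangle `HashMap<Box<String>, Vec<std::net::SocketAddr>>`, because the comma it should split on is only the one at angle-bracket depth zero. Tracking depth fixes the simple case (a real Rust tokenizer must additionally disambiguate `>>` from the shift operator). A hypothetical Python sketch, not prettydiff's actual code:

```python
def split_generic_args(params):
    """Split the arguments of a generic type at top-level commas only.

    A formatter that splits on every comma mangles nested generics like
    the Rust `HashMap<Box<String>, Vec<std::net::SocketAddr>>` above;
    tracking `<`/`>` depth keeps nested arguments intact.
    """
    args, depth, current = [], 0, []
    for ch in params:
        if ch == "<":
            depth += 1
        elif ch == ">":
            depth -= 1
        if ch == "," and depth == 0:
            args.append("".join(current).strip())
            current = []
        else:
            current.append(ch)
    if current:
        args.append("".join(current).strip())
    return args

print(split_generic_args("Box<String>, Vec<std::net::SocketAddr>"))
# ['Box<String>', 'Vec<std::net::SocketAddr>']
```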
829,774 | 31,897,756,609 | IssuesEvent | 2023-09-18 04:33:58 | headwirecom/helix-sportsmagazine | https://api.github.com/repos/headwirecom/helix-sportsmagazine | closed | Load Ceros iframe embed as early as possible | enhancement priority-1 | On landing pages, especially the home page, there are multiple ceros embeds with dynamic content. They are performance killers so we can't preload them. We can try to load them on first scroll or after 3s as delayed content. The idea is to avoid having that experience of big white spaces with a spinner when scrolling down as we do currently because we only load them once they appear in the viewport (default embed behavior). | 1.0 | Load Ceros iframe embed as early as possible - On landing pages, especially the home page, there are multiple ceros embeds with dynamic content. They are performance killers so we can't preload them. We can try to load them on first scroll or after 3s as delayed content. The idea is to avoid having that experience of big white spaces with a spinner when scrolling down as we do currently because we only load them once they appear in the viewport (default embed behavior). | non_defect | load ceros iframe embed as early as possible on landing pages especially the home page there are multiple ceros embeds with dynamic content they are performance killers so we can t preload them we can try to load them on first scroll or after as delayed content the idea is to avoid having that experience of big white spaces with a spinner when scrolling down as we do currently because we only load them once they appear in the viewport default embed behavior | 0 |
146,294 | 5,614,946,054 | IssuesEvent | 2017-04-03 13:36:19 | opencaching/opencaching-pl | https://api.github.com/repos/opencaching/opencaching-pl | opened | Convert all non-ASCII characters from hint | Component_CacheEdit Component_i18n Priority_Low Type_Enhancement x_Usability | The hint is encoded using ROT13.
This only works on the 26-letter basic Latin/English alphabet.
Any national characters in the hint don't get encoded correctly by ROT13.
Hint should be converted to ASCII and stored as such in the database.
I've come upon this with the need to convert it to ASCII for file export (if it were correctly stored, it wouldn't need such conversion at that point). | 1.0 | Convert all non-ASCII characters from hint - The hint is encoded using ROT13.
This only works on the 26-letter basic Latin/English alphabet.
Any national characters in the hint don't get encoded correctly by ROT13.
Hint should be converted to ASCII and stored as such in the database.
I've come upon this with the need to convert it to ASCII for file export (if it were correctly stored, it wouldn't need such conversion at that point). | non_defect | convert all non ascii characters from hint the hint is encoded using this only works on the letter basic latin english alphabet any national characters in the hint don t get encoded correctly by hint should be converted to ascii and stored as such in the database i ve come upon this with the need to convert it to ascii for file export if it were correctly stored it wouldn t need such conversion at that point | 0 |
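The mismatch the reporter describes is easy to reproduce: ROT13 rotates only A-Z/a-z, so diacritics pass through unchanged, and folding to ASCII first (as the issue suggests) avoids that. A sketch using Python's stdlib; the Polish hint string is a made-up example:

```python
import codecs
import unicodedata

def rot13(text):
    # ROT13 only rotates the 26 basic Latin letters.
    return codecs.encode(text, "rot_13")

def to_ascii(text):
    # Lossy fold to ASCII before encoding, as the issue suggests.
    # NFKD covers o-acute and e-ogonek; letters with no decomposition
    # (e.g. the Polish l-stroke) are simply dropped by this approach.
    return unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()

hint = "Wskazówka pod dębem"        # made-up Polish hint ("hint under the oak")
assert rot13(rot13(hint)) == hint    # ROT13 round-trips...
assert "ó" in rot13(hint)            # ...but leaves diacritics readable in the output
print(rot13(to_ascii(hint)))         # 'Jfxnmbjxn cbq qrorz'
```

Storing the ASCII-folded form, as the issue proposes, means the decode step never has to special-case national characters.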
13,106 | 2,732,898,851 | IssuesEvent | 2015-04-17 10:04:55 | tiku01/oryx-editor | https://api.github.com/repos/tiku01/oryx-editor | closed | Undo of Canvas resize | auto-migrated Component-Editor Priority-Medium Type-Defect | ```
If I enlarge the canvas there is no way of undoing this operation.
Desirable would be to also include this feature in undo/redo
```
Original issue reported on code.google.com by `gero.dec...@googlemail.com` on 29 Oct 2008 at 2:14 | 1.0 | Undo of Canvas resize - ```
If I enlarge the canvas there is no way of undoing this operation.
Desirable would be to also include this feature in undo/redo
```
Original issue reported on code.google.com by `gero.dec...@googlemail.com` on 29 Oct 2008 at 2:14 | defect | undo of canvas resize if i enlarge the canvas there is no way of undoing this operation desirable would be to also include this feature in undo redo original issue reported on code google com by gero dec googlemail com on oct at | 1 |
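The request above (making canvas resize participate in undo/redo) is usually met with the command pattern: every operation records how to invert itself and is pushed onto an undo stack. A minimal, hypothetical sketch, not Oryx's actual implementation:

```python
class UndoStack:
    """Minimal command-pattern undo/redo stack (illustrative only)."""
    def __init__(self):
        self._undo, self._redo = [], []

    def execute(self, do, undo):
        do()
        self._undo.append((do, undo))
        self._redo.clear()          # a new action invalidates redo history

    def undo(self):
        do, undo = self._undo.pop()
        undo()
        self._redo.append((do, undo))

    def redo(self):
        do, undo = self._redo.pop()
        do()
        self._undo.append((do, undo))

canvas = {"w": 800, "h": 600}

def resize(canvas, stack, w, h):
    old = dict(canvas)              # snapshot enough state to invert
    stack.execute(lambda: canvas.update(w=w, h=h),
                  lambda: canvas.update(old))

stack = UndoStack()
resize(canvas, stack, 1600, 1200)
stack.undo()
print(canvas)  # {'w': 800, 'h': 600}
```

Any operation wrapped this way, including an enlarge-canvas action, becomes undoable for free.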
61,624 | 17,023,742,354 | IssuesEvent | 2021-07-03 03:36:01 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | The From and To dropdown lists are empty when trying to add turn restrictions. | Component: potlatch2 Priority: minor Resolution: worksforme Type: defect | **[Submitted to the original trac issue database at 2.58pm, Thursday, 25th August 2011]**
When I'm trying to add turn restrictions to a junction, the From and To dropdown lists are empty most of the time.
I tried to do this to just created roads and existing roads too. The dropdown lists are empty many times.
I'm not sure if I'm doing something wrong or this is a bug. But if this is a bug it should be fixed because it's making Navdroyd to suggest ludicrous turns and I can't even fix it in OSM.
Thanks! | 1.0 | The From and To dropdown lists are empty when trying to add turn restrictions. - **[Submitted to the original trac issue database at 2.58pm, Thursday, 25th August 2011]**
When I'm trying to add turn restrictions to a junction, the From and To dropdown lists are empty most of the time.
I tried to do this to just created roads and existing roads too. The dropdown lists are empty many times.
I'm not sure if I'm doing something wrong or this is a bug. But if this is a bug it should be fixed because it's making Navdroyd to suggest ludicrous turns and I can't even fix it in OSM.
Thanks! | defect | the from and to dropdown lists are empty when trying to add turn restrictions when i m trying to add turn restrictions to a junction the from and to dropdown lists are empty most of the time i tried to do this to just created roads and existing roads too the dropdown lists are empty many times i m not sure if i m doing something wrong or this is a bug but if this is a bug it should be fixed because it s making navdroyd to suggest ludicrous turns and i can t even fix it in osm thanks | 1 |
110,103 | 13,905,807,106 | IssuesEvent | 2020-10-20 10:22:21 | owncloud/client | https://api.github.com/repos/owncloud/client | closed | Connection Wizard - checkboxes too small when selected Local folder is not empty | Design & UX bug p3-medium | Client: 2.6.0rc2 (build 12577)
macOS 10.15, Ubuntu 19.04
Server: 10.3.0 stable
Steps to recreate:
1) Select 'Add new' account in the account tab
2) Enter server url
3) Enter login details
4) In 'Setup local folder options' select a Local folder that is not empty
5) Check the 'Ask for confirmation' checkboxes
Actual result: 'Ask for confirmation before synchronizing' checkboxes are too small. (But they get displayed when the dialog is bigger)
Expected result: Checkboxes are always properly displayed.
Ubuntu
<img width="734" alt="Screenshot 2019-10-29 at 09 52 20" src="https://user-images.githubusercontent.com/49001702/67752463-4e925600-fa33-11e9-8ed1-c7c6fbe72fb5.png">
mac
<img width="748" alt="Screenshot 2019-10-29 at 09 48 31" src="https://user-images.githubusercontent.com/49001702/67752465-4fc38300-fa33-11e9-872e-44e239b7b6ba.png">
| 1.0 | Connection Wizard - checkboxes too small when selected Local folder is not empty - Client: 2.6.0rc2 (build 12577)
macOS 10.15, Ubuntu 19.04
Server: 10.3.0 stable
Steps to recreate:
1) Select 'Add new' account in the account tab
2) Enter server url
3) Enter login details
4) In 'Setup local folder options' select a Local folder that is not empty
5) Check the 'Ask for confirmation' checkboxes
Actual result: 'Ask for confirmation before synchronizing' checkboxes are too small. (But they get displayed when the dialog is bigger)
Expected result: Checkboxes are always properly displayed.
Ubuntu
<img width="734" alt="Screenshot 2019-10-29 at 09 52 20" src="https://user-images.githubusercontent.com/49001702/67752463-4e925600-fa33-11e9-8ed1-c7c6fbe72fb5.png">
mac
<img width="748" alt="Screenshot 2019-10-29 at 09 48 31" src="https://user-images.githubusercontent.com/49001702/67752465-4fc38300-fa33-11e9-872e-44e239b7b6ba.png">
| non_defect | connection wizard checkboxes too small when selected local folder is not empty client build macos ubuntu server stable steps to recreate select add new account in the account tab enter server url enter login details in setup local folder options select a local folder that is not empty check the ask for confirmation checkboxes actual result ask for confirmation before synchronizing checkboxes are too small but they get displayed when the dialog is bigger expected result checkboxes are always properly displayed ubuntu img width alt screenshot at src mac img width alt screenshot at src | 0 |
294,692 | 22,160,704,098 | IssuesEvent | 2022-06-04 13:18:34 | FranGemo1/Proyecto-programador-ispc-2022 | https://api.github.com/repos/FranGemo1/Proyecto-programador-ispc-2022 | opened | Topic summary: 4. Database Management Systems | documentation | Write a summary of the material from the Ispc platform, for the Programmer course, covering DBMSs. | 1.0 | Topic summary: 4. Database Management Systems - Write a summary of the material from the Ispc platform, for the Programmer course, covering DBMSs. | non_defect | topic summary database management systems write a summary of the material from the ispc platform for the programmer course covering dbmss | 0 |
28,745 | 5,348,389,282 | IssuesEvent | 2017-02-18 04:23:26 | amitdholiya/vqmod | https://api.github.com/repos/amitdholiya/vqmod | reopened | Administrator index.php not writeable | auto-migrated Priority-Medium Type-Defect | ```
NOTE THAT THIS IS FOR VQMOD ENGINE ERRORS ONLY. FOR GENERAL ERRORS FROM
MODIFICATIONS CONTACT YOUR DEVELOPER
What steps will reproduce the problem?
1.Administrator index.php not writeable
2.
3.
What is the expected output? What do you see instead?
vQmod Version:
Server Operating System:
Please provide any additional information below.
```
Original issue reported on code.google.com by `juzail...@gmail.com` on 23 Jul 2014 at 1:40
| 1.0 | Administrator index.php not writeable - ```
NOTE THAT THIS IS FOR VQMOD ENGINE ERRORS ONLY. FOR GENERAL ERRORS FROM
MODIFICATIONS CONTACT YOUR DEVELOPER
What steps will reproduce the problem?
1.Administrator index.php not writeable
2.
3.
What is the expected output? What do you see instead?
vQmod Version:
Server Operating System:
Please provide any additional information below.
```
Original issue reported on code.google.com by `juzail...@gmail.com` on 23 Jul 2014 at 1:40
| defect | administrator index php not writeable note that this is for vqmod engine errors only for general errors from modifications contact your developer what steps will reproduce the problem administrator index php not writeable what is the expected output what do you see instead vqmod version server operating system please provide any additional information below original issue reported on code google com by juzail gmail com on jul at | 1 |
233,140 | 17,855,653,808 | IssuesEvent | 2021-09-05 00:55:23 | iskhakov-s/process_image | https://api.github.com/repos/iskhakov-s/process_image | opened | Necessary Features | documentation enhancement | ### Readability/Standards Changes
- [ ] Add documentation for functions
- [ ] Handle errors to make sure the correct arguments are passed
- [ ] add type identifiers for important variables or function return values
### Modifications
- [ ] Integrate img_analyzer.py and img.py / fix the circular dependency
- [ ] make a universal setter for img.py, since changing hsv of an image changes its rgb
- [ ] merge marsimg and img folders
- [ ] use matplotlib instead of tabulate for compiledanalysis func
- [ ] **_analyze all of the images in a readable way in images.ipynb_** *IMPORTANT\*
### Specific
- [ ] allow parameters to be passed to the measure functions in grey_analysis
### New Features
- [ ] add saving and loading functionality, save each set of images in a folder w a txt file for analysis numbers | 1.0 | Necessary Features - ### Readability/Standards Changes
- [ ] Add documentation for functions
- [ ] Handle errors to make sure the correct arguments are passed
- [ ] add type identifiers for important variables or function return values
### Modifications
- [ ] Integrate img_analyzer.py and img.py / fix the circular dependency
- [ ] make a universal setter for img.py, since changing hsv of an image changes its rgb
- [ ] merge marsimg and img folders
- [ ] use matplotlib instead of tabulate for compiledanalysis func
- [ ] **_analyze all of the images in a readable way in images.ipynb_** *IMPORTANT\*
### Specific
- [ ] allow parameters to be passed to the measure functions in grey_analysis
### New Features
- [ ] add saving and loading functionality, save each set of images in a folder w a txt file for analysis numbers | non_defect | necessary features readability standards changes add documentation for functions handle errors to make sure the correct arguments are passed add type identifiers for important variables or function return values modifications integrate img analyzer py and img py fix the circular dependency make a universal setter for img py since changing hsv of an image changes its rgb merge marsimg and img folders use matplotlib instead of tabulate for compiledanalysis func analyze all of the images in a readable way in images ipynb important specific allow parameters to be passed to the measure functions in grey analysis new features add saving and loading functionality save each set of images in a folder w a txt file for analysis numbers | 0 |
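One item in the checklist above, a universal setter so that changing an image's HSV also changes its RGB, can be handled by keeping a single source of truth and deriving the other representation on demand. A hypothetical per-pixel sketch with the stdlib `colorsys` module (not the actual img.py API):

```python
import colorsys

class Pixel:
    """Keep RGB as the single source of truth and derive HSV on demand,
    so setting either representation can never leave the two out of sync.
    (Hypothetical sketch of the 'universal setter' checklist item.)"""
    def __init__(self, r, g, b):
        self._rgb = (r, g, b)          # each channel in 0.0..1.0

    @property
    def rgb(self):
        return self._rgb

    @rgb.setter
    def rgb(self, value):
        self._rgb = tuple(value)

    @property
    def hsv(self):
        return colorsys.rgb_to_hsv(*self._rgb)

    @hsv.setter
    def hsv(self, value):
        self._rgb = colorsys.hsv_to_rgb(*value)

p = Pixel(1.0, 0.0, 0.0)               # pure red
h, s, v = p.hsv
p.hsv = (h, s, v / 2)                  # halve the value channel...
print(p.rgb)                            # ...and rgb follows: (0.5, 0.0, 0.0)
```

The same property pattern scales from a single pixel to a whole image class backed by an array.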
61,896 | 17,023,803,064 | IssuesEvent | 2021-07-03 03:56:27 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Wrong association of roads in Roth (1533968) | Component: nominatim Priority: major Resolution: duplicate Type: defect | **[Submitted to the original trac issue database at 1.54pm, Wednesday, 13th June 2012]**
All roads in Roth (id: 1533968) seem to be associated with Schönbach (2465110383), which is a neighbouring village but doesn't belong to the same town; Roth is part of Driedorf (land mass 128858491) while Schönbach is part of Herborn (land mass 128858191)
I've no idea how to fix that myself | 1.0 | Wrong association of roads in Roth (1533968) - **[Submitted to the original trac issue database at 1.54pm, Wednesday, 13th June 2012]**
All roads in Roth (id: 1533968) seem to be associated with Schönbach (2465110383), which is a neighbouring village but doesn't belong to the same town; Roth is part of Driedorf (land mass 128858491) while Schönbach is part of Herborn (land mass 128858191)
I've no idea how to fix that myself | defect | wrong association of roads in roth all roads in roth id seem to be associcated with schnbach which is a neighbour village but doesn t belong to the same town roth is part of driedorf land mass while schnbach is part of herborn land mass i ve no idea how to fix that myself | 1 |
721,183 | 24,820,566,558 | IssuesEvent | 2022-10-25 16:06:22 | Lightning-AI/lightning | https://api.github.com/repos/Lightning-AI/lightning | closed | batch size finder not running training steps after first batch size | bug trainer: tune priority: 1 | When trying to automatically find largest batch size, only validation steps are taken. Eg:
```
model.batch_size = tuner.scale_batch_size(model, mode='power', init_val=1, steps_per_trial=3)
```
will run:
```
validation_step(batch_idx=0) #part of the sanity check code
validation_step(batch_idx=1) #part of the sanity check code
train_step(batch_idx=0)
train_step(batch_idx=1)
train_step(batch_idx=2)
Batch size 1 succeeded, trying batch size 2
validation_step(batch_idx=0)
validation_step(batch_idx=1)
Batch size 2 succeeded, trying batch size 4
validation_step(batch_idx=0)
validation_step(batch_idx=1)
Batch size 4 succeeded, trying batch size 8
```
etc. Thus this will return a batch size much larger than the one that actually fits in memory during the train step.
The issue is in `pytorch_lightning/loops/fit_loop.py`:
```
def done(self) -> bool:
"""Evaluates when to leave the loop."""
# TODO(@awaelchli): Move track steps inside training loop and move part of these condition inside training loop
stop_steps = _is_max_limit_reached(self.epoch_loop.global_step, self.max_steps)
[...]
```
The variable `self.epoch_loop.global_step` was not reset to 0 when attempting a new batch size. In this case, it will be `3` on batch size 2, returning `True` as the value of the `done` flag, and ultimately setting `self.skip=True` in `pytorch_lightning/loops/base.py`.
cc @akihironitta @borda @rohitgr7 | 1.0 | batch size finder not running training steps after first batch size - When trying to automatically find the largest batch size, only validation steps are taken. E.g.:
```
model.batch_size = tuner.scale_batch_size(model, mode='power', init_val=1, steps_per_trial=3)
```
will run:
```
validation_step(batch_idx=0) #part of the sanity check code
validation_step(batch_idx=1) #part of the sanity check code
train_step(batch_idx=0)
train_step(batch_idx=1)
train_step(batch_idx=2)
Batch size 1 succeeded, trying batch size 2
validation_step(batch_idx=0)
validation_step(batch_idx=1)
Batch size 2 succeeded, trying batch size 4
validation_step(batch_idx=0)
validation_step(batch_idx=1)
Batch size 4 succeeded, trying batch size 8
```
etc. Thus this will return a batch size much larger than the one that actually fits in memory during the train step.
The issue is in `pytorch_lightning/loops/fit_loop.py`:
```
def done(self) -> bool:
"""Evaluates when to leave the loop."""
# TODO(@awaelchli): Move track steps inside training loop and move part of these condition inside training loop
stop_steps = _is_max_limit_reached(self.epoch_loop.global_step, self.max_steps)
[...]
```
The variable `self.epoch_loop.global_step` was not reset to 0 when attempting a new batch size. In this case, it will be `3` on batch size 2, returning `True` as the value of the `done` flag, and ultimately setting `self.skip=True` in `pytorch_lightning/loops/base.py`.
cc @akihironitta @borda @rohitgr7 | non_defect | batch size finder not running training steps after first batch size when trying to automatically find largest batch size only validation steps are taken eg model batch size tuner scale batch size model mode power init val steps per trial will run validation step batch idx part of the sanity check code validation step batch idx part of the sanity check code train step batch idx train step batch idx train step batch idx batch size succeeded trying batch size validation step batch idx validation step batch idx batch size succeeded trying batch size validation step batch idx validation step batch idx batch size succeeded trying batch size etc thus this will return a batch size much larger than the one that fits in memory during the train step the issue is in pytorch lightning loops fit loop py def done self bool evaluates when to leave the loop todo awaelchli move track steps inside training loop and move part of these condition inside training loop stop steps is max limit reached self epoch loop global step self max steps the variable self epoch loop global step was not reset to when attempting a new batch size in this case it will be on batch size returning true as the value of the done flag and ultimately setting self skip true in pytorch lightning loops base py cc akihironitta borda | 0 |
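The diagnosis in this record — `global_step` not being reset between batch-size trials, so the `stop_steps` check in `fit_loop.done()` fires immediately — can be illustrated with a small stand-alone simulation. This is not Lightning's actual code; the inner `while` below is only an analogue of the quoted `done()` condition, under the assumption that `max_steps` equals `steps_per_trial` during tuning.

```python
def run_scaling_trials(reset_global_step, steps_per_trial=3, n_trials=3):
    """Count how many training steps each batch-size trial executes.

    Mirrors the reporter's diagnosis: training stops once global_step
    reaches max_steps, so without a reset every trial after the first
    runs zero training steps.
    """
    global_step = 0
    steps_run = []
    for _ in range(n_trials):
        if reset_global_step:
            global_step = 0  # the reset the reporter says is missing
        count = 0
        while global_step < steps_per_trial:  # fit_loop.done() analogue
            global_step += 1
            count += 1
        steps_run.append(count)
    return steps_run

print(run_scaling_trials(reset_global_step=False))  # → [3, 0, 0]  (bug: only the first trial trains)
print(run_scaling_trials(reset_global_step=True))   # → [3, 3, 3]  (expected behaviour)
```

This reproduces the reported pattern of train steps at batch size 1 followed by validation-only trials at batch sizes 2 and 4.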
62,990 | 6,822,364,950 | IssuesEvent | 2017-11-07 19:49:33 | DecipherNow/gm-fabric-dashboard | https://api.github.com/repos/DecipherNow/gm-fabric-dashboard | opened | Unit Tests for src/utils/index.js | priority-2 Testing | Implement the stubbed out tests and describe blocks in `index.test.js` | 1.0 | Unit Tests for src/utils/index.js - Implement the stubbed out tests and describe blocks in `index.test.js` | non_defect | unit tests for src utils index js implement the stubbed out tests and describe blocks in index test js | 0 |
52,509 | 13,224,794,336 | IssuesEvent | 2020-08-17 19:51:45 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | [clsim] I3CLSimServer deadlocks if given more than 1 I3CLSimStepToPhotonConverter (Trac #2360) | Incomplete Migration Migrated from Trac combo simulation defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2360">https://code.icecube.wisc.edu/projects/icecube/ticket/2360</a>, reported by jvansanten and owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-12-05T20:04:42",
"_ts": "1575576282029750",
"description": "I3CLSimServer appears to deadlock if trying to service more than GPU at a time. @lulu observed this behavior when attempting to run hobo-snowstorm on interactive multi-GPU nodes in Chiba.\n\nThis can also be reproduced by running e.g. \n{{{\nCUDA_VISIBLE_DEVICES=0,1 ./env-shell.sh clsim/resources/scripts/benchmark.py -n 1\n}}}\non a dual-GPU system. With either `CUDA_VISIBLE_DEVICES=0` or `CUDA_VISIBLE_DEVICES=1`, it completes within a few seconds. With both GPUs enabled, it hangs forever.",
"reporter": "jvansanten",
"cc": "eganster, lulu",
"resolution": "fixed",
"time": "2019-09-20T02:42:58",
"component": "combo simulation",
"summary": "[clsim] I3CLSimServer deadlocks if given more than 1 I3CLSimStepToPhotonConverter",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [clsim] I3CLSimServer deadlocks if given more than 1 I3CLSimStepToPhotonConverter (Trac #2360) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2360">https://code.icecube.wisc.edu/projects/icecube/ticket/2360</a>, reported by jvansanten and owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-12-05T20:04:42",
"_ts": "1575576282029750",
"description": "I3CLSimServer appears to deadlock if trying to service more than GPU at a time. @lulu observed this behavior when attempting to run hobo-snowstorm on interactive multi-GPU nodes in Chiba.\n\nThis can also be reproduced by running e.g. \n{{{\nCUDA_VISIBLE_DEVICES=0,1 ./env-shell.sh clsim/resources/scripts/benchmark.py -n 1\n}}}\non a dual-GPU system. With either `CUDA_VISIBLE_DEVICES=0` or `CUDA_VISIBLE_DEVICES=1`, it completes within a few seconds. With both GPUs enabled, it hangs forever.",
"reporter": "jvansanten",
"cc": "eganster, lulu",
"resolution": "fixed",
"time": "2019-09-20T02:42:58",
"component": "combo simulation",
"summary": "[clsim] I3CLSimServer deadlocks if given more than 1 I3CLSimStepToPhotonConverter",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
| defect | deadlocks if given more than trac migrated from json status closed changetime ts description appears to deadlock if trying to service more than gpu at a time lulu observed this behavior when attempting to run hobo snowstorm on interactive multi gpu nodes in chiba n nthis can also be reproduced by running e g n ncuda visible devices env shell sh clsim resources scripts benchmark py n n non a dual gpu system with either cuda visible devices or cuda visible devices it completes within a few seconds with both gpus enabled it hangs forever reporter jvansanten cc eganster lulu resolution fixed time component combo simulation summary deadlocks if given more than priority major keywords milestone owner jvansanten type defect | 1 |
11,937 | 7,742,934,829 | IssuesEvent | 2018-05-29 11:09:55 | getgauge/gauge-python | https://api.github.com/repos/getgauge/gauge-python | closed | With large number of steps the CPU usage is high when project is loaded | performance ready for QA | **Expected behavior**
The CPU usage should be within an acceptable limit
**Actual behavior**
The CPU usage exceeds 90% and sometimes reaches 100%
**Steps to replicate**
* Create a `gauge-python` project
* Create an implementation file with more than 100 steps
* Here is a sample project
[gauge-test.zip](https://github.com/getgauge/gauge-python/files/2030042/gauge-test.zip)
* Open it in VSCode
**Version**
```
Gauge version: 0.9.9.nightly-2018-05-21
Commit Hash: f7d0def
Plugins
-------
python (0.3.3.nightly-2018-05-21)
``` | True | With large number of steps the CPU usage is high when project is loaded - **Expected behavior**
The CPU usage should be within an acceptable limit
**Actual behavior**
The CPU usage exceeds 90% and sometimes reaches 100%
**Steps to replicate**
* Create a `gauge-python` project
* Create an implementation file with more than 100 steps
* Here is a sample project
[gauge-test.zip](https://github.com/getgauge/gauge-python/files/2030042/gauge-test.zip)
* Open it in VSCode
**Version**
```
Gauge version: 0.9.9.nightly-2018-05-21
Commit Hash: f7d0def
Plugins
-------
python (0.3.3.nightly-2018-05-21)
``` | non_defect | with large number of steps the cpu usage is high when project is loaded expected behavior the cpu usage should be within an acceptable limit actual behavior the cpu usage exceeds and sometimes as well steps to replicate create a gauge python project create an implementation file with more than steps here is a sample project open it in vscode version gauge version nightly commit hash plugins python nightly | 0 |
20,923 | 3,436,404,174 | IssuesEvent | 2015-12-12 10:44:04 | nikcross/open-forum | https://api.github.com/repos/nikcross/open-forum | closed | page.js run in edit mode | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Choose a page with a page.js that references a page element like a layer
2. Open a page in edit mode
3.
What is the expected output?
The page.js file should not be run
What do you see instead?
The page.js file is run and causes an error
```
Original issue reported on code.google.com by `nicholas...@gmail.com` on 19 Sep 2008 at 2:07 | 1.0 | page.js run in edit mode - ```
What steps will reproduce the problem?
1. Choose a page with a page.js that references a page element like a layer
2. Open a page in edit mode
3.
What is the expected output?
The page.js file should not be run
What do you see instead?
The page.js file is run and causes an error
```
Original issue reported on code.google.com by `nicholas...@gmail.com` on 19 Sep 2008 at 2:07 | defect | page js run in edit mode what steps will reproduce the problem choose a page with a page js that references a page element like a layer open a page in edit mode what is the expected output the page js file should not be run what do you see instead the page js file is run and causes an error original issue reported on code google com by nicholas gmail com on sep at | 1 |
37,040 | 8,211,560,442 | IssuesEvent | 2018-09-04 14:07:41 | NREL/EnergyPlus | https://api.github.com/repos/NREL/EnergyPlus | closed | Crash in table reports with SizingPeriod:WeatherFileConditionType as the only type of sizing period | Defect Priority1 | Issue overview
--------------
The user file has two SizingPeriod:WeatherFileConditionType objects and no other sizing periods. The simulation crashes with an array bounds error in `CollectPeakZoneConditions`, which is triggered by the ZoneComponentLoadSummary.
### Workaround
Disable the ZoneComponentLoadSummary report.
Or add a dummy design day like this one, but it's not clear if the ZoneComponentLoadSummary will report the correct peak conditions in this case.
```
SizingPeriod:DesignDay,
DummyDesignDay, !- Name
1, !- Month
21, !- Day of Month
WinterDesignDay, !- Day Type
22.0, !- Maximum Dry-Bulb Temperature {C}
0.0, !- Daily Dry-Bulb Temperature Range {deltaC}
, !- Dry-Bulb Temperature Range Modifier Type
, !- Dry-Bulb Temperature Range Modifier Day Schedule Name
Wetbulb, !- Humidity Condition Type
10.0, !- Wetbulb or DewPoint at Maximum Dry-Bulb {C}
, !- Humidity Condition Day Schedule Name
, !- Humidity Ratio at Maximum Dry-Bulb {kgWater/kgDryAir}
, !- Enthalpy at Maximum Dry-Bulb {J/kg}
, !- Daily Wet-Bulb Temperature Range {deltaC}
99063., !- Barometric Pressure {Pa}
4.9, !- Wind Speed {m/s}
270, !- Wind Direction {deg}
No, !- Rain Indicator
No, !- Snow Indicator
No, !- Daylight Saving Time Indicator
ASHRAEClearSky, !- Solar Model Indicator
, !- Beam Solar Day Schedule Name
, !- Diffuse Solar Day Schedule Name
, !- ASHRAE Clear Sky Optical Depth for Beam Irradiance (taub) {dimensionless}
, !- ASHRAE Clear Sky Optical Depth for Diffuse Irradiance (taud) {dimensionless}
0.0; !- Sky Clearness
```
### Details
Some additional details for this issue (if relevant):
- Platform Win64
- Version of EnergyPlus v8.9.0 and v9.0 73281ffba3
- Helpdesk ticket number 12889
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [x] Defect file added EnergyPlusDevSupport\DefectFiles
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
| 1.0 | Crash in table reports with SizingPeriod:WeatherFileConditionType as the only type of sizing period - Issue overview
--------------
The user file has two SizingPeriod:WeatherFileConditionType objects and no other sizing periods. The simulation crashes with an array bounds error in `CollectPeakZoneConditions`, which is triggered by the ZoneComponentLoadSummary.
### Workaround
Disable the ZoneComponentLoadSummary report.
Or add a dummy design day like this one, but it's not clear if the ZoneComponentLoadSummary will report the correct peak conditions in this case.
```
SizingPeriod:DesignDay,
DummyDesignDay, !- Name
1, !- Month
21, !- Day of Month
WinterDesignDay, !- Day Type
22.0, !- Maximum Dry-Bulb Temperature {C}
0.0, !- Daily Dry-Bulb Temperature Range {deltaC}
, !- Dry-Bulb Temperature Range Modifier Type
, !- Dry-Bulb Temperature Range Modifier Day Schedule Name
Wetbulb, !- Humidity Condition Type
10.0, !- Wetbulb or DewPoint at Maximum Dry-Bulb {C}
, !- Humidity Condition Day Schedule Name
, !- Humidity Ratio at Maximum Dry-Bulb {kgWater/kgDryAir}
, !- Enthalpy at Maximum Dry-Bulb {J/kg}
, !- Daily Wet-Bulb Temperature Range {deltaC}
99063., !- Barometric Pressure {Pa}
4.9, !- Wind Speed {m/s}
270, !- Wind Direction {deg}
No, !- Rain Indicator
No, !- Snow Indicator
No, !- Daylight Saving Time Indicator
ASHRAEClearSky, !- Solar Model Indicator
, !- Beam Solar Day Schedule Name
, !- Diffuse Solar Day Schedule Name
, !- ASHRAE Clear Sky Optical Depth for Beam Irradiance (taub) {dimensionless}
, !- ASHRAE Clear Sky Optical Depth for Diffuse Irradiance (taud) {dimensionless}
0.0; !- Sky Clearness
```
### Details
Some additional details for this issue (if relevant):
- Platform Win64
- Version of EnergyPlus v8.9.0 and v9.0 73281ffba3
- Helpdesk ticket number 12889
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [x] Defect file added EnergyPlusDevSupport\DefectFiles
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
| defect | crash in table reports with sizingperiod weatherfileconditiontype as the only type of sizing period issue overview user file has two sizingperiod weatherfileconditiontype objects and not other sizing periods simulation crashes with an array bounds error in collectpeakzoneconditions which is triggered by the zonecomponentloadsummary workaround disable the zonecomponentloadsummary report or add a dummy design day like this one but it s not clear if the zonecomponentloadsummary will report the correct peak conditions in this case sizingperiod designday dummydesignday name month day of month winterdesignday day type maximum dry bulb temperature c daily dry bulb temperature range deltac dry bulb temperature range modifier type dry bulb temperature range modifier day schedule name wetbulb humidity condition type wetbulb or dewpoint at maximum dry bulb c humidity condition day schedule name humidity ratio at maximum dry bulb kgwater kgdryair enthalpy at maximum dry bulb j kg daily wet bulb temperature range deltac barometric pressure pa wind speed m s wind direction deg no rain indicator no snow indicator no daylight saving time indicator ashraeclearsky solar model indicator beam solar day schedule name diffuse solar day schedule name ashrae clear sky optical depth for beam irradiance taub dimensionless ashrae clear sky optical depth for diffuse irradiance taud dimensionless sky clearness details some additional details for this issue if relevant platform version of energyplus and helpdesk ticket number checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added energyplusdevsupport defectfiles ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect | 1 |
34,284 | 7,434,898,865 | IssuesEvent | 2018-03-26 12:41:01 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | closed | Dropdown autoWidth is not updated on value change | defect | I'm submitting a bug report
Plunker:
https://plnkr.co/edit/YBl5bOWqGTT3Ve5vMdg2
**Current behavior**
When p-dropdown is opened from a dialog and initialized with a value, the drop-down is rendered empty. The default value is not selected.
**Expected behavior**
This should work in a dialog the same way as it works elsewhere, as it is a common use case
**Minimal reproduction of the problem with instructions**
See plunker. The potential workaround is to pull the list of options from a remote location (or do some delays with code) as there is likely some order of rendering that causes this issue.
**What is the motivation / use case for changing the behavior?**
Make the PrimeNg framework work right.
**Please tell us about your environment:** VS Code, Plunker
* **Angular version:** 4.2.4, 4.3.0
* **PrimeNG version:** 4.2.3, 4.3.0
* **Browser:** [all]
* **Language:** [TypeScript]
* **Node (for AoT issues):** `node --version` =
| 1.0 | Dropdown autoWidth is not updated on value change - I'm submitting a bug report
Plunker:
https://plnkr.co/edit/YBl5bOWqGTT3Ve5vMdg2
**Current behavior**
When p-dropdown is opened from a dialog and initialized with a value, the drop-down is rendered empty. The default value is not selected.
**Expected behavior**
This should work in a dialog the same way as it works elsewhere, as it is a common use case
**Minimal reproduction of the problem with instructions**
See plunker. The potential workaround is to pull the list of options from a remote location (or do some delays with code) as there is likely some order of rendering that causes this issue.
**What is the motivation / use case for changing the behavior?**
Make the PrimeNg framework work right.
**Please tell us about your environment:** VS Code, Plunker
* **Angular version:** 4.2.4, 4.3.0
* **PrimeNG version:** 4.2.3, 4.3.0
* **Browser:** [all]
* **Language:** [TypeScript]
* **Node (for AoT issues):** `node --version` =
| defect | dropdown autowidth is not updated on value change i m submitting a bug report plunker current behavior when p dropdown is opened from a dialog and initialized with a value the drop down is rendered empty the default value is not selected expected behavior this should work in a dialog the same way as it works elsewhere as it is a common use case minimal reproduction of the problem with instructions see plunker the potential workaround is to pull the list of options from a remote location or do some delays with code as there is likely some order of rendering that causes this issue what is the motivation use case for changing the behavior make the primeng framework work right please tell us about your environment vs code plunker angular version primeng version browser all language node for aot issues node version | 1 |
48,927 | 13,428,563,264 | IssuesEvent | 2020-09-06 22:20:44 | Watemlifts/odoo | https://api.github.com/repos/Watemlifts/odoo | opened | CVE-2020-8203 (High) detected in lodash-1.0.2.tgz | security vulnerability | ## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-1.0.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz">https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/odoo/addons/web/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/odoo/addons/web/node_modules/globule/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-watch-0.5.3.tgz (Root Library)
- gaze-0.4.3.tgz
- globule-0.1.0.tgz
- :x: **lodash-1.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Watemlifts/odoo/commit/3f4edb9af8b76273782d2f6bda623e71cd9aa929">3f4edb9af8b76273782d2f6bda623e71cd9aa929</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash <= 4.17.15.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-07-23</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-8203 (High) detected in lodash-1.0.2.tgz - ## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-1.0.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz">https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/odoo/addons/web/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/odoo/addons/web/node_modules/globule/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-watch-0.5.3.tgz (Root Library)
- gaze-0.4.3.tgz
- globule-0.1.0.tgz
- :x: **lodash-1.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Watemlifts/odoo/commit/3f4edb9af8b76273782d2f6bda623e71cd9aa929">3f4edb9af8b76273782d2f6bda623e71cd9aa929</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash <= 4.17.15.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-07-23</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in lodash tgz cve high severity vulnerability vulnerable library lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file tmp ws scm odoo addons web package json path to vulnerable library tmp ws scm odoo addons web node modules globule node modules lodash package json dependency hierarchy grunt contrib watch tgz root library gaze tgz globule tgz x lodash tgz vulnerable library found in head commit a href vulnerability details prototype pollution attack when using zipobjectdeep in lodash publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource | 0 |
10,020 | 2,618,931,238 | IssuesEvent | 2015-03-03 00:00:21 | chrsmith/open-ig | https://api.github.com/repos/chrsmith/open-ig | closed | Message exchange with the trader | auto-migrated Chat Milestone-0.95.200 Priority-Medium Type-Defect | ```
Game version: 0.95.117
Operating System: Ubuntu 12.04.1 LTS 32 bit
Java runtime version: 1.7.0_09
Installed using the Launcher? yes
Game language: hu
Mission: San Sterling is under blockade
Problem: at roughly the fourth or fifth trader (the one who refuses
to turn back when ordered), the message exchange gets stuck in an
infinite loop. So when I start shooting the ship, it does not say
"okay, I'm turning back". The only way out of this loop is to tell
it "all right, you may continue".
```
Original issue reported on code.google.com by `szikes.a...@gmail.com` on 3 Dec 2012 at 8:40 | 1.0 | Message exchange with the trader - ```
Game version: 0.95.117
Operating System: Ubuntu 12.04.1 LTS 32 bit
Java runtime version: 1.7.0_09
Installed using the Launcher? yes
Game language: hu
Mission: San Sterling is under blockade
Problem: at roughly the fourth or fifth trader (the one who refuses
to turn back when ordered), the message exchange gets stuck in an
infinite loop. So when I start shooting the ship, it does not say
"okay, I'm turning back". The only way out of this loop is to tell
it "all right, you may continue".
```
Original issue reported on code.google.com by `szikes.a...@gmail.com` on 3 Dec 2012 at 8:40 | defect | message exchange with the trader game version operating system ubuntu lts bit java runtime version installed using the launcher yes game language hu mission san sterling is under blockade problem at roughly the fourth or fifth trader the one who refuses to turn back when ordered the message exchange gets stuck in an infinite loop so when i start shooting the ship it does not say okay i m turning back the only way out of this loop is to tell it all right you may continue original issue reported on code google com by szikes a gmail com on dec at | 1
314,372 | 23,518,149,250 | IssuesEvent | 2022-08-19 00:52:06 | aws/aws-cli | https://api.github.com/repos/aws/aws-cli | closed | Documentation of code layout | documentation feature-request closed-for-staleness | When looking over the code I thought it would be nice to have a document that describes the overall layout. Currently I have an in-progress preview available in my local fork:
https://github.com/cwgem/aws-cli/blob/65f04a1cc581163051ad8cf05a58d688ade1cd0f/ARCHITECTURE.rst
As the next step is documentation of the `awscli` subdirectory, I'd like to get your thoughts on this before I devote more time to it. My reason for going with a local document instead of a Wiki entry is that I can see someone wanting to inspect the code in a non-network environment (a plane that does not offer Wifi, for example). Another potential benefit is helping to ease the hurdle of contribution. Thanks ahead of time for your thoughts! | 1.0 | Documentation of code layout - When looking over the code I thought it would be nice to have a document that describes the overall layout. Currently I have an in-progress preview available in my local fork:
https://github.com/cwgem/aws-cli/blob/65f04a1cc581163051ad8cf05a58d688ade1cd0f/ARCHITECTURE.rst
As the next step is documentation of the `awscli` subdirectory, I'd like to get your thoughts on this before I devote more time to it. My reason for going with a local document instead of a Wiki entry is that I can see someone wanting to inspect the code in a non-network environment (a plane that does not offer Wifi, for example). Another potential benefit is helping to ease the hurdle of contribution. Thanks ahead of time for your thoughts! | non_defect | documentation of code layout when looking over the code i thought it would be nice to have a document that describes the overall layout currently i have an in progress preview available in my local fork as the next step is documentation of the awscli subdirectory i d like to get your thoughts on this before i devote more time to it my reason for going with a local document instead of a wiki entry is that i can see someone wanting to inspect the code in a non network environment a plane that does not offer wifi for example another potential benefit is helping to ease the hurdle of contribution thanks ahead of time for your thoughts | 0
735,276 | 25,387,474,299 | IssuesEvent | 2022-11-21 23:27:20 | clt313/SuperballVR | https://api.github.com/repos/clt313/SuperballVR | closed | Add player movement | priority: high | During the game, players should be able to move around with one of the analog sticks on their hand controller. | 1.0 | Add player movement - During the game, players should be able to move around with one of the analog sticks on their hand controller. | non_defect | add player movement during the game players should be able to move around with one of the analog sticks on their hand controller | 0 |
51,396 | 13,207,462,695 | IssuesEvent | 2020-08-14 23:11:52 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | ttrigger's cluster.c compiler flags cause failure (Trac #336) | Incomplete Migration Migrated from Trac combo reconstruction defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/336">https://code.icecube.wisc.edu/projects/icecube/ticket/336</a>, reported by dunkman and owned by dima</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-07-07T22:32:33",
"_ts": "1436308353324715",
"description": "CMakeFile.txt contains \"-std=c99 -Wall -Werror\" flags for cluster.c; this causes a gcc error on the Penn State cluster. \n\nAttached is the full cmake + make outputs",
"reporter": "dunkman",
"cc": "",
"resolution": "fixed",
"time": "2011-12-06T22:25:39",
"component": "combo reconstruction",
"summary": "ttrigger's cluster.c compiler flags cause failure",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "dima",
"type": "defect"
}
```
</p>
</details>
| 1.0 | ttrigger's cluster.c compiler flags cause failure (Trac #336) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/336">https://code.icecube.wisc.edu/projects/icecube/ticket/336</a>, reported by dunkman and owned by dima</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-07-07T22:32:33",
"_ts": "1436308353324715",
"description": "CMakeFile.txt contains \"-std=c99 -Wall -Werror\" flags for cluster.c; this causes a gcc error on the Penn State cluster. \n\nAttached is the full cmake + make outputs",
"reporter": "dunkman",
"cc": "",
"resolution": "fixed",
"time": "2011-12-06T22:25:39",
"component": "combo reconstruction",
"summary": "ttrigger's cluster.c compiler flags cause failure",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "dima",
"type": "defect"
}
```
</p>
</details>
| defect | ttrigger s cluster c compiler flags cause failure trac migrated from json status closed changetime ts description cmakefile txt contains std wall werror flags for cluster c this causes a gcc error on the penn state cluster n nattached is the full cmake make outputs reporter dunkman cc resolution fixed time component combo reconstruction summary ttrigger s cluster c compiler flags cause failure priority normal keywords milestone owner dima type defect | 1 |
326,233 | 9,954,313,847 | IssuesEvent | 2019-07-05 08:03:53 | QbitArtifacts/rec-issues | https://api.github.com/repos/QbitArtifacts/rec-issues | closed | 🔥Error in the 'Mi cuenta' section of the Android app | app bug confirmed priority: 2 | When you tap the 'Guardar y salir' button on the 'Mi cuenta' screen of the Android app, the error shown in the image appears, regardless of whether or not you edit anything on this screen.

| 1.0 | 🔥Error in the 'Mi cuenta' section of the Android app - When you tap the 'Guardar y salir' button on the 'Mi cuenta' screen of the Android app, the error shown in the image appears, regardless of whether or not you edit anything on this screen.

| non_defect | 🔥error in the mi cuenta section of the android app when you tap the guardar y salir button on the mi cuenta screen of the android app the error shown in the image appears regardless of whether or not you edit anything on this screen | 0 |
182,343 | 14,115,884,362 | IssuesEvent | 2020-11-07 23:26:33 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: pgx failed | C-test-failure O-roachtest O-robot branch-master release-blocker | [(roachtest).pgx failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2424980&tab=buildLog) on [master@61c96aaca632dfba5154d55d82c5af0732053b72](https://github.com/cockroachdb/cockroach/commits/61c96aaca632dfba5154d55d82c5af0732053b72):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/pgx/run_1
pgx.go:80,pgx.go:131,test_runner.go:755: No pgx blocklist defined for cockroach version v21.1.0-alpha.00000000-7-g61c96aaca6
```
<details><summary>More</summary><p>
Artifacts: [/pgx](https://teamcity.cockroachdb.com/viewLog.html?buildId=2424980&tab=artifacts#/pgx)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Apgx.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 2.0 | roachtest: pgx failed - [(roachtest).pgx failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2424980&tab=buildLog) on [master@61c96aaca632dfba5154d55d82c5af0732053b72](https://github.com/cockroachdb/cockroach/commits/61c96aaca632dfba5154d55d82c5af0732053b72):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/pgx/run_1
pgx.go:80,pgx.go:131,test_runner.go:755: No pgx blocklist defined for cockroach version v21.1.0-alpha.00000000-7-g61c96aaca6
```
<details><summary>More</summary><p>
Artifacts: [/pgx](https://teamcity.cockroachdb.com/viewLog.html?buildId=2424980&tab=artifacts#/pgx)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Apgx.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| non_defect | roachtest pgx failed on the test failed on branch master cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts pgx run pgx go pgx go test runner go no pgx blocklist defined for cockroach version alpha more artifacts powered by | 0 |
60,287 | 17,023,388,397 | IssuesEvent | 2021-07-03 01:46:16 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | [Proposed-PATCH] Long changeset edit messages cause the /edits list to spill over two two lines per commit | Component: website Priority: minor Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 5.18pm, Wednesday, 22nd April 2009]**
This problem (see screenshot) appears on Ubuntu 9.04 running Firefox 3.0.8. | 1.0 | [Proposed-PATCH] Long changeset edit messages cause the /edits list to spill over two two lines per commit - **[Submitted to the original trac issue database at 5.18pm, Wednesday, 22nd April 2009]**
This problem (see screenshot) appears on Ubuntu 9.04 running Firefox 3.0.8. | defect | long changeset edit messages cause the edits list to spill over two two lines per commit this problem see screenshot appears on ubuntu running firefox | 1 |
340,517 | 24,658,197,971 | IssuesEvent | 2022-10-18 02:56:36 | Final-healthree/healthree-backend | https://api.github.com/repos/Final-healthree/healthree-backend | closed | Bug : error occurs when merging (concatenating) videos with `fluent-ffmpeg` | bug documentation | ## **September 13**
Ahead of the MVP, while running one last round of tests before connecting to the frontend server, a sudden error occurred in the video merge feature.
- Ever since the feature was implemented, videos had always merged fine, but in this test, for the first time, the merge failed on the last (third) video.
- Together with @phenomenonlee, we recorded and shared demo videos and identified the difference between the videos that merged and the ones that did not.
- In the process we learned that the difference between the videos that merged and the ones that did not was **`Dolby Vision`**, and we confirmed that even **`Dolby Vision`** videos do merge with other **`Dolby Vision`** videos.
- Therefore, we could infer that the merge fails whenever even one of the videos to be merged has a different format.
<br>
<details>
<summary>Open images</summary>
<div markdown="1">
<img src="https://user-images.githubusercontent.com/99732695/191316383-8d49c298-cdfd-44d8-81ec-e06d1ba072c4.png" width=400px>
<img src="https://user-images.githubusercontent.com/99732695/191316395-a0125358-ed9d-4da4-9199-6ca0f775d546.png" width=400px>
</div>
</details>
Based on this inference, there seemed to be two approaches we could take.
First, inspect the uploaded videos with console.log() to find out which property indicates `Dolby Vision`, then:
1. Treat `Dolby Vision` videos as an exception.
- `Dolby Vision` is a feature iPhones have shipped with since the iPhone 12, released in October 2020, so encoding such videos would require a downgrade; judging this inefficient, we decided to handle them as an exception instead.
- In other words, we wanted users to upload videos of the same format.
2. Encode all three videos into a third, common format.
- Since the error occurred even when two of the three videos shared a format but the last one differed, we agreed that encoding every video would be a waste of server resources in several respects, speed included.
- So we deferred this as the lowest priority.
3. Although it is not a backend matter, we agreed to discuss whether the frontend could handle the exception in approach 1.
--- | 1.0 | Bug : error occurs when merging (concatenating) videos with `fluent-ffmpeg` - ## **September 13**
Ahead of the MVP, while running one last round of tests before connecting to the frontend server, a sudden error occurred in the video merge feature.
- Ever since the feature was implemented, videos had always merged fine, but in this test, for the first time, the merge failed on the last (third) video.
- Together with @phenomenonlee, we recorded and shared demo videos and identified the difference between the videos that merged and the ones that did not.
- In the process we learned that the difference between the videos that merged and the ones that did not was **`Dolby Vision`**, and we confirmed that even **`Dolby Vision`** videos do merge with other **`Dolby Vision`** videos.
- Therefore, we could infer that the merge fails whenever even one of the videos to be merged has a different format.
<br>
<details>
<summary>Open images</summary>
<div markdown="1">
<img src="https://user-images.githubusercontent.com/99732695/191316383-8d49c298-cdfd-44d8-81ec-e06d1ba072c4.png" width=400px>
<img src="https://user-images.githubusercontent.com/99732695/191316395-a0125358-ed9d-4da4-9199-6ca0f775d546.png" width=400px>
</div>
</details>
Based on this inference, there seemed to be two approaches we could take.
First, inspect the uploaded videos with console.log() to find out which property indicates `Dolby Vision`, then:
1. Treat `Dolby Vision` videos as an exception.
- `Dolby Vision` is a feature iPhones have shipped with since the iPhone 12, released in October 2020, so encoding such videos would require a downgrade; judging this inefficient, we decided to handle them as an exception instead.
- In other words, we wanted users to upload videos of the same format.
2. Encode all three videos into a third, common format.
- Since the error occurred even when two of the three videos shared a format but the last one differed, we agreed that encoding every video would be a waste of server resources in several respects, speed included.
- So we deferred this as the lowest priority.
3. Although it is not a backend matter, we agreed to discuss whether the frontend could handle the exception in approach 1.
--- | non_defect | bug error occurs when merging concatenating videos with fluent ffmpeg september ahead of the mvp while running one last round of tests before connecting to the frontend server a sudden error occurred in the video merge feature ever since the feature was implemented videos had always merged fine but in this test for the first time the merge failed on the last third video together with phenomenonlee we recorded and shared demo videos and identified the difference between the videos that merged and the ones that did not in the process we learned that the difference between the videos that merged and the ones that did not was dolby vision and we confirmed that even dolby vision videos do merge with other dolby vision videos therefore we could infer that the merge fails whenever even one of the videos to be merged has a different format open images based on this inference there seemed to be two approaches we could take first inspect the uploaded videos with console log to find out which property indicates dolby vision then treat dolby vision videos as an exception dolby vision is a feature iphones have shipped with since the iphone released in october so encoding such videos would require a downgrade judging this inefficient we decided to handle them as an exception instead in other words we wanted users to upload videos of the same format encode all three videos into a third common format since the error occurred even when two of the three videos shared a format but the last one differed we agreed that encoding every video would be a waste of server resources in several respects speed included so we deferred this as the lowest priority although it is not a backend matter we agreed to discuss whether the frontend could handle the exception in approach | 0 |
71,624 | 23,731,141,237 | IssuesEvent | 2022-08-31 01:54:05 | SeleniumHQ/selenium | https://api.github.com/repos/SeleniumHQ/selenium | closed | [🐛 Bug]: [Python] [Firefox] install_addon polluting /tmp | C-py I-defect | ### What happened?
I'm not sure if this is a Selenium or a Marionette issue.
After multiple starts with local addon file installs I ran out of disk space and discovered a huge mess in /tmp.
A lot of copies of the local addon file (identical file size).
Any chance to automatically clean those up when we delete the temporary profile dir?
### How can we reproduce the issue?
```shell
browser = webdriver.Firefox()
browser.install_addon(os.path.join(os.getcwd(), 'NAME.xpi'), True)#addon file is in the cwd
browser.quit()
```
### Relevant log output
```shell
...
1780499 addon-ffed1201-6e45-4a10-8fa8-5c0a326680f7.xpi
1780499 addon-9b4bde59-a74d-4706-90a2-20ec1a010c80.xpi
1780499 addon-f91bb2ff-b81b-43ea-9b54-3d819ac6406f.xpi
...
```
### Operating System
Ubuntu 20.04
### Selenium version
Python 3.8.10 Selenium 4.3.0
### What are the browser(s) and version(s) where you see this issue?
Firefox 101
### What are the browser driver(s) and version(s) where you see this issue?
GeckoDriver 101
### Are you using Selenium Grid?
_No response_ | 1.0 | [🐛 Bug]: [Python] [Firefox] install_addon polluting /tmp - ### What happened?
I'm not sure if this is a Selenium or a Marionette issue.
After multiple starts with local addon file installs I ran out of disk space and discovered a huge mess in /tmp.
A lot of copies of the local addon file (identical file size).
Any chance to automatically clean those up when we delete the temporary profile dir?
### How can we reproduce the issue?
```shell
browser = webdriver.Firefox()
browser.install_addon(os.path.join(os.getcwd(), 'NAME.xpi'), True)#addon file is in the cwd
browser.quit()
```
### Relevant log output
```shell
...
1780499 addon-ffed1201-6e45-4a10-8fa8-5c0a326680f7.xpi
1780499 addon-9b4bde59-a74d-4706-90a2-20ec1a010c80.xpi
1780499 addon-f91bb2ff-b81b-43ea-9b54-3d819ac6406f.xpi
...
```
### Operating System
Ubuntu 20.04
### Selenium version
Python 3.8.10 Selenium 4.3.0
### What are the browser(s) and version(s) where you see this issue?
Firefox 101
### What are the browser driver(s) and version(s) where you see this issue?
GeckoDriver 101
### Are you using Selenium Grid?
_No response_ | defect | install addon polluting tmp what happened i m not sure if this is a selenium or a marionette issue after multiple starts with local addon file installs i ran out of disk space and discovered a huge mess in tmp a lot of copies of the local addon file identical file size any chance to automatically clean those up when we delete the temporary profile dir how can we reproduce the issue shell browser webdriver firefox browser install addon os path join os getcwd name xpi true addon file is in the cwd browser quit relevant log output shell addon xpi addon xpi addon xpi operating system ubuntu selenium version python selenium what are the browser s and version s where you see this issue firefox what are the browser driver s and version s where you see this issue geckodriver are you using selenium grid no response | 1 |
40,506 | 20,946,260,848 | IssuesEvent | 2022-03-26 00:49:44 | neovim/neovim | https://api.github.com/repos/neovim/neovim | closed | Strange interaction between cursorbind and conceallevel | bug performance display | ### Neovim version (nvim -v)
0.4.4, 0.5.0, 685cf398130c61c158401b992a1893c2405cd7d2
### Vim (not Nvim) behaves the same?
no, vim 8.2.2434
### Operating system/version
Ubuntu 21.04
### Terminal name/version
Xterm(361)
### $TERM environment variable
xterm-256color
### Installation
compiled
### How to reproduce the issue
```
nvim -u NONE
:20vsplit /etc/passwd
:set conceallevel=2
:windo set nowrap scrollbind cursorbind
iabcdefghijklmnopqrstuvwxyz
```
### Expected behavior
Vertical split on left stays put.
### Actual behavior
Vertical split scrolls horizontally to reflect cursor position. And with larger files, performance is excruciatingly slow.
Note that updating the cursor column is correct. The preemptive redraw with `'conceallevel'` is the anomaly. | True | Strange interaction between cursorbind and conceallevel - ### Neovim version (nvim -v)
0.4.4, 0.5.0, 685cf398130c61c158401b992a1893c2405cd7d2
### Vim (not Nvim) behaves the same?
no, vim 8.2.2434
### Operating system/version
Ubuntu 21.04
### Terminal name/version
Xterm(361)
### $TERM environment variable
xterm-256color
### Installation
compiled
### How to reproduce the issue
```
nvim -u NONE
:20vsplit /etc/passwd
:set conceallevel=2
:windo set nowrap scrollbind cursorbind
iabcdefghijklmnopqrstuvwxyz
```
### Expected behavior
Vertical split on left stays put.
### Actual behavior
Vertical split scrolls horizontally to reflect cursor position. And with larger files, performance is excruciatingly slow.
Note that updating the cursor column is correct. The preemptive redraw with `'conceallevel'` is the anomaly. | non_defect | strange interaction between cursorbind and conceallevel neovim version nvim v vim not nvim behaves the same no vim operating system version ubuntu terminal name version xterm term environment variable xterm installation compiled how to reproduce the issue nvim u none etc passwd set conceallevel windo set nowrap scrollbind cursorbind iabcdefghijklmnopqrstuvwxyz expected behavior vertical split on left stays put actual behavior vertical split scrolls horizontally to reflect cursor position and with larger files performance is excruciatingly slow note that updating the cursor column is correct the preemptive redraw with conceallevel is the anomaly | 0 |
110,782 | 24,010,167,281 | IssuesEvent | 2022-09-14 18:03:35 | WordPress/openverse-catalog | https://api.github.com/repos/WordPress/openverse-catalog | opened | Wikimedia DAG times out due to very large batches | bug 🟧 priority: high 🛠 goal: fix 💻 aspect: code | ## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
Some recent DagRuns of the `wikimedia_commons_workflow` have timed out before completion in the `pull_data` step. It looks like this is happening due to encountering extremely large batches.
These show up in the logs as very long sections where the reported `gaicontinue` token is identical, and the `gucontinue` token is almost identical except for a changing integer suffix. Here's a very small snippet from a recent run:
```
[2022-09-07, 15:52:30 UTC] {wikimedia_commons.py:106} INFO - New continue token: {'gucontinue': 'Arrows-orphan.svg|arwiki|3120418', 'gaicontinue': '20220802024848|Lohjanjärvi_in_September.jpg', 'continue': 'gaicontinue||imageinfo'}
[2022-09-07, 15:52:30 UTC] {wikimedia_commons.py:130} INFO - Got 250 pages
[2022-09-07, 15:52:30 UTC] {wikimedia_commons.py:130} INFO - Got 250 pages
[2022-09-07, 15:52:30 UTC] {wikimedia_commons.py:130} INFO - Got 250 pages
[2022-09-07, 15:52:31 UTC] {wikimedia_commons.py:106} INFO - New continue token: {'gucontinue': 'Arrows-orphan.svg|arwiki|3125414', 'gaicontinue': '20220802024848|Lohjanjärvi_in_September.jpg', 'continue': 'gaicontinue||imageinfo'}
[2022-09-07, 15:52:31 UTC] {wikimedia_commons.py:130} INFO - Got 250 pages
[2022-09-07, 15:52:31 UTC] {wikimedia_commons.py:130} INFO - Got 250 pages
[2022-09-07, 15:52:31 UTC] {wikimedia_commons.py:130} INFO - Got 250 pages
[2022-09-07, 15:52:32 UTC] {wikimedia_commons.py:106} INFO - New continue token: {'gucontinue': 'Arrows-orphan.svg|arwiki|3127601', 'gaicontinue': '20220802024848|Lohjanjärvi_in_September.jpg', 'continue': 'gaicontinue||imageinfo'}
```
In the DagRun that example is taken from, that batch took almost 11.5 hours to process (from the first log containing the `Arrows-orphan` continue token, to the final `Found batchcomplete` log), but only 208 records were ultimately processed from that batch.
## Reproduction
Go to the Wikimedia DAG in production and observe logs for any of the recent failed `pull_data` tasks. The example here comes from processing for August 02, 2022. You should also be able to manually run the DAG with that `date` to reproduce the issue.
## Possible resolution
<!-- Add any other context about the problem here; or delete the section entirely. -->
Some initial thoughts for ways to resolve this:
1. **Increase the timeout for Wikimedia's `pull_data` task across the board.** I'm concerned about doing this because Wikimedia _generally_ should complete very quickly. This could cause problems in reingestion, which tries to run many Wikimedia dagruns.
2. **Add some handling for long-running batches.** We could try to add a custom timeout to Wikimedia's batch processing, to exit early if a batch like this is encountered. If we do so, we risk losing some data.
As part of this ticket it would be a good idea to look through the logs of a few more of these failures to try to confirm how much data we're getting from these large batches.
## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [ ] 🙋 I would be interested in resolving this bug.
| 1.0 | Wikimedia DAG times out due to very large batches - ## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
Some recent DagRuns of the `wikimedia_commons_workflow` have timed out before completion in the `pull_data` step. It looks like this is happening due to encountering extremely large batches.
These show up in the logs as very long sections where the reported `gaicontinue` token is identical, and the `gucontinue` token is almost identical except for a changing integer suffix. Here's a very small snippet from a recent run:
```
[2022-09-07, 15:52:30 UTC] {wikimedia_commons.py:106} INFO - New continue token: {'gucontinue': 'Arrows-orphan.svg|arwiki|3120418', 'gaicontinue': '20220802024848|Lohjanjärvi_in_September.jpg', 'continue': 'gaicontinue||imageinfo'}
[2022-09-07, 15:52:30 UTC] {wikimedia_commons.py:130} INFO - Got 250 pages
[2022-09-07, 15:52:30 UTC] {wikimedia_commons.py:130} INFO - Got 250 pages
[2022-09-07, 15:52:30 UTC] {wikimedia_commons.py:130} INFO - Got 250 pages
[2022-09-07, 15:52:31 UTC] {wikimedia_commons.py:106} INFO - New continue token: {'gucontinue': 'Arrows-orphan.svg|arwiki|3125414', 'gaicontinue': '20220802024848|Lohjanjärvi_in_September.jpg', 'continue': 'gaicontinue||imageinfo'}
[2022-09-07, 15:52:31 UTC] {wikimedia_commons.py:130} INFO - Got 250 pages
[2022-09-07, 15:52:31 UTC] {wikimedia_commons.py:130} INFO - Got 250 pages
[2022-09-07, 15:52:31 UTC] {wikimedia_commons.py:130} INFO - Got 250 pages
[2022-09-07, 15:52:32 UTC] {wikimedia_commons.py:106} INFO - New continue token: {'gucontinue': 'Arrows-orphan.svg|arwiki|3127601', 'gaicontinue': '20220802024848|Lohjanjärvi_in_September.jpg', 'continue': 'gaicontinue||imageinfo'}
```
In the DagRun that example is taken from, that batch took almost 11.5 hours to process (from the first log containing the `Arrows-orphan` continue token, to the final `Found batchcomplete` log), but only 208 records were ultimately processed from that batch.
## Reproduction
Go to the Wikimedia DAG in production and observe logs for any of the recent failed `pull_data` tasks. The example here comes from processing for August 02, 2022. You should also be able to manually run the DAG with that `date` to reproduce the issue.
## Possible resolution
<!-- Add any other context about the problem here; or delete the section entirely. -->
Some initial thoughts for ways to resolve this:
1. **Increase the timeout for Wikimedia's `pull_data` task across the board.** I'm concerned about doing this because Wikimedia _generally_ should complete very quickly. This could cause problems in reingestion, which tries to run many Wikimedia dagruns.
2. **Add some handling for long-running batches.** We could try to add a custom timeout to Wikimedia's batch processing, to exit early if a batch like this is encountered. If we do so, we risk losing some data.
As part of this ticket it would be a good idea to look through the logs of a few more of these failures to try to confirm how much data we're getting from these large batches.
## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [ ] 🙋 I would be interested in resolving this bug.
| non_defect | wikimedia dag times out due to very large batches description some recent dagruns of the wikimedia commons workflow have timed out before completion in the pull data step it looks like this is happening due to encountering extremely large batches these show up in the logs as very long sections where the reported gaicontinue token is identical and the gucontinue token is almost identical except for a changing integer suffix here s a very small snippet from a recent run wikimedia commons py info new continue token gucontinue arrows orphan svg arwiki gaicontinue lohjanjärvi in september jpg continue gaicontinue imageinfo wikimedia commons py info got pages wikimedia commons py info got pages wikimedia commons py info got pages wikimedia commons py info new continue token gucontinue arrows orphan svg arwiki gaicontinue lohjanjärvi in september jpg continue gaicontinue imageinfo wikimedia commons py info got pages wikimedia commons py info got pages wikimedia commons py info got pages wikimedia commons py info new continue token gucontinue arrows orphan svg arwiki gaicontinue lohjanjärvi in september jpg continue gaicontinue imageinfo in the dagrun that example is taken from that batch took almost hours to process from the first log containing the arrows orphan continue token to the final found batchcomplete log but only records were ultimately processed from that batch reproduction go to the wikimedia dag in production and observe logs for any of the recent failed pull data tasks the example here comes from processing for august you should also be able to manually run the dag with that date to reproduce the issue possible resolution some initial thoughts for ways to resolve this increase the timeout for wikimedia s pull data task across the board i m concerned about doing this because wikimedia generally should complete very quickly this could cause problems in reingestion which tries to run many wikimedia dagruns add some handling for long running batches we could try to add a custom timeout to wikimedia s batch processing to exit early if a batch like this is encountered if we do so we risk losing some data as part of this ticket it would be a good idea to look through the logs of a few more of these failures to try to confirm how much data we re getting from these large batches resolution 🙋 i would be interested in resolving this bug | 0 |
69,494 | 30,301,040,407 | IssuesEvent | 2023-07-10 05:57:14 | ps2gg/ps2.gg | https://api.github.com/repos/ps2gg/ps2.gg | closed | Color code friends & sesh embeds | Scope: UI Type: Enhancement Service: Peepo | ### I'm submitting a... <!-- Check with [x] -->
- [ ] Bug report
- [x] Feature request
- [ ] Documentation request
### Current behavior <!-- Describe how the issue manifests. -->
Sesh and friend embeds use the default embed colors in every situation.
### Expected behavior <!-- Describe the desired behavior. -->
Should use the success colors when players are online or other data is being displayed.
This helps us form a more uniform design language, which can be deployed to communicate changes without words.
### Definition of Done <!-- What requirements need to be fulfilled before we can release it -->
- [Universal Definition of Done](https://github.com/ps2gg/ps2.gg/blob/master/docs/standards/Definition-Of-Done.md) is adhered to
## <!-- Additional information (optional) -->
| 1.0 | Color code friends & sesh embeds - ### I'm submitting a... <!-- Check with [x] -->
- [ ] Bug report
- [x] Feature request
- [ ] Documentation request
### Current behavior <!-- Describe how the issue manifests. -->
Sesh and friend embeds use the default embed colors in every situation.
### Expected behavior <!-- Describe the desired behavior. -->
Should use the success colors when players are online or other data is being displayed.
This helps us form a more uniform design language, which can be deployed to communicate changes without words.
### Definition of Done <!-- What requirements need to be fulfilled before we can release it -->
- [Universal Definition of Done](https://github.com/ps2gg/ps2.gg/blob/master/docs/standards/Definition-Of-Done.md) is adhered to
## <!-- Additional information (optional) -->
| non_defect | color code friends sesh embeds i m submitting a bug report feature request documentation request current behavior sesh and friend embeds use the default embed colors in every situation expected behavior should use the success colors when players are online or other data is being displayed this helps us form a more uniform design language which can be deployed to communicate changes without words definition of done is adhered to | 0 |
65,599 | 19,588,985,042 | IssuesEvent | 2022-01-05 10:38:31 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Space name is weirdly inserted into "Preferences" view | T-Defect X-Regression S-Tolerable A-Spaces-Settings O-Uncommon good first issue | ### Steps to reproduce
1. Create a space with a long name (should be more than 19 chars)
2. Navigate to that space
3. Click the space name to open the drop down menu
4. Click "Preferences"
### Outcome
#### What did you expect?
There should either no space name visible or it should not be hidden behind other elements.
#### What happened instead?
The space name is partly hidden behind other elements (see image below).

### Operating system
Ubuntu 21.10
### Application version
Element Nightly version: 2021122001 Olm version: 3.2.8
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Space name is weirdly inserted into "Preferences" view - ### Steps to reproduce
1. Create a space with a long name (should be more than 19 chars)
2. Navigate to that space
3. Click the space name to open the drop down menu
4. Click "Preferences"
### Outcome
#### What did you expect?
There should either no space name visible or it should not be hidden behind other elements.
#### What happened instead?
The space name is partly hidden behind other elements (see image below).

### Operating system
Ubuntu 21.10
### Application version
Element Nightly version: 2021122001 Olm version: 3.2.8
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No | defect | space name is weirdly inserted into preferences view steps to reproduce create a space with a long name should be more than chars navigate to that space click the space name to open the drop down menu click preferences outcome what did you expect there should either no space name visible or it should not be hidden behind other elements what happened instead the space name is partly hidden behind other elements see image below operating system ubuntu application version element nightly version olm version how did you install the app no response homeserver no response will you send logs no | 1 |
41,268 | 10,350,657,307 | IssuesEvent | 2019-09-05 03:45:36 | zealdocs/zeal | https://api.github.com/repos/zealdocs/zeal | closed | angular docs | resolution/duplicate scope/ui/webview type/defect | Angular docs are not getting opened. I tried downloading it two times still nothing shows. | 1.0 | angular docs - Angular docs are not getting opened. I tried downloading it two times still nothing shows. | defect | angular docs angular docs are not getting opened i tried downloading it two times still nothing shows | 1 |
103,914 | 11,385,742,219 | IssuesEvent | 2020-01-29 11:46:49 | matestack/matestack-ui-core | https://api.github.com/repos/matestack/matestack-ui-core | opened | Document when and where action can be used | documentation | The [action](https://www.matestack.org/docs/components/action.md) documentation should probably mentioned just what can be wrapped with it.
My current gut feeling is that anything can be wrapped with it just like the `on_click` but I'm not sure and the docs don't tell me (and only ever wrap `button`).
-------------------------------------------------------
As a side note, the wrapping of `action`/`async` feels slightly weird to me but might have reasons I don't understand. I'd have kind of expected them to be a common option for all components. Although that would then kind of break the "everything is a component" concept.
| 1.0 | Document when and where action can be used - The [action](https://www.matestack.org/docs/components/action.md) documentation should probably mentioned just what can be wrapped with it.
My current gut feeling is that anything can be wrapped with it just like the `on_click` but I'm not sure and the docs don't tell me (and only ever wrap `button`).
-------------------------------------------------------
As a side note, the wrapping of `action`/`async` feels slightly weird to me but might have reasons I don't understand. I'd have kind of expected them to be a common option for all components. Although that would then kind of break the "everything is a component" concept.
| non_defect | document when and where action can be used the documentation should probably mentioned just what can be wrapped with it my current gut feeling is that anything can be wrapped with it just like the on click but i m not sure and the docs don t tell me and only ever wrap button as a side note the wrapping of action async feels slightly weird to me but might have reasons i don t understand i d have kind of expected them to be a common option for all components although that would then kind of break the everything is a component concept | 0 |
172,032 | 21,031,051,254 | IssuesEvent | 2022-03-31 01:03:29 | TreyM-WSS/Struts2-Examples | https://api.github.com/repos/TreyM-WSS/Struts2-Examples | opened | CVE-2022-22950 (Medium) detected in spring-expression-3.2.0.RELEASE.jar, spring-expression-3.0.5.RELEASE.jar | security vulnerability | ## CVE-2022-22950 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-expression-3.2.0.RELEASE.jar</b>, <b>spring-expression-3.0.5.RELEASE.jar</b></p></summary>
<p>
<details><summary><b>spring-expression-3.2.0.RELEASE.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/SpringSource/spring-framework">https://github.com/SpringSource/spring-framework</a></p>
<p>Path to dependency file: /Struts2Spring3Hibernate/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/3.2.0.RELEASE/spring-expression-3.2.0.RELEASE.jar,/Struts2Spring3Hibernate/target/Struts2Spring3Hibernate3/WEB-INF/lib/spring-expression-3.2.0.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-expression-3.2.0.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-expression-3.0.5.RELEASE.jar</b></p></summary>
<p>Spring Framework Parent</p>
<p>Path to dependency file: /Struts2Junit4/pom.xml</p>
<p>Path to vulnerable library: /Struts2Junit4/target/Struts2Junit4-1.0/WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/3.0.5.RELEASE/spring-expression-3.0.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-expression-3.0.5.RELEASE.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.16 and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial of service condition
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22950>CVE-2022-22950</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22950">https://tanzu.vmware.com/security/cve-2022-22950</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-expression:5.3.17</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-expression","packageVersion":"3.2.0.RELEASE","packageFilePaths":["/Struts2Spring3Hibernate/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.springframework:spring-expression:3.2.0.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-expression:5.3.17","isBinary":false},{"packageType":"Java","groupId":"org.springframework","packageName":"spring-expression","packageVersion":"3.0.5.RELEASE","packageFilePaths":["/Struts2Junit4/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.springframework:spring-expression:3.0.5.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-expression:5.3.17","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-22950","vulnerabilityDetails":"In Spring Framework versions 5.3.0 - 5.3.16 and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial of service condition","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22950","cvss3Severity":"medium","cvss3Score":"5.4","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2022-22950 (Medium) detected in spring-expression-3.2.0.RELEASE.jar, spring-expression-3.0.5.RELEASE.jar - ## CVE-2022-22950 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-expression-3.2.0.RELEASE.jar</b>, <b>spring-expression-3.0.5.RELEASE.jar</b></p></summary>
<p>
<details><summary><b>spring-expression-3.2.0.RELEASE.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/SpringSource/spring-framework">https://github.com/SpringSource/spring-framework</a></p>
<p>Path to dependency file: /Struts2Spring3Hibernate/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/3.2.0.RELEASE/spring-expression-3.2.0.RELEASE.jar,/Struts2Spring3Hibernate/target/Struts2Spring3Hibernate3/WEB-INF/lib/spring-expression-3.2.0.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-expression-3.2.0.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-expression-3.0.5.RELEASE.jar</b></p></summary>
<p>Spring Framework Parent</p>
<p>Path to dependency file: /Struts2Junit4/pom.xml</p>
<p>Path to vulnerable library: /Struts2Junit4/target/Struts2Junit4-1.0/WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/3.0.5.RELEASE/spring-expression-3.0.5.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- :x: **spring-expression-3.0.5.RELEASE.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.16 and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial of service condition
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22950>CVE-2022-22950</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22950">https://tanzu.vmware.com/security/cve-2022-22950</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-expression:5.3.17</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework","packageName":"spring-expression","packageVersion":"3.2.0.RELEASE","packageFilePaths":["/Struts2Spring3Hibernate/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.springframework:spring-expression:3.2.0.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-expression:5.3.17","isBinary":false},{"packageType":"Java","groupId":"org.springframework","packageName":"spring-expression","packageVersion":"3.0.5.RELEASE","packageFilePaths":["/Struts2Junit4/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.springframework:spring-expression:3.0.5.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework:spring-expression:5.3.17","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-22950","vulnerabilityDetails":"In Spring Framework versions 5.3.0 - 5.3.16 and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial of service condition","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22950","cvss3Severity":"medium","cvss3Score":"5.4","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_defect | cve medium detected in spring expression release jar spring expression release jar cve medium severity vulnerability vulnerable libraries spring expression release jar spring expression release jar spring expression release jar spring expression language spel library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org springframework spring expression release spring expression release jar target web inf lib spring expression release jar dependency hierarchy x 
spring expression release jar vulnerable library spring expression release jar spring framework parent path to dependency file pom xml path to vulnerable library target web inf lib spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar dependency hierarchy x spring expression release jar vulnerable library found in base branch master vulnerability details in spring framework versions and older unsupported versions it is possible for a user to provide a specially crafted spel expression that may cause a denial of service condition publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring expression rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org springframework spring expression release isminimumfixversionavailable true minimumfixversion org springframework spring expression isbinary false packagetype java groupid org springframework packagename spring expression packageversion release packagefilepaths istransitivedependency false dependencytree org springframework spring expression release isminimumfixversionavailable true minimumfixversion org springframework spring expression isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails in spring framework versions and older unsupported versions it is possible for a user to provide a specially crafted spel expression that may cause a denial of service condition vulnerabilityurl | 0 |
72,437 | 31,768,883,159 | IssuesEvent | 2023-09-12 10:27:30 | gauravrs18/issue_onboarding | https://api.github.com/repos/gauravrs18/issue_onboarding | closed | dev-angular-integration-account-services-accounts-api-integration
-search-support-phone | CX-account-services | dev-angular-integration-account-services-accounts-api-integration
-search-support-phone | 1.0 | dev-angular-integration-account-services-accounts-api-integration
-search-support-phone - dev-angular-integration-account-services-accounts-api-integration
-search-support-phone | non_defect | dev angular integration account services accounts api integration search support phone dev angular integration account services accounts api integration search support phone | 0 |
225,968 | 7,496,802,120 | IssuesEvent | 2018-04-08 13:26:26 | HabitRPG/habitica | https://api.github.com/repos/HabitRPG/habitica | closed | Subscription renewal hourglasses | priority: important section: Payments status: issue: in progress status: needs reply type: medium level coding | Hello,
I subscribed for 3 months consecutive, the time finished and paypal automatically renewed the subscription for more 3 months.
But with the renewal I didn't get a hourglass for the 3 new months.
UUID: ac18def4-67c9-4d8c-9785-80914dbeb5c4
Thank you
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/9311551-subscription-renewal-hourglasses?utm_campaign=plugin&utm_content=tracker%2F68393&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F68393&utm_medium=issues&utm_source=github).
</bountysource-plugin>
| 1.0 | Subscription renewal hourglasses - Hello,
I subscribed for 3 months consecutive, the time finished and paypal automatically renewed the subscription for more 3 months.
But with the renewal I didn't get a hourglass for the 3 new months.
UUID: ac18def4-67c9-4d8c-9785-80914dbeb5c4
Thank you
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/9311551-subscription-renewal-hourglasses?utm_campaign=plugin&utm_content=tracker%2F68393&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F68393&utm_medium=issues&utm_source=github).
</bountysource-plugin>
| non_defect | subscription renewal hourglasses hello i subscribed for months consecutive the time finished and paypal automatically renewed the subscription for more months but with the renewal i didn t get a hourglass for the new months uuid thank you want to back this issue we accept bounties via | 0 |
68,697 | 9,211,687,866 | IssuesEvent | 2019-03-09 17:27:49 | glest/glest.github.io | https://api.github.com/repos/glest/glest.github.io | closed | add strategy guide for "Magic" faction | documentation help wanted | an expanded strategy guide for the Magic faction would be useful, and could be added as a separate section to https://zetaglest.github.io/docs/strategy_guide.html
The repo file to edit is https://github.com/ZetaGlest/zetaglest.github.io/blob/master/docs/strategy_guide.html
(Eventually we'll create separate pages as needed).
Preferably, a Magic strategy guide should explain multiple strategies.
Don't worry too much about grammar, or if you're English isn't very good. Ideally it will be reviewed and improved by other people over time.
@biels can you work on this? If not, you may want to subscribe to the ticket.
| 1.0 | add strategy guide for "Magic" faction - an expanded strategy guide for the Magic faction would be useful, and could be added as a separate section to https://zetaglest.github.io/docs/strategy_guide.html
The repo file to edit is https://github.com/ZetaGlest/zetaglest.github.io/blob/master/docs/strategy_guide.html
(Eventually we'll create separate pages as needed).
Preferably, a Magic strategy guide should explain multiple strategies.
Don't worry too much about grammar, or if you're English isn't very good. Ideally it will be reviewed and improved by other people over time.
@biels can you work on this? If not, you may want to subscribe to the ticket.
| non_defect | add strategy guide for magic faction an expanded strategy guide for the magic faction would be useful and could be added as a separate section to the repo file to edit is eventually we ll create separate pages as needed preferably a magic strategy guide should explain multiple strategies don t worry too much about grammar or if you re english isn t very good ideally it will be reviewed and improved by other people over time biels can you work on this if not you may want to subscribe to the ticket | 0 |
23,686 | 3,851,865,580 | IssuesEvent | 2016-04-06 05:27:56 | GPF/imame4all | https://api.github.com/repos/GPF/imame4all | closed | can you port mame 0.72 to android | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1.yousing sgs3 for mk games
2.mk games runing slow on sgs3
3.
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
mame4droid 1.2.1 4.0.4 operating sistem
Please provide any additional information below.
please port 072 mame for mk games dey ar the best ever on mame
```
Original issue reported on code.google.com by `markocur...@gmail.com` on 12 Nov 2012 at 2:35 | 1.0 | can you port mame 0.72 to android - ```
What steps will reproduce the problem?
1.yousing sgs3 for mk games
2.mk games runing slow on sgs3
3.
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
mame4droid 1.2.1 4.0.4 operating sistem
Please provide any additional information below.
please port 072 mame for mk games dey ar the best ever on mame
```
Original issue reported on code.google.com by `markocur...@gmail.com` on 12 Nov 2012 at 2:35 | defect | can you port mame to android what steps will reproduce the problem yousing for mk games mk games runing slow on what is the expected output what do you see instead what version of the product are you using on what operating system operating sistem please provide any additional information below please port mame for mk games dey ar the best ever on mame original issue reported on code google com by markocur gmail com on nov at | 1 |
2,406 | 3,441,045,930 | IssuesEvent | 2015-12-14 16:53:26 | orientechnologies/orientdb | https://api.github.com/repos/orientechnologies/orientdb | closed | Performance can be improved with adding simple caches for reflection calls in Object API | enhancement performance | When profiling I found out that several methods used in the Object API use a lot of reflection and that this can cause slowing down because a lot of these calls are slow.
This can be remedied by caching results of these calls. It can be cached for operations for which the result never changes, for instance the Fields of a Class, getting the annotations of a class, etc.
I propose to implement this at key places in the Object API. I'm preparing a pull request with some simple changes that resulted in a 50% performance gain in some calls for us, especially when the caches have been warmed up.
| True | Performance can be improved with adding simple caches for reflection calls in Object API - When profiling I found out that several methods used in the Object API use a lot of reflection and that this can cause slowing down because a lot of these calls are slow.
This can be remedied by caching results of these calls. It can be cached for operations for which the result never changes, for instance the Fields of a Class, getting the annotations of a class, etc.
I propose to implement this at key places in the Object API. I'm preparing a pull request with some simple changes that resulted in a 50% performance gain in some calls for us, especially when the caches have been warmed up.
| non_defect | performance can be improved with adding simple caches for reflection calls in object api when profiling i found out that several methods used in the object api use a lot of reflection and that this can cause slowing down because a lot of these calls are slow this can be remedied by caching results of these calls it can be cached for operations for which the result never changes for instance the fields of a class getting the annotations of a class etc i propose to implement this at key places in the object api i m preparing a pull request with some simple changes that resulted in a performance gain in some calls for us especially when the caches have been warmed up | 0 |
64,716 | 18,843,966,799 | IssuesEvent | 2021-11-11 12:57:12 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | opened | DataTable: Custom filter reset works for h:selectOneMenu but not for p:selectOneMenu | defect | **Describe the defect**
When using `p:selectOneMenu` in `DataTable` as custom filter, a call to `DataTable#clearFilters()` resets the filter of the `DataTable` but not the displayed value of the `p:selectOneMenu`. It works as expected when using `h:selectOneMenu` instead.
**Reproducer**
See [primefaces-test.zip](https://github.com/primefaces/primefaces/files/7520464/primefaces-test.zip).
**Environment:**
- PF Version: _11.0.0-RC2_
- JSF + version: _Mojarra 2.3.14.payara-p2_
- Affected browsers: _ALL_
**To Reproduce**
Steps to reproduce the behavior:
1. Start the reproducer.
2. Select 'true' in column _c1_
3. Select 'true' in column _c2_
4. Click the filter reset button.
5. Filter for `DataTable` is reset, value of _c1_ filter is reset, but value of _c2_ filter is still 'true'.
**Expected behavior**
Value of custom filter with `p:selectOneMenu` should be reset too when calling `DataTable#clearFilters()`.
| 1.0 | DataTable: Custom filter reset works for h:selectOneMenu but not for p:selectOneMenu - **Describe the defect**
When using `p:selectOneMenu` in `DataTable` as custom filter, a call to `DataTable#clearFilters()` resets the filter of the `DataTable` but not the displayed value of the `p:selectOneMenu`. It works as expected when using `h:selectOneMenu` instead.
**Reproducer**
See [primefaces-test.zip](https://github.com/primefaces/primefaces/files/7520464/primefaces-test.zip).
**Environment:**
- PF Version: _11.0.0-RC2_
- JSF + version: _Mojarra 2.3.14.payara-p2_
- Affected browsers: _ALL_
**To Reproduce**
Steps to reproduce the behavior:
1. Start the reproducer.
2. Select 'true' in column _c1_
3. Select 'true' in column _c2_
4. Click the filter reset button.
5. Filter for `DataTable` is reset, value of _c1_ filter is reset, but value of _c2_ filter is still 'true'.
**Expected behavior**
Value of custom filter with `p:selectOneMenu` should be reset too when calling `DataTable#clearFilters()`.
| defect | datatable custom filter reset works for h selectonemenu but not for p selectonemenu describe the defect when using p selectonemenu in datatable as custom filter a call to datatable clearfilters resets the filter of the datatable but not the displayed value of the p selectonemenu it works as expected when using h selectonemenu instead reproducer see environment pf version jsf version mojarra payara affected browsers all to reproduce steps to reproduce the behavior start the reproducer select true in column select true in column click the filter reset button filter for datatable is reset value of filter is reset but value of filter is still true expected behavior value of custom filter with p selectonemenu should be reset too when calling datatable clearfilters | 1 |
21,542 | 3,518,269,235 | IssuesEvent | 2016-01-12 12:01:15 | Virtual-Labs/problem-solving-iiith | https://api.github.com/repos/Virtual-Labs/problem-solving-iiith | reopened | QA_Advanced Arithmatic_UI | Category :UI Defect raised on: 26-11-2015 Developed by:IIIT Hyd Release Number Severity :S3 Status :Open Version Number :1.1 | Defect Description:
In the Landing page of "Advanced Arithmatic" experiment, the 'Home' &'Problem Solving Lab' links are present outside of the page width instead the links should be placed within the page limit inorder to maintain the page utility.
Actual Result:
In the Landing page of "Advanced Arithmatic" experiment,the 'Home' &'Problem Solving Lab' links are placed outside of the page width.
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration:8GBRAM ,
Processor:i5
Test Step Link:
https://github.com/Virtual-Labs/problem-solving-iiith/blob/master/test-cases/integration_test-cases/Advanced%20Arithmatic/Advanced%20Arithmatic_01_Usability_smk.org

| 1.0 | QA_Advanced Arithmatic_UI - Defect Description:
In the Landing page of "Advanced Arithmatic" experiment, the 'Home' &'Problem Solving Lab' links are present outside of the page width instead the links should be placed within the page limit inorder to maintain the page utility.
Actual Result:
In the Landing page of "Advanced Arithmatic" experiment,the 'Home' &'Problem Solving Lab' links are placed outside of the page width.
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration:8GBRAM ,
Processor:i5
Test Step Link:
https://github.com/Virtual-Labs/problem-solving-iiith/blob/master/test-cases/integration_test-cases/Advanced%20Arithmatic/Advanced%20Arithmatic_01_Usability_smk.org

| defect | qa advanced arithmatic ui defect description in the landing page of advanced arithmatic experiment the home problem solving lab links are present outside of the page width instead the links should be placed within the page limit inorder to maintain the page utility actual result in the landing page of advanced arithmatic experiment the home problem solving lab links are placed outside of the page width environment os windows ubuntu centos browsers firefox chrome chromium bandwidth hardware configuration processor test step link | 1 |
2,718 | 2,532,820,842 | IssuesEvent | 2015-01-23 18:44:09 | google/error-prone | https://api.github.com/repos/google/error-prone | closed | Investigate turning on tree end positions by default | migrated Priority-Medium Status-Accepted Type-Enhancement | _[Original issue](https://code.google.com/p/error-prone/issues/detail?id=228) created by **eaftan@google.com** on 2014-01-29 at 10:26 PM_
---
Currently javac only computes tree end positions if the -Xjcov option is passed at the command line, or a custom DiagnosticListener is provided. Turning on tree end positions incurs a ~50% memory penalty in javac.
We need tree end positions to construct suggested fixes, so currently we reparse the tree if we encounter any errors. However, some checks need to know the end position before we know if there are any errors (e.g., LongLiteralLowerCaseSuffix), and anyway we're not happy with our reparsing hack.
Investigate whether it is feasible to turn on -Xjcov by default. We can try building some larger open source projects with -Xjcov on and see if they pass.
Note that there has been a patch submitted to upstream javac to reduce memory usage (http://openjdk.5641.n7.nabble.com/javac-ending-positions-generation-and-DiagnosticListener-tt170348.html), but even if accepted it won't help our external users who are on older versions of javac. | 1.0 | Investigate turning on tree end positions by default - _[Original issue](https://code.google.com/p/error-prone/issues/detail?id=228) created by **eaftan@google.com** on 2014-01-29 at 10:26 PM_
---
Currently javac only computes tree end positions if the -Xjcov option is passed at the command line, or a custom DiagnosticListener is provided. Turning on tree end positions incurs a ~50% memory penalty in javac.
We need tree end positions to construct suggested fixes, so currently we reparse the tree if we encounter any errors. However, some checks need to know the end position before we know if there are any errors (e.g., LongLiteralLowerCaseSuffix), and anyway we're not happy with our reparsing hack.
Investigate whether it is feasible to turn on -Xjcov by default. We can try building some larger open source projects with -Xjcov on and see if they pass.
Note that there has been a patch submitted to upstream javac to reduce memory usage (http://openjdk.5641.n7.nabble.com/javac-ending-positions-generation-and-DiagnosticListener-tt170348.html), but even if accepted it won't help our external users who are on older versions of javac. | non_defect | investigate turning on tree end positions by default created by eaftan google com on at pm currently javac only computes tree end positions if the xjcov option is passed at the command line or a custom diagnosticlistener is provided turning on tree end positions incurs a memory penalty in javac we need tree end positions to construct suggested fixes so currently we reparse the tree if we encounter any errors however some checks need to know the end position before we know if there are any errors e g longliterallowercasesuffix and anyway we re not happy with our reparsing hack investigate whether it is feasible to turn on xjcov by default we can try building some larger open source projects with xjcov on and see if they pass note that there has been a patch submitted to upstream javac to reduce memory usage but even if accepted it won t help our external users who are on older versions of javac | 0 |
81,718 | 31,471,276,327 | IssuesEvent | 2023-08-30 07:43:20 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | secure_backup_required in .well-known/matrix/client has no effect | T-Defect | ### Steps to reproduce
My `https://mydomain.net/.well-known/matrix/client`:
```
{
"im.vector.riot.e2ee": {
"default": false
},
"io.element.e2ee": {
"default": false,
"secure_backup_required": true,
"secure_backup_setup_methods": ["passphrase"]
},
"m.homeserver": {
"base_url": "https://matrix.mydomain.net"
},
"org.matrix.msc3575.proxy": {
"url": "https://matrix.mydomain.net/sliding-sync"
}
}
```
I know it's generally working because the default encryption setting is honored.
### Outcome
#### What did you expect?
That I can use Element only after setting up secure backup [as per the docs](https://github.com/vector-im/element-web/blob/develop/docs/e2ee.md#requiring-secure-backup).
#### What happened instead?
No visible change.
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
Element version 1.11.38, Olm version 3.2.14
### Homeserver
Synapse version 1.89.0
### Will you send logs?
No | 1.0 | secure_backup_required in .well-known/matrix/client has no effect - ### Steps to reproduce
My `https://mydomain.net/.well-known/matrix/client`:
```
{
"im.vector.riot.e2ee": {
"default": false
},
"io.element.e2ee": {
"default": false,
"secure_backup_required": true,
"secure_backup_setup_methods": ["passphrase"]
},
"m.homeserver": {
"base_url": "https://matrix.mydomain.net"
},
"org.matrix.msc3575.proxy": {
"url": "https://matrix.mydomain.net/sliding-sync"
}
}
```
I know it's generally working because the default encryption setting is honored.
### Outcome
#### What did you expect?
That I can use Element only after setting up secure backup [as per the docs](https://github.com/vector-im/element-web/blob/develop/docs/e2ee.md#requiring-secure-backup).
#### What happened instead?
No visible change.
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
Element version 1.11.38, Olm version 3.2.14
### Homeserver
Synapse version 1.89.0
### Will you send logs?
No | defect | secure backup required in well known matrix client has no effect steps to reproduce my im vector riot default false io element default false secure backup required true secure backup setup methods m homeserver base url org matrix proxy url i know it s generally working because the default encryption setting is honored outcome what did you expect that i can use element only after setting up secure backup what happened instead no visible change operating system no response browser information no response url for webapp no response application version element version olm version homeserver synapse version will you send logs no | 1 |
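The `io.element.e2ee` block quoted in the issue above can be exercised directly. A short sketch (the `e2ee_settings` helper and `WELL_KNOWN` constant are illustrative, not Element's code) parses the same JSON and reads the flags a client would check:

```python
import json

# The .well-known/matrix/client document quoted in the issue above.
WELL_KNOWN = """
{
  "im.vector.riot.e2ee": {"default": false},
  "io.element.e2ee": {
    "default": false,
    "secure_backup_required": true,
    "secure_backup_setup_methods": ["passphrase"]
  },
  "m.homeserver": {"base_url": "https://matrix.mydomain.net"},
  "org.matrix.msc3575.proxy": {"url": "https://matrix.mydomain.net/sliding-sync"}
}
"""

def e2ee_settings(well_known: str) -> dict:
    """Return the io.element.e2ee section, mirroring how a client would read it."""
    doc = json.loads(well_known)
    return doc.get("io.element.e2ee", {})

settings = e2ee_settings(WELL_KNOWN)
print(settings.get("secure_backup_required", False))  # True: client should force backup setup
print(settings.get("secure_backup_setup_methods"))    # ['passphrase']
```

If this prints `True` but the client shows no change, the document is being served and parsed correctly and the problem lies in how the flag is acted upon.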
27,889 | 5,118,531,219 | IssuesEvent | 2017-01-08 07:06:08 | otros-systems/otroslogviewer | https://api.github.com/repos/otros-systems/otroslogviewer | closed | Add the possibility to adjust the size of a column to its content | auto-migrated Priority-Medium Type-Defect | ```
When "auto resize mode" is configured as "auto resize off", it would be nice to
offer the possibility to resize a column to its content like it is done it
Excel (MSOffice) by double clicking on the limit between columns header.
What version of the product are you using? On what operating system?
2011-10-14 / Win XP SP3
```
Original issue reported on code.google.com by `Renan.BE...@gmail.com` on 19 Oct 2011 at 9:25
| 1.0 | Add the possibility to adjust the size of a column to its content - ```
When "auto resize mode" is configured as "auto resize off", it would be nice to
offer the possibility to resize a column to its content like it is done it
Excel (MSOffice) by double clicking on the limit between columns header.
What version of the product are you using? On what operating system?
2011-10-14 / Win XP SP3
```
Original issue reported on code.google.com by `Renan.BE...@gmail.com` on 19 Oct 2011 at 9:25
| defect | add the possibility to adjust the size of a column to its content when auto resize mode is configured as auto resize off it would be nice to offer the possibility to resize a column to its content like it is done it excel msoffice by double clicking on the limit between columns header what version of the product are you using on what operating system win xp original issue reported on code google com by renan be gmail com on oct at | 1 |
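Autofit of the kind requested above boils down to measuring the widest cell in a column. A hypothetical sketch of the calculation (not OtrosLogViewer code; `char_width` and `padding` are made-up constants standing in for real font metrics):

```python
def autofit_width(header: str, values, char_width: int = 7, padding: int = 10) -> int:
    """Pixel width needed so the widest cell (or the header) fits, Excel-style."""
    longest = max([header, *map(str, values)], key=len)
    return len(longest) * char_width + padding

# Double-clicking a column boundary would call this for that column:
print(autofit_width("Level", ["INFO", "WARN", "ERROR"]))          # 45
print(autofit_width("Message", ["short", "a much longer line"]))  # 136
```

A real implementation would measure strings with the table's font rather than counting characters, but the shape of the logic is the same.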
2,754 | 2,607,938,470 | IssuesEvent | 2015-02-26 00:29:50 | chrsmithdemos/minify | https://api.github.com/repos/chrsmithdemos/minify | closed | i have aproblem with test_Minify_CSS.php | auto-migrated Priority-Medium Release-2.1.5 Type-Defect | ```
Are you sure this is not a problem with your configuration? (ask on the
Google Group)
Minify commit/version: latest
PHP version: dunno
What steps did I take?
1. I went to http://multigaming.co/min_unit_tests/test_all.php
2. Found errors.
3. I want your help.
Expected output: all be "PASS"
Actual output:
PASS: Minify : 304 response (1 of 1 tests run so far have passed)
PASS: Minify : cache, and minifier classes aren't loaded for 304s (2 of 2 tests
run so far have passed)
PASS: Minify : JS and Expires (3 of 3 tests run so far have passed)
PASS: Minify : Issue 73 (4 of 4 tests run so far have passed)
PASS: Minify : Issue 89 : bubbleCssImports (5 of 5 tests run so far have passed)
PASS: Minify : Issue 89 : detect invalid imports (6 of 6 tests run so far have
passed)
PASS: Minify : Issue 89 : don't warn about valid imports (7 of 7 tests run so
far have passed)
PASS: Minify : CSS and Etag/Last-Modified (8 of 8 tests run so far have passed)
PASS: Minify_Build : single file path (9 of 9 tests run so far have passed)
PASS: Minify_Build : multiple file paths (10 of 10 tests run so far have passed)
PASS: Minify_Build : file path and a Minify_Source (11 of 11 tests run so far
have passed)
PASS: Minify_Build : uri() with no querystring (12 of 12 tests run so far have
passed)
PASS: Minify_Build : uri() with existing querystring (13 of 13 tests run so far
have passed)
PASS: Minify_HTML_Helper : given URIs (14 of 14 tests run so far have passed)
PASS: Minify_HTML_Helper : given filepaths (15 of 15 tests run so far have
passed)
PASS: Minify_HTML_Helper : non-existent group & debug (16 of 16 tests run so
far have passed)
PASS: Minify_HTML_Helper : existing group (17 of 17 tests run so far have
passed)
PASS: utils.php : Minify_mtime w/ files & obj (18 of 18 tests run so far have
passed)
PASS: utils.php : Minify_mtime w/ obj & group (19 of 19 tests run so far have
passed)
NOTE: Minify_Cache_File : path is set to: '/tmp'.
PASS: Minify_Cache_File : store (20 of 20 tests run so far have passed)
PASS: Minify_Cache_File : getSize (21 of 21 tests run so far have passed)
PASS: Minify_Cache_File : isValid (22 of 22 tests run so far have passed)
PASS: Minify_Cache_File : display (23 of 23 tests run so far have passed)
PASS: Minify_Cache_File : fetch (24 of 24 tests run so far have passed)
PASS: Minify_Cache_File : store w/ lock (25 of 25 tests run so far have passed)
PASS: Minify_Cache_File : getSize (26 of 26 tests run so far have passed)
PASS: Minify_Cache_File : isValid (27 of 27 tests run so far have passed)
PASS: Minify_Cache_File : display w/ lock (28 of 28 tests run so far have
passed)
PASS: Minify_Cache_File : fetch w/ lock (29 of 29 tests run so far have passed)
<br />
<b>Warning</b>: dir() has been disabled for security reasons in
<b>/home/multigam/public_html/min_unit_tests/test_Minify_CSS.php</b> on line
<b>11</b><br />
<br />
<b>Fatal error</b>: Call to a member function read() on a non-object in
<b>/home/multigam/public_html/min_unit_tests/test_Minify_CSS.php</b> on line
<b>12</b><br />
```
-----
Original issue reported on code.google.com by `m96a...@gmail.com` on 21 May 2014 at 10:17 | 1.0 | i have aproblem with test_Minify_CSS.php - ```
Are you sure this is not a problem with your configuration? (ask on the
Google Group)
Minify commit/version: latest
PHP version: dunno
What steps did I take?
1. I went to http://multigaming.co/min_unit_tests/test_all.php
2. Found errors.
3. I want your help.
Expected output: all be "PASS"
Actual output:
PASS: Minify : 304 response (1 of 1 tests run so far have passed)
PASS: Minify : cache, and minifier classes aren't loaded for 304s (2 of 2 tests
run so far have passed)
PASS: Minify : JS and Expires (3 of 3 tests run so far have passed)
PASS: Minify : Issue 73 (4 of 4 tests run so far have passed)
PASS: Minify : Issue 89 : bubbleCssImports (5 of 5 tests run so far have passed)
PASS: Minify : Issue 89 : detect invalid imports (6 of 6 tests run so far have
passed)
PASS: Minify : Issue 89 : don't warn about valid imports (7 of 7 tests run so
far have passed)
PASS: Minify : CSS and Etag/Last-Modified (8 of 8 tests run so far have passed)
PASS: Minify_Build : single file path (9 of 9 tests run so far have passed)
PASS: Minify_Build : multiple file paths (10 of 10 tests run so far have passed)
PASS: Minify_Build : file path and a Minify_Source (11 of 11 tests run so far
have passed)
PASS: Minify_Build : uri() with no querystring (12 of 12 tests run so far have
passed)
PASS: Minify_Build : uri() with existing querystring (13 of 13 tests run so far
have passed)
PASS: Minify_HTML_Helper : given URIs (14 of 14 tests run so far have passed)
PASS: Minify_HTML_Helper : given filepaths (15 of 15 tests run so far have
passed)
PASS: Minify_HTML_Helper : non-existent group & debug (16 of 16 tests run so
far have passed)
PASS: Minify_HTML_Helper : existing group (17 of 17 tests run so far have
passed)
PASS: utils.php : Minify_mtime w/ files & obj (18 of 18 tests run so far have
passed)
PASS: utils.php : Minify_mtime w/ obj & group (19 of 19 tests run so far have
passed)
NOTE: Minify_Cache_File : path is set to: '/tmp'.
PASS: Minify_Cache_File : store (20 of 20 tests run so far have passed)
PASS: Minify_Cache_File : getSize (21 of 21 tests run so far have passed)
PASS: Minify_Cache_File : isValid (22 of 22 tests run so far have passed)
PASS: Minify_Cache_File : display (23 of 23 tests run so far have passed)
PASS: Minify_Cache_File : fetch (24 of 24 tests run so far have passed)
PASS: Minify_Cache_File : store w/ lock (25 of 25 tests run so far have passed)
PASS: Minify_Cache_File : getSize (26 of 26 tests run so far have passed)
PASS: Minify_Cache_File : isValid (27 of 27 tests run so far have passed)
PASS: Minify_Cache_File : display w/ lock (28 of 28 tests run so far have
passed)
PASS: Minify_Cache_File : fetch w/ lock (29 of 29 tests run so far have passed)
<br />
<b>Warning</b>: dir() has been disabled for security reasons in
<b>/home/multigam/public_html/min_unit_tests/test_Minify_CSS.php</b> on line
<b>11</b><br />
<br />
<b>Fatal error</b>: Call to a member function read() on a non-object in
<b>/home/multigam/public_html/min_unit_tests/test_Minify_CSS.php</b> on line
<b>12</b><br />
```
-----
Original issue reported on code.google.com by `m96a...@gmail.com` on 21 May 2014 at 10:17 | defect | i have aproblem with test minify css php are you sure this is not a problem with your configuration ask on the google group minify commit version latest php version dunno what steps did i made i went to found errors want your help expected output all be pass actual output pass minify response of tests run so far have passed pass minify cache and minifier classes aren t loaded for of tests run so far have passed pass minify js and expires of tests run so far have passed pass minify issue of tests run so far have passed pass minify issue bubblecssimports of tests run so far have passed pass minify issue detect invalid imports of tests run so far have passed pass minify issue don t warn about valid imports of tests run so far have passed pass minify css and etag last modified of tests run so far have passed pass minify build single file path of tests run so far have passed pass minify build multiple file paths of tests run so far have passed pass minify build file path and a minify source of tests run so far have passed pass minify build uri with no querystring of tests run so far have passed pass minify build uri with existing querystring of tests run so far have passed pass minify html helper given uris of tests run so far have passed pass minify html helper given filepaths of tests run so far have passed pass minify html helper non existent group debug of tests run so far have passed pass minify html helper existing group of tests run so far have passed pass utils php minify mtime w files obj of tests run so far have passed pass utils php minify mtime w obj group of tests run so far have passed note minify cache file path is set to tmp pass minify cache file store of tests run so far have passed pass minify cache file getsize of tests run so far have passed pass minify cache file isvalid of tests run so far have passed pass minify cache file display of tests run so 
far have passed pass minify cache file fetch of tests run so far have passed pass minify cache file store w lock of tests run so far have passed pass minify cache file getsize of tests run so far have passed pass minify cache file isvalid of tests run so far have passed pass minify cache file display w lock of tests run so far have passed pass minify cache file fetch w lock of tests run so far have passed warning dir has been disabled for security reasons in home multigam public html min unit tests test minify css php on line fatal error call to a member function read on a non object in home multigam public html min unit tests test minify css php on line original issue reported on code google com by gmail com on may at | 1 |
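The fatal error in the report above comes from a PHP runtime where `dir()` was disabled for security reasons; the test script then calls a method on the failed result and dies mid-run. That failure mode can be avoided by probing for required capabilities up front, sketched here in Python (this harness is illustrative, not part of Minify):

```python
import os

def require_capabilities(checks: dict) -> None:
    """Fail fast, with a clear message, if any required runtime feature is missing."""
    missing = [name for name, probe in checks.items() if not probe()]
    if missing:
        raise RuntimeError("disabled or unavailable: " + ", ".join(missing))

# Probe everything the test suite needs before any test executes.
require_capabilities({
    "os.listdir": lambda: callable(getattr(os, "listdir", None)),
    "open": lambda: callable(open),
})
print("all required capabilities are available")
```

One clear error listing the disabled functions is far easier to act on than a fatal error halfway through a test run.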
66,001 | 19,849,376,214 | IssuesEvent | 2022-01-21 10:31:32 | matrix-org/synapse | https://api.github.com/repos/matrix-org/synapse | closed | TypeError: Object of type FrozenEvent is not JSON serializable | T-Defect X-Release-Blocker X-Regression | When testing out commit d1e6333f12d8a121c649c6176e0ac5e915345366 on matrix.org today, we ended up with the following stacktrace:
<details>
<summary>stacktrace</summary>
```
2022-01-19 13:31:19,225 - twisted - 279 - CRITICAL - sentinel - Unhandled error in Deferred:
2022-01-19 13:31:19,237 - twisted - 279 - CRITICAL - sentinel -
Capture point (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/synapse/src/synapse/app/generic_worker.py", line 514, in <module>
main()
File "/home/synapse/src/synapse/app/generic_worker.py", line 510, in main
start(sys.argv[1:])
File "/home/synapse/src/synapse/app/generic_worker.py", line 505, in start
_base.start_worker_reactor("synapse-generic-worker", config)
File "/home/synapse/src/synapse/app/_base.py", line 126, in start_worker_reactor
run_command=run_command,
File "/home/synapse/src/synapse/app/_base.py", line 179, in start_reactor
File "/home/synapse/src/synapse/app/_base.py", line 163, in run
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/base.py", line 1318, in run
self.mainLoop()
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/base.py", line 1328, in mainLoop
reactorBaseSelf.runUntilCurrent()
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/base.py", line 967, in runUntilCurrent
f(*a, **kw)
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/defer.py", line 701, in errback
self._startRunCallbacks(fail)
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/defer.py", line 764, in _startRunCallbacks
self._runCallbacks()
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/defer.py", line 859, in _runCallbacks
current.result, *args, **kwargs
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/defer.py", line 1751, in gotResult
current_context.run(_inlineCallbacks, r, gen, status)
Traceback (most recent call last):
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/defer.py", line 1658, in _inlineCallbacks
cast(Failure, result).throwExceptionIntoGenerator, gen
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/python/failure.py", line 500, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/home/synapse/src/synapse/http/server.py", line 779, in _async_write_json_to_request_in_thread
json_str = await defer_to_thread(request.reactor, encode, span)
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/python/threadpool.py", line 238, in inContext
result = inContext.theWork() # type: ignore[attr-defined]
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/python/threadpool.py", line 255, in <lambda>
ctx, func, *args, **kw
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/python/context.py", line 83, in callWithContext
return func(*args, **kw)
File "/home/synapse/src/synapse/logging/context.py", line 958, in g
return f(*args, **kwargs)
File "/home/synapse/src/synapse/http/server.py", line 772, in encode
res = json_encoder(json_object)
File "/home/synapse/src/synapse/http/server.py", line 663, in _encode_json_bytes
return json_encoder.encode(json_object).encode("utf-8")
File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/home/synapse/src/synapse/util/__init__.py", line 67, in _handle_frozendict
"Object of type %s is not JSON serializable" % obj.__class__.__name__
TypeError: Object of type FrozenEvent is not JSON serializable
```
</details>
Sentry link for those with access: https://sentry.matrix.org/sentry/synapse-matrixorg/issues/239798/
This issue only occurs on our `client_reader` workers, but does occur frequently across multiple instances of it.
Unfortunately no surrounding processed request lines are relevant. However, there do seem to be a number of replication related lines, so that may be a clue...
Note that a similar error has occurred before: https://github.com/matrix-org/synapse/issues/8678. | 1.0 | TypeError: Object of type FrozenEvent is not JSON serializable - When testing out commit d1e6333f12d8a121c649c6176e0ac5e915345366 on matrix.org today, we ended up with the following stacktrace:
<details>
<summary>stacktrace</summary>
```
2022-01-19 13:31:19,225 - twisted - 279 - CRITICAL - sentinel - Unhandled error in Deferred:
2022-01-19 13:31:19,237 - twisted - 279 - CRITICAL - sentinel -
Capture point (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/synapse/src/synapse/app/generic_worker.py", line 514, in <module>
main()
File "/home/synapse/src/synapse/app/generic_worker.py", line 510, in main
start(sys.argv[1:])
File "/home/synapse/src/synapse/app/generic_worker.py", line 505, in start
_base.start_worker_reactor("synapse-generic-worker", config)
File "/home/synapse/src/synapse/app/_base.py", line 126, in start_worker_reactor
run_command=run_command,
File "/home/synapse/src/synapse/app/_base.py", line 179, in start_reactor
File "/home/synapse/src/synapse/app/_base.py", line 163, in run
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/base.py", line 1318, in run
self.mainLoop()
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/base.py", line 1328, in mainLoop
reactorBaseSelf.runUntilCurrent()
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/base.py", line 967, in runUntilCurrent
f(*a, **kw)
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/defer.py", line 701, in errback
self._startRunCallbacks(fail)
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/defer.py", line 764, in _startRunCallbacks
self._runCallbacks()
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/defer.py", line 859, in _runCallbacks
current.result, *args, **kwargs
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/defer.py", line 1751, in gotResult
current_context.run(_inlineCallbacks, r, gen, status)
Traceback (most recent call last):
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/internet/defer.py", line 1658, in _inlineCallbacks
cast(Failure, result).throwExceptionIntoGenerator, gen
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/python/failure.py", line 500, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/home/synapse/src/synapse/http/server.py", line 779, in _async_write_json_to_request_in_thread
json_str = await defer_to_thread(request.reactor, encode, span)
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/python/threadpool.py", line 238, in inContext
result = inContext.theWork() # type: ignore[attr-defined]
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/python/threadpool.py", line 255, in <lambda>
ctx, func, *args, **kw
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/home/synapse/env-py37/lib/python3.7/site-packages/twisted/python/context.py", line 83, in callWithContext
return func(*args, **kw)
File "/home/synapse/src/synapse/logging/context.py", line 958, in g
return f(*args, **kwargs)
File "/home/synapse/src/synapse/http/server.py", line 772, in encode
res = json_encoder(json_object)
File "/home/synapse/src/synapse/http/server.py", line 663, in _encode_json_bytes
return json_encoder.encode(json_object).encode("utf-8")
File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/home/synapse/src/synapse/util/__init__.py", line 67, in _handle_frozendict
"Object of type %s is not JSON serializable" % obj.__class__.__name__
TypeError: Object of type FrozenEvent is not JSON serializable
```
</details>
Sentry link for those with access: https://sentry.matrix.org/sentry/synapse-matrixorg/issues/239798/
This issue only occurs on our `client_reader` workers, but does occur frequently across multiple instances of it.
Unfortunately no surrounding processed request lines are relevant. However, there do seem to be a number of replication related lines, so that may be a clue...
Note that a similar error has occurred before: https://github.com/matrix-org/synapse/issues/8678. | defect | typeerror object of type frozenevent is not json serializable when testing out commit on matrix org today we ended up with the following stacktrace stacktrace twisted critical sentinel unhandled error in deferred twisted critical sentinel capture point most recent call last file usr local lib runpy py line in run module as main main mod spec file usr local lib runpy py line in run code exec code run globals file home synapse src synapse app generic worker py line in main file home synapse src synapse app generic worker py line in main start sys argv file home synapse src synapse app generic worker py line in start base start worker reactor synapse generic worker config file home synapse src synapse app base py line in start worker reactor run command run command file home synapse src synapse app base py line in start reactor file home synapse src synapse app base py line in run file home synapse env lib site packages twisted internet base py line in run self mainloop file home synapse env lib site packages twisted internet base py line in mainloop reactorbaseself rununtilcurrent file home synapse env lib site packages twisted internet base py line in rununtilcurrent f a kw file home synapse env lib site packages twisted internet defer py line in errback self startruncallbacks fail file home synapse env lib site packages twisted internet defer py line in startruncallbacks self runcallbacks file home synapse env lib site packages twisted internet defer py line in runcallbacks current result args kwargs file home synapse env lib site packages twisted internet defer py line in gotresult current context run inlinecallbacks r gen status traceback most recent call last file home synapse env lib site packages twisted internet defer py line in inlinecallbacks cast failure result throwexceptionintogenerator gen file home synapse env lib site packages twisted python 
failure py line in throwexceptionintogenerator return g throw self type self value self tb file home synapse src synapse http server py line in async write json to request in thread json str await defer to thread request reactor encode span file home synapse env lib site packages twisted python threadpool py line in incontext result incontext thework type ignore file home synapse env lib site packages twisted python threadpool py line in ctx func args kw file home synapse env lib site packages twisted python context py line in callwithcontext return self currentcontext callwithcontext ctx func args kw file home synapse env lib site packages twisted python context py line in callwithcontext return func args kw file home synapse src synapse logging context py line in g return f args kwargs file home synapse src synapse http server py line in encode res json encoder json object file home synapse src synapse http server py line in encode json bytes return json encoder encode json object encode utf file usr local lib json encoder py line in encode chunks self iterencode o one shot true file usr local lib json encoder py line in iterencode return iterencode o file home synapse src synapse util init py line in handle frozendict object of type s is not json serializable obj class name typeerror object of type frozenevent is not json serializable sentry link for those with access this issue only occurs on our client reader workers but does occur frequently across multiple instances of it unfortunately no surrounding processed request lines are relevant however there do seem to be a number of replication related lines so that may be a clue note that a similar error has occurred before | 1 |
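The `TypeError` at the bottom of the trace above is the stdlib `json` encoder hitting an object it has no rule for. A minimal reproduction with a stand-in class (this `FrozenEvent` is a hypothetical sketch, not Synapse's actual event type) shows both the failure and the usual `default=` escape hatch:

```python
import json

class FrozenEvent:
    """Hypothetical stand-in for an object the default encoder cannot handle."""
    def __init__(self, event_id: str, event_type: str):
        self.event_id = event_id
        self.type = event_type

    def get_dict(self) -> dict:
        return {"event_id": self.event_id, "type": self.type}

event = FrozenEvent("$abc123", "m.room.message")

# Reproduces the error class from the trace above.
try:
    json.dumps(event)
except TypeError as exc:
    print(exc)  # Object of type FrozenEvent is not JSON serializable

# A `default=` hook tells the encoder how to turn unknown objects into plain data.
encoded = json.dumps(event, default=lambda o: o.get_dict())
print(encoded)  # {"event_id": "$abc123", "type": "m.room.message"}
```

The bug class, then, is an un-flattened event object reaching the response encoder; the fix is to convert it to plain data before (or during) encoding.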
6,124 | 2,610,221,580 | IssuesEvent | 2015-02-26 19:10:18 | chrsmith/somefinders | https://api.github.com/repos/chrsmith/somefinders | opened | коляска bogus инструкция | auto-migrated Priority-Medium Type-Defect | ```
'''Велислав Кузнецов'''
Good day, I just can't find the Bogus stroller
manual. It was posted here once already.
'''Амос Волков'''
Here is a good site where you can download it
http://bit.ly/1h3BngV
'''Владлен Попов'''
Thanks, that seems to be it, but it asks me to enter a phone number
'''Габриель Носков'''
No, that does not affect your balance
'''Варлам Селезнёв'''
Nope, everything is fine, nothing was charged to me
File information: коляска bogus инструкция
Uploaded: this month
Times downloaded: 1098
Rating: 459
Average download speed: 1055
Similar files: 40
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 11:59 | 1.0 | коляска bogus инструкция - ```
'''Велислав Кузнецов'''
Good day, I just can't find the Bogus stroller
manual. It was posted here once already.
'''Амос Волков'''
Here is a good site where you can download it
http://bit.ly/1h3BngV
'''Владлен Попов'''
Thanks, that seems to be it, but it asks me to enter a phone number
'''Габриель Носков'''
No, that does not affect your balance
'''Варлам Селезнёв'''
Nope, everything is fine, nothing was charged to me
File information: коляска bogus инструкция
Uploaded: this month
Times downloaded: 1098
Rating: 459
Average download speed: 1055
Similar files: 40
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 11:59 | defect | коляска bogus инструкция велислав кузнецов день добрый никак не могу найти коляска bogus инструкция как то выкладывали уже амос волков вот хороший сайт где можно скачать владлен попов спасибо вроде то но просит телефон вводить габриель носков не это не влияет на баланс варлам селезнёв неа все ок у меня ничего не списало информация о файле коляска bogus инструкция загружен в этом месяце скачан раз рейтинг средняя скорость скачивания похожих файлов original issue reported on code google com by kondense gmail com on dec at | 1 |
67,130 | 20,914,171,923 | IssuesEvent | 2022-03-24 11:59:22 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Videos don't get a thumbnail preview when you upload them | T-Defect | ### Steps to reproduce
1. Upload a video
2. Observe that the confirmation dialog doesn't include a thumbnail (unlike images, which get a thumbnail):
<img width="372" alt="Screenshot 2022-03-22 at 12 32 53" src="https://user-images.githubusercontent.com/1294269/159810182-170ad92d-70f1-4dce-ae87-346a13c40b14.png">
(whereas if it were an image, you'd see something like this:
<img width="500" alt="Screenshot 2022-03-23 at 23 02 22" src="https://user-images.githubusercontent.com/1294269/159810472-ea076050-7be5-40a8-94da-dc6fb497867f.png">
)
3. Accidentally post the wrong video
### Outcome
#### What did you expect?
Given we successfully thumbnail videos when we send them, we should display the thumbnail in the confirmation dialog, as we do for images.
#### What happened instead?
No thumbnail results in a risk of uploading the wrong video to a room, with subsequent awkward conversations with HR.
### Operating system
macos
### Browser information
nightly
### URL for webapp
nightly
### Application version
nightly
### Homeserver
matrix.org
### Will you send logs?
No | 1.0 | Videos don't get a thumbnail preview when you upload them - ### Steps to reproduce
1. Upload a video
2. Observe that the confirmation dialog doesn't include a thumbnail (unlike images, which get a thumbnail):
<img width="372" alt="Screenshot 2022-03-22 at 12 32 53" src="https://user-images.githubusercontent.com/1294269/159810182-170ad92d-70f1-4dce-ae87-346a13c40b14.png">
(whereas if it were an image, you'd see something like this:
<img width="500" alt="Screenshot 2022-03-23 at 23 02 22" src="https://user-images.githubusercontent.com/1294269/159810472-ea076050-7be5-40a8-94da-dc6fb497867f.png">
)
3. Accidentally post the wrong video
### Outcome
#### What did you expect?
Given we successfully thumbnail videos when we send them, we should display the thumbnail in the confirmation dialog, as we do for images.
#### What happened instead?
No thumbnail results in a risk of uploading the wrong video to a room, with subsequent awkward conversations with HR.
### Operating system
macos
### Browser information
nightly
### URL for webapp
nightly
### Application version
nightly
### Homeserver
matrix.org
### Will you send logs?
No | defect | videos don t get a thumbnail preview when you upload them steps to reproduce upload a video observe that the confirmation dialog doesn t include a thumbnail unlike images which get a thumbnail img width alt screenshot at src whereas if it were an image you d see something like this img width alt screenshot at src accidentally post the wrong video outcome what did you expect given we successfully thumbnail videos when we send them we should display the thumbnail in the confirmation dialog as we do for images what happened instead no thumbnail results in risk of uploading wrong video to room and subsequent awkward conversations with hr operating system macos browser information nightly url for webapp nightly application version nightly homeserver matrix org will you send logs no | 1 |
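The asymmetry in the report above comes down to deciding which uploads deserve a preview. A tiny, hypothetical sketch of that decision (not Element's code), treating video like image based on the guessed MIME type:

```python
import mimetypes

def needs_thumbnail(filename: str) -> bool:
    """Previewable uploads: anything whose MIME type is image/* or video/*."""
    mime, _ = mimetypes.guess_type(filename)
    return mime is not None and mime.split("/")[0] in ("image", "video")

print(needs_thumbnail("holiday.mp4"))  # True: videos should get a preview too
print(needs_thumbnail("photo.png"))    # True
print(needs_thumbnail("notes.txt"))    # False
```

Since the client already thumbnails videos at send time, the same frame could back the confirmation dialog.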
31,942 | 6,665,750,366 | IssuesEvent | 2017-10-03 03:40:07 | catmaid/CATMAID | https://api.github.com/repos/catmaid/CATMAID | closed | Spaces prepended to neuron names in the Connectivity widget | status: done type: defect | When searching for upstream neurons with e.g.:
/^KC
... nothing is listed. Removing the '^' makes it work.
Then, this works (one space):
/ KC
... and this also (two spaces):
/ KC
... but this doesn't (three spaces):
/ KC
And then this works (the '^' plus two spaces):
/^ KC
None of the neuron names per se have any spaces before the first character.
And neuron names render without prepended spaces in the rows. | 1.0 | Spaces prepended to neuron names in the Connectivity widget - When searching for upstream neurons with e.g.:
/^KC
... nothing is listed. Removing the '^' makes it work.
Then, this works (one space):
/ KC
... and this also (two spaces):
/ KC
... but this doesn't (three spaces):
/ KC
And then this works (the '^' plus two spaces):
/^ KC
None of the neuron names per se have any spaces before the first character.
And neuron names render without prepended spaces in the rows. | defect | spaces prepended to neuron names in the connectivity widget when searching for upstream neurons with e g kc nothing lists removing the then works then this works one space kc and this also two spaces kc but this doesn t three spaces kc and then this works the plus two spaces kc none of the neuron names per se have any spaces before the first character and neuron names render without prepended spaces in the rows | 1 |
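The symptom pattern in the report above (anchored search fails, matching two explicit leading spaces succeeds) is exactly what leading padding does to a `^`-anchored regex. A small illustration with a hypothetical neuron name:

```python
import re

# Hypothetical neuron name as the widget might receive it: the display
# string has spaces prepended, even though the stored name does not.
stored_name = "KC-alpha-1"
rendered_name = "  KC-alpha-1"  # two leading spaces, as the report suggests

print(bool(re.search(r"^KC", stored_name)))        # True: anchor matches the real name
print(bool(re.search(r"^KC", rendered_name)))      # False: leading spaces break the anchor
print(bool(re.search(r"KC", rendered_name)))       # True: unanchored search still works
print(bool(re.search(r"^ {2}KC", rendered_name)))  # True: matching the padding explicitly
```

This supports the report's conclusion: the names themselves are clean, so the spaces must be introduced somewhere in the widget's search path.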
42,345 | 10,980,308,924 | IssuesEvent | 2019-11-30 13:22:20 | tulir/mautrix-telegram | https://api.github.com/repos/tulir/mautrix-telegram | closed | Username mention conversion to matrix is sometimes case-sensitive | bug: defect | Probably when the puppet hasn't been cached? | 1.0 | Username mention conversion to matrix is sometimes case-sensitive - Probably when the puppet hasn't been cached? | defect | username mention conversion to matrix is sometimes case sensitive probably when the puppet hasn t been cached | 1 |
69,354 | 22,322,194,603 | IssuesEvent | 2022-06-14 07:33:38 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | Menu: Using RequestScoped model is broken | :lady_beetle: defect elite | ### Describe the bug
Using a `@RequestScoped` model breaks in 12.0.0 due to #8443
### Reproducer
[pf-8443.zip](https://github.com/primefaces/primefaces/files/8863936/pf-8443.zip)
Use the reproducer in `@ViewScoped` and everything works and with `@RequestScoped` everything fails.
### Expected behavior
MenuModel should be backwards compatible with previous PF while fixing bugs
### PrimeFaces edition
Community
### PrimeFaces version
12.0.0-SNAPSHOT
### Theme
ALL
### JSF implementation
All
### JSF version
ALL
### Browser(s)
ALL | 1.0 | Menu: Using RequestScoped model is broken - ### Describe the bug
Using a `@RequestScoped` model breaks in 12.0.0 due to #8443
### Reproducer
[pf-8443.zip](https://github.com/primefaces/primefaces/files/8863936/pf-8443.zip)
Use the reproducer in `@ViewScoped` and everything works and with `@RequestScoped` everything fails.
### Expected behavior
MenuModel should be backwards compatible with previous PF while fixing bugs
### PrimeFaces edition
Community
### PrimeFaces version
12.0.0-SNAPSHOT
### Theme
ALL
### JSF implementation
All
### JSF version
ALL
### Browser(s)
ALL | defect | menu using requestscoped model is broken describe the bug using a requestscoped model breaks in due to reproducer use the reproducer in viewscoped and everything works and with requestscoped everything fails expected behavior menumodel should be backwards compatible with previous pf while fixing bugs primefaces edition community primefaces version snapshot theme all jsf implementation all jsf version all browser s all | 1 |
114,877 | 17,266,880,800 | IssuesEvent | 2021-07-22 14:44:59 | turkdevops/php-src | https://api.github.com/repos/turkdevops/php-src | closed | CVE-2020-7062 (High) detected in php-srcphp-7.1.0RC3 - autoclosed | security vulnerability | ## CVE-2020-7062 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>php-srcphp-7.1.0RC3</b></p></summary>
<p>
<p>The PHP Interpreter</p>
<p>Library home page: <a href=https://github.com/madorin/php-src.git>https://github.com/madorin/php-src.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/php-src/commit/ec57f9143f2fcf2e9a8d3dfa268da689d11be5e2">ec57f9143f2fcf2e9a8d3dfa268da689d11be5e2</a></p>
<p>Found in base branch: <b>microseconds</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>php-src/ext/session/session.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In PHP versions 7.2.x below 7.2.28, 7.3.x below 7.3.15 and 7.4.x below 7.4.3, when using file upload functionality, if upload progress tracking is enabled, but session.upload_progress.cleanup is set to 0 (disabled), and the file upload fails, the upload procedure would try to clean up data that does not exist and encounter null pointer dereference, which would likely lead to a crash.
<p>Publish Date: 2020-02-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7062>CVE-2020-7062</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7062">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7062</a></p>
<p>Release Date: 2020-02-27</p>
<p>Fix Resolution: php-7.2.28,php-7.3.15,php-7.4.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7062 (High) detected in php-srcphp-7.1.0RC3 - autoclosed - ## CVE-2020-7062 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>php-srcphp-7.1.0RC3</b></p></summary>
<p>
<p>The PHP Interpreter</p>
<p>Library home page: <a href=https://github.com/madorin/php-src.git>https://github.com/madorin/php-src.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/php-src/commit/ec57f9143f2fcf2e9a8d3dfa268da689d11be5e2">ec57f9143f2fcf2e9a8d3dfa268da689d11be5e2</a></p>
<p>Found in base branch: <b>microseconds</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>php-src/ext/session/session.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In PHP versions 7.2.x below 7.2.28, 7.3.x below 7.3.15 and 7.4.x below 7.4.3, when using file upload functionality, if upload progress tracking is enabled, but session.upload_progress.cleanup is set to 0 (disabled), and the file upload fails, the upload procedure would try to clean up data that does not exist and encounter null pointer dereference, which would likely lead to a crash.
<p>Publish Date: 2020-02-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7062>CVE-2020-7062</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7062">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7062</a></p>
<p>Release Date: 2020-02-27</p>
<p>Fix Resolution: php-7.2.28,php-7.3.15,php-7.4.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in php srcphp autoclosed cve high severity vulnerability vulnerable library php srcphp the php interpreter library home page a href found in head commit a href found in base branch microseconds vulnerable source files php src ext session session c vulnerability details in php versions x below x below and x below when using file upload functionality if upload progress tracking is enabled but session upload progress cleanup is set to disabled and the file upload fails the upload procedure would try to clean up data that does not exist and encounter null pointer dereference which would likely lead to a crash publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution php php php step up your open source security game with whitesource | 0 |
535,174 | 15,683,641,073 | IssuesEvent | 2021-03-25 09:02:33 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | JWT Authentication | API-M 4.0.0 Feature/KeyMgt Priority/High Type/Bug | ### Description:
Developer portal JWT Authentication
### Steps to reproduce:
Follow the documentation on JWT authentication.
https://apim.docs.wso2.com/en/latest/learn/api-security/oauth2/access-token-types/jwt-tokens/#secure-apis-using-jwt-self-contained-access-tokens
1. After step 6 click generate access token again
2. Click update
3. Following error occurs.



Affected Product Version:
APIM-4.0.0 Alpha
| 1.0 | JWT Authentication - ### Description:
Developer portal JWT Authentication
### Steps to reproduce:
Follow the documentation on JWT authentication.
https://apim.docs.wso2.com/en/latest/learn/api-security/oauth2/access-token-types/jwt-tokens/#secure-apis-using-jwt-self-contained-access-tokens
1. After step 6 click generate access token again
2. Click update
3. Following error occurs.



Affected Product Version:
APIM-4.0.0 Alpha
| non_defect | jwt authentication description developer portal jwt authentication steps to reproduce follow the documentation on jwt authentication after step click generate access token again click update following error occurs affected product version apim alpha | 0 |
49,605 | 13,187,239,047 | IssuesEvent | 2020-08-13 02:47:16 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | [steamshovel] use of __all__ confuses sphinx documentation (Trac #1739) | Incomplete Migration Migrated from Trac combo core defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1739">https://code.icecube.wisc.edu/ticket/1739</a>, reported by kjmeagher and owned by hdembinski</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "`artists/__init__.py` uses `__all__` to specify the submodules in the package, but this confuses sphinx\n{{{\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute AngleClock\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute Bubbles\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute CoordinateSystem\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute DOMLabel\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute DOMLaunchHistogram\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute Ice\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute LEDPowerHouse\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute ParticleUncertainty\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: 
WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute PhotonPaths\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute PlaneWave\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute Position\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute RecoPulseWaveform\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute TextSummary\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute UserLabel\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute Waveform\n\n}}}\n",
"reporter": "kjmeagher",
"cc": "",
"resolution": "wontfix",
"_ts": "1550067158057333",
"component": "combo core",
"summary": "[steamshovel] use of __all__ confuses sphinx documentation",
"priority": "minor",
"keywords": "documentation",
"time": "2016-06-10T08:35:27",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [steamshovel] use of __all__ confuses sphinx documentation (Trac #1739) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1739">https://code.icecube.wisc.edu/ticket/1739</a>, reported by kjmeagher and owned by hdembinski</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "`artists/__init__.py` uses `__all__` to specify the submodules in the package, but this confuses sphinx\n{{{\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute AngleClock\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute Bubbles\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute CoordinateSystem\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute DOMLabel\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute DOMLaunchHistogram\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute Ice\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute LEDPowerHouse\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute ParticleUncertainty\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: 
WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute PhotonPaths\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute PlaneWave\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute Position\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute RecoPulseWaveform\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute TextSummary\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute UserLabel\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:4: WARNING: missing attribute mentioned in :members: or __all__: module icecube.steamshovel.artists, attribute Waveform\n\n}}}\n",
"reporter": "kjmeagher",
"cc": "",
"resolution": "wontfix",
"_ts": "1550067158057333",
"component": "combo core",
"summary": "[steamshovel] use of __all__ confuses sphinx documentation",
"priority": "minor",
"keywords": "documentation",
"time": "2016-06-10T08:35:27",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
</p>
</details>
| defect | use of all confuses sphinx documentation trac migrated from json status closed changetime description artists init py uses all to specify the submodules in the package but this confuses sphinx n n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute angleclock n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute bubbles n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute coordinatesystem n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute domlabel n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute domlaunchhistogram n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute ice n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute ledpowerhouse n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute particleuncertainty n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing 
attribute mentioned in members or all module icecube steamshovel artists attribute photonpaths n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute planewave n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute position n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute recopulsewaveform n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute textsummary n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute userlabel n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning missing attribute mentioned in members or all module icecube steamshovel artists attribute waveform n n n reporter kjmeagher cc resolution wontfix ts component combo core summary use of all confuses sphinx documentation priority minor keywords documentation time milestone owner hdembinski type defect | 1 |
358,438 | 10,618,532,320 | IssuesEvent | 2019-10-13 05:28:38 | k8smeetup/website-tasks | https://api.github.com/repos/k8smeetup/website-tasks | opened | /docs/concepts/policy/limit-range.md | lang/zh priority/P0 sync/update version/1.16 welcome | Source File: [/docs/concepts/policy/limit-range.md](https://github.com/kubernetes/website/blob/release-1.16/content/en/docs/concepts/policy/limit-range.md)
Diff 查看原始文档更新差异命令:
```bash
git diff release-1.14 release-1.16 -- content/en/docs/concepts/policy/limit-range.md
``` | 1.0 | /docs/concepts/policy/limit-range.md - Source File: [/docs/concepts/policy/limit-range.md](https://github.com/kubernetes/website/blob/release-1.16/content/en/docs/concepts/policy/limit-range.md)
Diff 查看原始文档更新差异命令:
```bash
git diff release-1.14 release-1.16 -- content/en/docs/concepts/policy/limit-range.md
``` | non_defect | docs concepts policy limit range md source file diff 查看原始文档更新差异命令 bash git diff release release content en docs concepts policy limit range md | 0 |
46,282 | 13,055,885,172 | IssuesEvent | 2020-07-30 03:01:21 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | [photospline] - divzero - loop pre-conditions aren't checked (Trac #921) | Incomplete Migration Migrated from Trac combo reconstruction defect | Migrated from https://code.icecube.wisc.edu/ticket/921
```json
{
"status": "closed",
"changetime": "2015-04-12T17:41:42",
"description": "http://goo.gl/s1mdUk\n\nloop preconditions aren't checked allowing a potential divide-by-zero error to occur in 8 steps.\n\nfix: pre-check and hard fail if loop pre-conditions suck",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1428860502423619",
"component": "combo reconstruction",
"summary": "[photospline] - divzero - loop pre-conditions aren't checked",
"priority": "normal",
"keywords": "photospline divzero",
"time": "2015-04-10T04:13:30",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
| 1.0 | [photospline] - divzero - loop pre-conditions aren't checked (Trac #921) - Migrated from https://code.icecube.wisc.edu/ticket/921
```json
{
"status": "closed",
"changetime": "2015-04-12T17:41:42",
"description": "http://goo.gl/s1mdUk\n\nloop preconditions aren't checked allowing a potential divide-by-zero error to occur in 8 steps.\n\nfix: pre-check and hard fail if loop pre-conditions suck",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1428860502423619",
"component": "combo reconstruction",
"summary": "[photospline] - divzero - loop pre-conditions aren't checked",
"priority": "normal",
"keywords": "photospline divzero",
"time": "2015-04-10T04:13:30",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
| defect | divzero loop pre conditions aren t checked trac migrated from json status closed changetime description preconditions aren t checked allowing a potential divide by zero error to occur in steps n nfix pre check and hard fail if loop pre conditions suck reporter nega cc resolution fixed ts component combo reconstruction summary divzero loop pre conditions aren t checked priority normal keywords photospline divzero time milestone owner jvansanten type defect | 1 |
751,577 | 26,250,415,232 | IssuesEvent | 2023-01-05 18:45:51 | AleoHQ/leo | https://api.github.com/repos/AleoHQ/leo | closed | [Proposal] Member function declarations outside circuit types | feature priority-medium | ## 💥 Proposal
Since the notion of member function is being extended from circuit types to non-circuit types (e.g. for bit/byte conversions), it makes more sense to declare member functions outside circuit types, similarly to Rust's `impl` blocks. We may want to use a different keyword from `impl`, such as `members` or `methods`, which seems more descriptive in the Leo context.
The need for this would be even more apparent if we extend Leo with enum types similar to Rust and want enum types to have member functions too.
Another reason is that the standard library is being extended with declarations like
```
circuit u8 {
function to_bits_le(self) -> [bool; 8];
....
}
```
which are meant for internal use but may end up being user-visible since they are in the standard library. These are semantically problematic because scalar types like `u8` are not circuit types. Instead, the standard library could have something like
```
members u8 {
function to_bits_le(self) -> [bool; 8];
...
}
```
which is semantically clear.
Another reason is to allow the user to define and use their own member functions on existing types, e.g. define
```
members u8 {
function add3(self, x: Self, y: Self) -> Self { return self + x + y; }
}
```
and write
```
a.add3(b, c)
```
Besides member functions, we could also declare (static) constants, and in the future variables, in these `members` blocks. | 1.0 | [Proposal] Member function declarations outside circuit types - ## 💥 Proposal
Since the notion of member function is being extended from circuit types to non-circuit types (e.g. for bit/byte conversions), it makes more sense to declare member functions outside circuit types, similarly to Rust's `impl` blocks. We may want to use a different keyword from `impl`, such as `members` or `methods`, which seems more descriptive in the Leo context.
The need for this would be even more apparent if we extend Leo with enum types similar to Rust and want enum types to have member functions too.
Another reason is that the standard library is being extended with declarations like
```
circuit u8 {
function to_bits_le(self) -> [bool; 8];
....
}
```
which are meant for internal use but may end up being user-visible since they are in the standard library. These are semantically problematic because scalar types like `u8` are not circuit types. Instead, the standard library could have something like
```
members u8 {
function to_bits_le(self) -> [bool; 8];
...
}
```
which is semantically clear.
Another reason is to allow the user to define and use their own member functions on existing types, e.g. define
```
members u8 {
function add3(self, x: Self, y: Self) -> Self { return self + x + y; }
}
```
and write
```
a.add3(b, c)
```
Besides member functions, we could also declare (static) constants, and in the future variables, in these `members` blocks. | non_defect | member function declarations outside circuit types 💥 proposal since the notion of member function is being extended from circuit types to non circuit types e g for bit byte conversions it makes more sense to declare member functions outside circuit types similarly to rust s impl blocks we may want to use a different keyword form impl such as members or methods which seems more descriptive in the leo context the need for this would be even more apparent if we extend leo with enum types similar to rust and want enum types to have member functions too another reason is that the standard library is being extended with declarations like circuit function to bits le self which are meant for internal use but may end up being user visible since since they are in the standard library these are semantically problematic because scalar types like are not circuit types instead the standard library could have something like members function to bits le self which is semantically clear another reason is to allow the user to define and use their own member functions on existing types e g define members function self x self y self self return self x y and write a b c besides member functions we could also declare static constants and in the future variables in these members blocks | 0 |
71,091 | 23,441,108,337 | IssuesEvent | 2022-08-15 14:57:39 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Error when joining room from space summary | T-Defect | ### Steps to reproduce
1. Viewed a space's summary to search for a room in it
2. Clicked "Join" for a room that the search turned up
3. Got a crash screen & was prompted to submit a case for it
### Outcome
#### What did you expect?
I should have just joined the room.
#### What happened instead?
The room join ~failed~ succeeded but an error page showed up.
### Operating system
Fedora Linux 36 (Workstation Edition)
### Browser information
Firefox 103.0.2
### URL for webapp
chat.element.io
### Application version
1.11.2
### Homeserver
https://element.ems.host
### Will you send logs?
Yes | 1.0 | Error when joining room from space summary - ### Steps to reproduce
1. Viewed a space's summary to search for a room in it
2. Clicked "Join" for a room that the search turned up
3. Got a crash screen & was prompted to submit a case for it
### Outcome
#### What did you expect?
I should have just joined the room.
#### What happened instead?
The room join ~failed~ succeeded but an error page showed up.
### Operating system
Fedora Linux 36 (Workstation Edition)
### Browser information
Firefox 103.0.2
### URL for webapp
chat.element.io
### Application version
1.11.2
### Homeserver
https://element.ems.host
### Will you send logs?
Yes | defect | error when joining room from space summary steps to reproduce viewed a space s summary to search for a room in it clicked join for a room that the search turned up got a crash screen was prompted to submit a case for it outcome what did you expect i should have just joined the room what happened instead the room join failed succeeded but an error page showed up operating system fedora linux workstation edition browser information firefox url for webapp chat element io application version homeserver will you send logs yes | 1 |
67,335 | 3,269,146,193 | IssuesEvent | 2015-10-23 15:05:53 | ExchangeCore/Concrete5-CKEditor | https://api.github.com/repos/ExchangeCore/Concrete5-CKEditor | opened | Get MKDocs up and running for plugin documentation | priority-medium type-enhancement | We'll host these over on http://docs.exchangecore.com/ and we'll just point to this for the marketplace docs too. This way anyone can contribute to them. | 1.0 | Get MKDocs up and running for plugin documentation - We'll host these over on http://docs.exchangecore.com/ and we'll just point to this for the marketplace docs too. This way anyone can contribute to them. | non_defect | get mkdocs up and running for plugin documentation we ll host these over on and we ll just point to this for the marketplace docs too this way anyone can contribute to them | 0 |
319,518 | 27,379,561,234 | IssuesEvent | 2023-02-28 09:06:32 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | opened | Probe tests are flaking often for Windows | kind/failing-test | ### Which jobs are failing?
It is a follow-up from https://github.com/kubernetes/kubernetes/pull/115856#issuecomment-1447807752
Looks like liveness probes are failing with different symptoms. Looks like the failures are happening for different probe types.
### Which tests are failing?
https://storage.googleapis.com/k8s-triage/index.html?test=liveness
Preliminary analysis indicates it may be a Windows issue:
/sig windows
/sig node
### Since when has it been failing?
Don't know.
### Testgrid link
_No response_
### Reason for failure (if possible)
One test failure from https://github.com/kubernetes/kubernetes/pull/115856#issuecomment-1447807752. Seems like probes worked OK and reported failure. But kubelet didn't react on this failure and haven't restarted the Container at all.
There are other cases when for some reason kubelet keeps restarting the container, even before any probes were started. Perhaps some issue with agnhost.
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig windows
/sig node | 1.0 | Probe tests are flaking often for Windows - ### Which jobs are failing?
It is a follow-up from https://github.com/kubernetes/kubernetes/pull/115856#issuecomment-1447807752
Looks like liveness probes are failing with different symptoms. Looks like the failures are happening for different probe types.
### Which tests are failing?
https://storage.googleapis.com/k8s-triage/index.html?test=liveness
Preliminary analysis indicates it may be a Windows issue:
/sig windows
/sig node
### Since when has it been failing?
Don't know.
### Testgrid link
_No response_
### Reason for failure (if possible)
One test failure from https://github.com/kubernetes/kubernetes/pull/115856#issuecomment-1447807752. Seems like probes worked OK and reported failure. But kubelet didn't react on this failure and haven't restarted the Container at all.
There are other cases when for some reason kubelet keeps restarting the container, even before any probes were started. Perhaps some issue with agnhost.
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig windows
/sig node | non_defect | probe tests are flaking often for windows which jobs are failing it is a follow up from looks like liveness probes are failing with different symptoms looks like the failures are happening for different probe types which tests are failing preliminary analysis indicates it may be a windows issue sig windows sig node since when has it been failing don t know testgrid link no response reason for failure if possible one test failure from seems like probes worked ok and reported failure but kubelet didn t react on this failure and haven t restarted the container at all there are other cases when for some reason kubelet keeps restarting container even before any probes were started perhaps some issue with agnhost anything else we need to know no response relevant sig s sig windows sig node | 0 |
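The probe record above turns on the restart contract: kubelet restarts a container only after a liveness probe fails a set number of consecutive times (`failureThreshold`, default 3). A toy sketch of that bookkeeping (not kubelet's actual code) frames both symptoms reported: probe failures with no restart, and restarts before any probe ran.

```python
class ProbeTracker:
    """Toy model of liveness-probe bookkeeping: restart only after
    `failure_threshold` consecutive failures (kubelet defaults to 3)."""

    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.consecutive_failures = 0
        self.restarts = 0

    def observe(self, probe_ok: bool) -> None:
        if probe_ok:
            # Any success resets the consecutive-failure counter.
            self.consecutive_failures = 0
            return
        self.consecutive_failures += 1
        if self.consecutive_failures >= self.failure_threshold:
            self.restarts += 1            # kubelet would kill and restart here
            self.consecutive_failures = 0

t = ProbeTracker()
for ok in [True, False, False, True, False, False, False]:
    t.observe(ok)
print(t.restarts)  # the final three failures cross the threshold -> 1
```

Under this model, "probes failed but the container was never restarted" means the failure accounting never reached the threshold, while "restarts before any probe ran" points at a crash loop outside probe handling entirely.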
10,776 | 2,622,185,528 | IssuesEvent | 2015-03-04 00:20:39 | byzhang/signal-collect | https://api.github.com/repos/byzhang/signal-collect | closed | Reported graph loading wait time (graphLoadingWaitInMilliseconds) is not correct | auto-migrated Priority-Medium Type-Defect | ```
The reported time is a lot shorter than the real wait time.
```
Original issue reported on code.google.com by `philip.stutz` on 8 Nov 2011 at 1:57 | 1.0 | Reported graph loading wait time (graphLoadingWaitInMilliseconds) is not correct - ```
The reported time is a lot shorter than the real wait time.
```
Original issue reported on code.google.com by `philip.stutz` on 8 Nov 2011 at 1:57 | defect | reported graph loading wait time graphloadingwaitinmilliseconds is not correct the reported time is a lot shorter than the real wait time original issue reported on code google com by philip stutz on nov at | 1 |
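The defect above, a reported wait time much shorter than the real wait, is the classic symptom of timing the wrong interval or using a resettable clock. A minimal illustration (Python here, though the project itself is Scala) of measuring the full interval with a monotonic clock:

```python
import time

def timed_wait(action):
    """Measure the real wall-clock wait with a monotonic clock, so the
    reported figure cannot undershoot the actual wait."""
    start = time.monotonic()
    action()
    return (time.monotonic() - start) * 1000  # milliseconds

elapsed_ms = timed_wait(lambda: time.sleep(0.05))
print(f"waited {elapsed_ms:.0f} ms")
```

The key point is that the measurement brackets the entire wait, from before the blocking call until after it returns, rather than some inner sub-step.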
46,691 | 13,055,960,047 | IssuesEvent | 2020-07-30 03:14:28 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | [steamshovel] test failure - race condition (Trac #1719) | Incomplete Migration Migrated from Trac combo core defect | Migrated from https://code.icecube.wisc.edu/ticket/1719
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:47",
"description": "There is apparently a race condition that needs fixing:\n\n{{{\ndschultz@tide2:~/Documents/offline_software/serialization/build_1.60$ python steamshovel/resources/resources/test/test_shovelio_against_I3Reader.py \nINFO (I3Module): Opened file /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/sim/GCD.i3.gz (I3Reader.cxx:180 in void I3Reader::OpenNextFile())\nConfigure with filenamelist: /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/sim/GCD.i3.gz /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/sim/corsika.F2K010001_IC59_slim.i3.gz\nframe 1 Geometry\nframe 2 Calibration\nframe 3 DetectorStatus\nframe 4 Physics\nINFO (I3Module): Opened file /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/sim/corsika.F2K010001_IC59_slim.i3.gz (I3Reader.cxx:180 in void I3Reader::OpenNextFile())\nframe 5 DAQ\nframe 6 DAQ\nframe 7 DAQ\nframe 8 DAQ\nframe 9 DAQ\nframe 10 DAQ\nframe 11 DAQ\nframe 12 DAQ\nframe 13 DAQ\nframe 14 DAQ\nframe 15 DAQ\nframe 16 DAQ\nframe 17 DAQ\nframe 18 DAQ\nframe 19 DAQ\nframe 20 DAQ\nframe 21 DAQ\nframe 22 DAQ\nframe 23 DAQ\nframe 24 DAQ\nframe 25 DAQ\nframe 26 DAQ\nframe 27 DAQ\nframe 28 DAQ\nframe 29 DAQ\nframe 30 DAQ\nframe 31 DAQ\nframe 32 DAQ\nframe 33 DAQ\nframe 34 DAQ\nframe 35 DAQ\nframe 36 DAQ\nframe 37 DAQ\nframe 38 DAQ\nframe 39 DAQ\nframe 40 DAQ\nframe 41 DAQ\nframe 42 DAQ\nframe 43 DAQ\nframe 44 DAQ\nframe 45 DAQ\nframe 46 DAQ\nframe 47 DAQ\nframe 48 DAQ\nframe 49 DAQ\nframe 50 DAQ\nframe 51 DAQ\nframe 52 DAQ\nframe 53 DAQ\nframe 54 DAQ\nframe 55 DAQ\nframe 56 DAQ\nframe 57 DAQ\nframe 58 DAQ\nframe 59 DAQ\nframe 60 DAQ\nframe 61 DAQ\nframe 62 DAQ\nframe 63 DAQ\nframe 64 DAQ\nframe 65 DAQ\nframe 66 DAQ\nframe 67 DAQ\nframe 68 DAQ\nframe 69 DAQ\nframe 70 DAQ\nframe 71 DAQ\nframe 72 DAQ\nframe 73 DAQ\nframe 74 DAQ\nframe 75 DAQ\nframe 76 DAQ\nframe 77 DAQ\nframe 78 DAQ\nframe 79 DAQ\nframe 80 DAQ\nframe 81 DAQ\nframe 82 DAQ\nframe 83 DAQ\nframe 84 DAQ\nframe 85 DAQ\nframe 86 DAQ\nframe 87 DAQ\nframe 88 DAQ\nframe 89 
DAQ\nframe 90 DAQ\nframe 91 DAQ\nframe 92 DAQ\nframe 93 DAQ\nframe 94 DAQ\nframe 95 DAQ\nframe 96 DAQ\nframe 97 DAQ\nframe 98 DAQ\nframe 99 DAQ\nframe 100 DAQ\nframe 101 DAQ\nframe 102 DAQ\nframe 103 DAQ\nframe 104 DAQ\nNOTICE (I3Tray): I3Tray finishing... (I3Tray.cxx:480 in void I3Tray::Execute(unsigned int))\n.INFO (I3Module): Opened file /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/event-viewer/Level3aGCD_IC79_EEData_Run00115990_slim.i3 (I3Reader.cxx:180 in void I3Reader::OpenNextFile())\nConfigure with filenamelist: /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/event-viewer/Level3aGCD_IC79_EEData_Run00115990_slim.i3\nframe 0 Geometry\nframe 1 DAQ\nframe 2 Physics\nframe 3 DAQ\nframe 4 Physics\nframe 5 DAQ\nframe 6 Physics\nframe 7 DAQ\nNOTICE (I3Tray): I3Tray finishing... (I3Tray.cxx:480 in void I3Tray::Execute(unsigned int))\n.INFO (I3Module): Opened file /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/event-viewer/Level3aGCD_IC79_EEData_Run00115990_slim.i3 (I3Reader.cxx:180 in void I3Reader::OpenNextFile())\nConfigure with filenamelist: /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/event-viewer/Level3aGCD_IC79_EEData_Run00115990_slim.i3 /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/event-viewer/Level3aGCD_IC79_EEData_Run00115990_slim.i3\nframe 0 Geometry\nframe 1 DAQ\nframe 2 Physics\nframe 3 DAQ\nframe 4 Physics\nframe 5 DAQ\nframe 6 Physics\nframe 7 DAQ\nINFO (I3Module): Opened file /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/event-viewer/Level3aGCD_IC79_EEData_Run00115990_slim.i3 (I3Reader.cxx:180 in void I3Reader::OpenNextFile())\nframe 8 Geometry\nshovelio keys: 2 I3Reader keys: 13\n2 common keys\n11 keys only in I3Reader: InIceRawData IceTopVEMPulsesSLC FilterMask IceTopRawData PoleMuonLlhFitFitParams PoleMuonLlhFit PoleMuonLinefit DrivingTime PoleMuonLinefitParams OfflinePulses I3TriggerHierarchy\nERROR (I3Module): All: Exception thrown (I3Module.cxx:116 in void I3Module::Do(void 
(I3Module::*)()))\nF\n======================================================================\nFAIL: test_repeated_file (__main__.ShovelioI3ReaderComparisonTest)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"steamshovel/resources/resources/test/test_shovelio_against_I3Reader.py\", line 134, in test_repeated_file\n self._run_on( [ f, f ] )\n File \"steamshovel/resources/resources/test/test_shovelio_against_I3Reader.py\", line 126, in _run_on\n self.fail(e.message)\nAssertionError\n\n----------------------------------------------------------------------\nRan 3 tests in 2.336s\n\nFAILED (failures=1)\n}}}",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1550067167842669",
"component": "combo core",
"summary": "[steamshovel] test failure - race condition",
"priority": "major",
"keywords": "",
"time": "2016-05-31T22:14:58",
"milestone": "",
"owner": "sander.vanheule",
"type": "defect"
}
```
| 1.0 | [steamshovel] test failure - race condition (Trac #1719) - Migrated from https://code.icecube.wisc.edu/ticket/1719
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:47",
"description": "There is apparently a race condition that needs fixing:\n\n{{{\ndschultz@tide2:~/Documents/offline_software/serialization/build_1.60$ python steamshovel/resources/resources/test/test_shovelio_against_I3Reader.py \nINFO (I3Module): Opened file /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/sim/GCD.i3.gz (I3Reader.cxx:180 in void I3Reader::OpenNextFile())\nConfigure with filenamelist: /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/sim/GCD.i3.gz /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/sim/corsika.F2K010001_IC59_slim.i3.gz\nframe 1 Geometry\nframe 2 Calibration\nframe 3 DetectorStatus\nframe 4 Physics\nINFO (I3Module): Opened file /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/sim/corsika.F2K010001_IC59_slim.i3.gz (I3Reader.cxx:180 in void I3Reader::OpenNextFile())\nframe 5 DAQ\nframe 6 DAQ\nframe 7 DAQ\nframe 8 DAQ\nframe 9 DAQ\nframe 10 DAQ\nframe 11 DAQ\nframe 12 DAQ\nframe 13 DAQ\nframe 14 DAQ\nframe 15 DAQ\nframe 16 DAQ\nframe 17 DAQ\nframe 18 DAQ\nframe 19 DAQ\nframe 20 DAQ\nframe 21 DAQ\nframe 22 DAQ\nframe 23 DAQ\nframe 24 DAQ\nframe 25 DAQ\nframe 26 DAQ\nframe 27 DAQ\nframe 28 DAQ\nframe 29 DAQ\nframe 30 DAQ\nframe 31 DAQ\nframe 32 DAQ\nframe 33 DAQ\nframe 34 DAQ\nframe 35 DAQ\nframe 36 DAQ\nframe 37 DAQ\nframe 38 DAQ\nframe 39 DAQ\nframe 40 DAQ\nframe 41 DAQ\nframe 42 DAQ\nframe 43 DAQ\nframe 44 DAQ\nframe 45 DAQ\nframe 46 DAQ\nframe 47 DAQ\nframe 48 DAQ\nframe 49 DAQ\nframe 50 DAQ\nframe 51 DAQ\nframe 52 DAQ\nframe 53 DAQ\nframe 54 DAQ\nframe 55 DAQ\nframe 56 DAQ\nframe 57 DAQ\nframe 58 DAQ\nframe 59 DAQ\nframe 60 DAQ\nframe 61 DAQ\nframe 62 DAQ\nframe 63 DAQ\nframe 64 DAQ\nframe 65 DAQ\nframe 66 DAQ\nframe 67 DAQ\nframe 68 DAQ\nframe 69 DAQ\nframe 70 DAQ\nframe 71 DAQ\nframe 72 DAQ\nframe 73 DAQ\nframe 74 DAQ\nframe 75 DAQ\nframe 76 DAQ\nframe 77 DAQ\nframe 78 DAQ\nframe 79 DAQ\nframe 80 DAQ\nframe 81 DAQ\nframe 82 DAQ\nframe 83 DAQ\nframe 84 DAQ\nframe 85 DAQ\nframe 86 DAQ\nframe 87 DAQ\nframe 88 DAQ\nframe 89 
DAQ\nframe 90 DAQ\nframe 91 DAQ\nframe 92 DAQ\nframe 93 DAQ\nframe 94 DAQ\nframe 95 DAQ\nframe 96 DAQ\nframe 97 DAQ\nframe 98 DAQ\nframe 99 DAQ\nframe 100 DAQ\nframe 101 DAQ\nframe 102 DAQ\nframe 103 DAQ\nframe 104 DAQ\nNOTICE (I3Tray): I3Tray finishing... (I3Tray.cxx:480 in void I3Tray::Execute(unsigned int))\n.INFO (I3Module): Opened file /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/event-viewer/Level3aGCD_IC79_EEData_Run00115990_slim.i3 (I3Reader.cxx:180 in void I3Reader::OpenNextFile())\nConfigure with filenamelist: /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/event-viewer/Level3aGCD_IC79_EEData_Run00115990_slim.i3\nframe 0 Geometry\nframe 1 DAQ\nframe 2 Physics\nframe 3 DAQ\nframe 4 Physics\nframe 5 DAQ\nframe 6 Physics\nframe 7 DAQ\nNOTICE (I3Tray): I3Tray finishing... (I3Tray.cxx:480 in void I3Tray::Execute(unsigned int))\n.INFO (I3Module): Opened file /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/event-viewer/Level3aGCD_IC79_EEData_Run00115990_slim.i3 (I3Reader.cxx:180 in void I3Reader::OpenNextFile())\nConfigure with filenamelist: /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/event-viewer/Level3aGCD_IC79_EEData_Run00115990_slim.i3 /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/event-viewer/Level3aGCD_IC79_EEData_Run00115990_slim.i3\nframe 0 Geometry\nframe 1 DAQ\nframe 2 Physics\nframe 3 DAQ\nframe 4 Physics\nframe 5 DAQ\nframe 6 Physics\nframe 7 DAQ\nINFO (I3Module): Opened file /cvmfs/icecube.opensciencegrid.org/data/i3-test-data/event-viewer/Level3aGCD_IC79_EEData_Run00115990_slim.i3 (I3Reader.cxx:180 in void I3Reader::OpenNextFile())\nframe 8 Geometry\nshovelio keys: 2 I3Reader keys: 13\n2 common keys\n11 keys only in I3Reader: InIceRawData IceTopVEMPulsesSLC FilterMask IceTopRawData PoleMuonLlhFitFitParams PoleMuonLlhFit PoleMuonLinefit DrivingTime PoleMuonLinefitParams OfflinePulses I3TriggerHierarchy\nERROR (I3Module): All: Exception thrown (I3Module.cxx:116 in void I3Module::Do(void 
(I3Module::*)()))\nF\n======================================================================\nFAIL: test_repeated_file (__main__.ShovelioI3ReaderComparisonTest)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"steamshovel/resources/resources/test/test_shovelio_against_I3Reader.py\", line 134, in test_repeated_file\n self._run_on( [ f, f ] )\n File \"steamshovel/resources/resources/test/test_shovelio_against_I3Reader.py\", line 126, in _run_on\n self.fail(e.message)\nAssertionError\n\n----------------------------------------------------------------------\nRan 3 tests in 2.336s\n\nFAILED (failures=1)\n}}}",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1550067167842669",
"component": "combo core",
"summary": "[steamshovel] test failure - race condition",
"priority": "major",
"keywords": "",
"time": "2016-05-31T22:14:58",
"milestone": "",
"owner": "sander.vanheule",
"type": "defect"
}
```
| defect | test failure race condition trac migrated from json status closed changetime description there is apparently a race condition that needs fixing n n ndschultz documents offline software serialization build python steamshovel resources resources test test shovelio against py ninfo opened file cvmfs icecube opensciencegrid org data test data sim gcd gz cxx in void opennextfile nconfigure with filenamelist cvmfs icecube opensciencegrid org data test data sim gcd gz cvmfs icecube opensciencegrid org data test data sim corsika slim gz nframe geometry nframe calibration nframe detectorstatus nframe physics ninfo opened file cvmfs icecube opensciencegrid org data test data sim corsika slim gz cxx in void opennextfile nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nframe daq nnotice finishing cxx in void execute unsigned int n info opened file cvmfs icecube opensciencegrid org data test data event viewer eedata slim cxx in void opennextfile 
nconfigure with filenamelist cvmfs icecube opensciencegrid org data test data event viewer eedata slim nframe geometry nframe daq nframe physics nframe daq nframe physics nframe daq nframe physics nframe daq nnotice finishing cxx in void execute unsigned int n info opened file cvmfs icecube opensciencegrid org data test data event viewer eedata slim cxx in void opennextfile nconfigure with filenamelist cvmfs icecube opensciencegrid org data test data event viewer eedata slim cvmfs icecube opensciencegrid org data test data event viewer eedata slim nframe geometry nframe daq nframe physics nframe daq nframe physics nframe daq nframe physics nframe daq ninfo opened file cvmfs icecube opensciencegrid org data test data event viewer eedata slim cxx in void opennextfile nframe geometry nshovelio keys keys common keys keys only in inicerawdata icetopvempulsesslc filtermask icetoprawdata polemuonllhfitfitparams polemuonllhfit polemuonlinefit drivingtime polemuonlinefitparams offlinepulses nerror all exception thrown cxx in void do void nf n nfail test repeated file main n ntraceback most recent call last n file steamshovel resources resources test test shovelio against py line in test repeated file n self run on n file steamshovel resources resources test test shovelio against py line in run on n self fail e message nassertionerror n n nran tests in n nfailed failures n reporter david schultz cc resolution fixed ts component combo core summary test failure race condition priority major keywords time milestone owner sander vanheule type defect | 1 |
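The failing test in the record above reports "shovelio keys: 2 I3Reader keys: 13 / 2 common keys / 11 keys only in I3Reader". That diagnostic is plain set arithmetic over the two frames' key sets; a small sketch (with made-up key names) of how such a comparison is typically produced:

```python
def compare_keys(a, b):
    """Summarize how two collections of frame keys differ,
    mirroring the test's diagnostic output."""
    a, b = set(a), set(b)
    return {
        "common": sorted(a & b),
        "only_a": sorted(a - b),
        "only_b": sorted(b - a),
    }

# Key names below are invented for illustration.
shovelio = {"I3Geometry", "I3EventHeader"}
reader = {"I3Geometry", "I3EventHeader", "OfflinePulses", "FilterMask"}
diff = compare_keys(shovelio, reader)
print(len(diff["common"]), "common keys;", len(diff["only_b"]), "keys only in I3Reader")
```

A race condition shows up here as a nondeterministic `only_b` set: whichever reader wins the race sees keys the other has not loaded yet.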
4,878 | 2,610,159,416 | IssuesEvent | 2015-02-26 18:50:35 | chrsmith/republic-at-war | https://api.github.com/repos/chrsmith/republic-at-war | closed | Performance Issue | auto-migrated Priority-Medium Type-Defect | ```
Possible slowdowns on Naboo
Naboo props have EXTREME high poly shadows. Re-build shadow meshes to be no
more than 150 polys per building.
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 5:08 | 1.0 | Performance Issue - ```
Possible slowdowns on Naboo
Naboo props have EXTREME high poly shadows. Re-build shadow meshes to be no
more than 150 polys per building.
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 5:08 | defect | performance issue possible slowdowns on naboo naboo props have extreme high poly shadows re build shadow meshes to be no more than polys per building original issue reported on code google com by gmail com on jan at | 1 |
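The fix described above, rebuilding shadow meshes to at most 150 polys per building, is easy to enforce with a budget check over the asset list. A hypothetical sketch (asset names and poly counts invented for illustration):

```python
SHADOW_POLY_BUDGET = 150  # per-building cap suggested in the report

def over_budget(shadow_meshes: dict) -> list:
    """Return (name, polys) pairs that exceed the shadow-poly budget."""
    return sorted(
        (name, polys) for name, polys in shadow_meshes.items()
        if polys > SHADOW_POLY_BUDGET
    )

meshes = {"naboo_palace": 1200, "naboo_hut": 90, "naboo_hangar": 480}
print(over_budget(meshes))  # [('naboo_hangar', 480), ('naboo_palace', 1200)]
```

Running such a check in the asset pipeline catches the slowdown before it ships, instead of hunting it down in-game.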
331,303 | 10,063,856,931 | IssuesEvent | 2019-07-23 07:14:13 | kirbydesign/designsystem | https://api.github.com/repos/kirbydesign/designsystem | closed | [BUG] List event handlers and accessibility issues/requests | bug effort: days medium priority | **Describe the bug**
1. We are able to iterate through list items (by pressing 'tab' on the keyboard), but we are not able to select any of them (by pressing 'space' or 'enter')
2. Hovering on selectable list items does not change the cursor to a pointer
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://cookbook.kirby.design/home/showcase/action-sheet
2. Click on 'Show action sheet'
3. 'Tab' select any of the items
4. Press space or enter - nothing happens
5. Hover it with your mouse - it does not turn into a pointer
**Kirby version**
- [0.0.136]
**Expected behavior**
List entries should be operated by mouse, touch and keyboard users. This implies that pressing either 'space' or 'enter' should trigger a 'click' event on a list item, just as <button> elements work. See https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Roles/button_role#Required_JavaScript_Features for more details. Additionally, hovering selectable list items should transform the mouse cursor into a pointer.
**Desktop (please complete the following information):**
- OS: Mac
- Browser: Chrome
- Version Newest | 1.0 | [BUG] List event handlers and accessibility issues/requests - **Describe the bug**
1. We are able to iterate through list items (by pressing 'tab' on the keyboard), but we are not able to select any of them (by pressing 'space' or 'enter')
2. Hovering on selectable list items does not change the cursor to a pointer
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://cookbook.kirby.design/home/showcase/action-sheet
2. Click on 'Show action sheet'
3. 'Tab' select any of the items
4. Press space or enter - nothing happens
5. Hover it with your mouse - it does not turn into a pointer
**Kirby version**
- [0.0.136]
**Expected behavior**
List entries should be operated by mouse, touch and keyboard users. This implies that pressing either 'space' or 'enter' should trigger a 'click' event on a list item, just as <button> elements work. See https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Roles/button_role#Required_JavaScript_Features for more details. Additionally, hovering selectable list items should transform the mouse cursor into a pointer.
**Desktop (please complete the following information):**
- OS: Mac
- Browser: Chrome
- Version Newest | non_defect | list event handlers and accessibility issues requests describe the bug we are able to iterate through list items by pressing tab on the keyboard but we are not able to select any of them by pressing space or enter hovering on selectable list items does not change the cursor to a pointer to reproduce steps to reproduce the behavior go to click on show action sheet tab select any of the items press space or enter nothing happens hover it with your mouse it does not turn into a pointer kirby version expected behavior list entries should be operated by mouse touch and keyboard users this implies that pressing either space or enter should trigger a click event on a list item just as elements work see for more details additionally hovering selectable list items should transform the mouse cursor into a pointer desktop please complete the following information os mac browser chrome version newest | 0 |
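The ARIA reference cited in the record reduces to one rule: an element acting as a button must treat Space and Enter keydown the same as a click. The dispatch itself is framework-agnostic; a minimal sketch of that logic (Python used for illustration, the component itself is web code):

```python
ACTIVATION_KEYS = {" ", "Enter"}  # per the ARIA button pattern

def handle_keydown(key: str, on_click) -> bool:
    """Dispatch Space/Enter to the same handler a mouse click uses.
    Returns True when the event was treated as an activation."""
    if key in ACTIVATION_KEYS:
        on_click()
        return True
    return False

clicks = []
handle_keydown("Enter", lambda: clicks.append("activated"))
handle_keydown("Tab", lambda: clicks.append("activated"))
print(clicks)  # ['activated']; Tab only moves focus and must not activate
```

The cursor half of the report is the same idea on the styling side: a selectable item should advertise interactivity (pointer cursor) through the same code path that makes it clickable.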
386,374 | 26,680,067,920 | IssuesEvent | 2023-01-26 16:59:03 | dotnet/iot | https://api.github.com/repos/dotnet/iot | closed | Basic menu system UI for IoT devices | area-device-bindings documentation | If one were to create a graphical/textbased menu system UI in C# using .NET IoT device bindings for input (gpio, buttons, rotary encoders) and displays (1/2/4 line textmode or small graphical displays). Much like you can find on a 3D printer. Would that be something to include in this repo? I don't have the time to create my own nuget repo and maintain it. But I still think it is something that could be useful for a lot of IoT developers since most of these sensor systems with displays would need some kind of UI.
Structure and rendering should probably be separated like Xamarin Forms. I have looked a bit at @migueldeicaza's gui.cs but I think it is a bit too coupled to consoles/terminals to be usable here. But it is the most promising one I've found so far. | 1.0 | Basic menu system UI for IoT devices - If one were to create a graphical/textbased menu system UI in C# using .NET IoT device bindings for input (gpio, buttons, rotary encoders) and displays (1/2/4 line textmode or small graphical displays). Much like you can find on a 3D printer. Would that be something to include in this repo? I don't have the time to create my own nuget repo and maintain it. But I still think it is something that could be useful for a lot of IoT developers since most of these sensor systems with displays would need some kind of UI.
Structure and rendering should probably be separated like Xamarin Forms. I have looked a bit at @migueldeicaza's gui.cs but I think it is a bit too coupled to consoles/terminals to be usable here. But it is the most promising one I've found so far. | non_defect | basic menu system ui for iot devices if one where to create a graphical textbased menu system ui in c using net iot device bindings for input gpio buttons rotary encoders and displays line textmode or small graphical displays much like you can find on a printer would that be something to include in this repo i don t have the time to create my own nuget repo and maintain it but i still think it is something that could be useful for a lot of iot developers since most of these sensor systems with displays would need some kind of ui structure and rendering should probably be separated like xamarin forms i have looked a bit at migueldeicaza s gui cs but i think it is a bit too coupled to consoles terminals to be usable here but it is the most promising one i ve found so far | 0 |
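The separation the record proposes, menu structure decoupled from rendering in the style of Xamarin Forms, can be sketched as a plain tree plus a pluggable renderer. Illustrative only (Python here, though the binding itself would be C#):

```python
class MenuItem:
    """A node in the menu tree; carries no rendering logic."""
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def render_text(item, width=16, depth=0):
    """One possible renderer: lay the tree out for a small character
    display, one truncated line per item, indented by depth."""
    lines = [(" " * depth + item.label)[:width]]
    for child in item.children:
        lines.extend(render_text(child, width, depth + 1))
    return lines

root = MenuItem("Main", [MenuItem("Temperature"),
                         MenuItem("Settings", [MenuItem("Units")])])
print("\n".join(render_text(root)))
```

Because the tree knows nothing about output, a 2-line text LCD, a small OLED, or a console could each supply their own `render_*` function over the same structure.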
239,806 | 7,800,057,690 | IssuesEvent | 2018-06-09 04:08:19 | space-city-rocketry/Avionics | https://api.github.com/repos/space-city-rocketry/Avionics | closed | Implement BNO055 sensor read code | HIGH PRIORITY good first issue | COMPLETE ISSUE #28 FIRST
https://github.com/space-city-rocketry/Avionics/blob/b60d2b5761aeb8a99799167a28ff3078f48d0b56/SCR_FSW_Prometheus_B/CDH.cpp#L25
Implement sensor read code to fill data variables created in issue #28
| 1.0 | Implement BNO055 sensor read code - COMPLETE ISSUE #28 FIRST
https://github.com/space-city-rocketry/Avionics/blob/b60d2b5761aeb8a99799167a28ff3078f48d0b56/SCR_FSW_Prometheus_B/CDH.cpp#L25
Implement sensor read code to fill data variables created in issue #28
| non_defect | implement sensor read code complete issue first implement sensor read code to fill data variables created in issue | 0 |
48,864 | 5,988,461,879 | IssuesEvent | 2017-06-02 04:49:10 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Can still create two catalogs with same name | area/catalog kind/bug status/resolved status/to-test | **Rancher versions:** master 5/25
**Steps to Reproduce:**
1. Add a catalog through Manage Catalog (environment)
2. Then add a catalog with the same name through settings
**Results:** Allows me to create both and now the global one doesn't show up anywhere.
| 1.0 | Can still create two catalogs with same name - **Rancher versions:** master 5/25
**Steps to Reproduce:**
1. Add a catalog through Manage Catalog (environment)
2. Then add a catalog with the same name through settings
**Results:** Allows me to create both and now the global one doesn't show up anywhere.
| non_defect | can still create two catalogs with same name rancher versions master steps to reproduce add a catalog through manage catalog environment then add a catalog with the same name through settings results allows me to create both and now the global one doesn t show up anywhere | 0 |
440,621 | 12,701,937,693 | IssuesEvent | 2020-06-22 19:08:44 | rstudio/gt | https://api.github.com/repos/rstudio/gt | closed | stubs are not striped even when `row.striping.include_stub = TRUE` | Difficulty: [2] Intermediate Effort: [2] Medium Priority: ♨︎ Critical Type: ☹︎ Bug | The `row.striping.include_stub` option does not affect the table output. I expect the stubs on even number rows to be yellow below.
```r
library(gt)
exibble %>%
gt(rowname_col = "row") %>%
opt_row_striping() %>%
tab_options(row.striping.include_stub = TRUE, row.striping.background_color = "yellow")
```
<img width="485" alt="Screen Shot 2020-04-09 at 12 31 32 PM" src="https://user-images.githubusercontent.com/2104579/78923574-0eadb100-7a5e-11ea-9374-6ec5b473e345.png">
| 1.0 | stubs are not striped even when `row.striping.include_stub = TRUE` - The `row.striping.include_stub` option does not affect the table output. I expect the stubs on even number rows to be yellow below.
```r
library(gt)
exibble %>%
gt(rowname_col = "row") %>%
opt_row_striping() %>%
tab_options(row.striping.include_stub = TRUE, row.striping.background_color = "yellow")
```
<img width="485" alt="Screen Shot 2020-04-09 at 12 31 32 PM" src="https://user-images.githubusercontent.com/2104579/78923574-0eadb100-7a5e-11ea-9374-6ec5b473e345.png">
| non_defect | stubs are not striped even when row striping include stub true the row striping include stub option does not affect the table output i expect the stubs on even number rows to be yellow below r library gt exibble gt rowname col row opt row striping tab options row striping include stub true row striping background color yellow img width alt screen shot at pm src | 0 |
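The option at issue controls whether the stub column participates in row striping. The underlying decision is a small per-cell predicate; a language-neutral sketch (Python, names invented, this is not gt's implementation):

```python
def is_striped(row_index: int, is_stub: bool,
               striping_on: bool = True, include_stub: bool = False) -> bool:
    """A cell gets the stripe background on even rows (1-based: rows 2, 4, ...),
    and a stub cell participates only when include_stub is also set."""
    if not striping_on:
        return False
    if is_stub and not include_stub:
        return False
    return row_index % 2 == 0

# With include_stub=True the stub on row 2 should be striped; the bug in
# the report is that it stayed unstriped.
print(is_striped(2, is_stub=True, include_stub=True))   # True
print(is_striped(2, is_stub=True, include_stub=False))  # False
```

The reported behavior corresponds to the `include_stub` flag never reaching this decision, so stub cells always take the "not striped" branch.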
17,707 | 12,511,017,153 | IssuesEvent | 2020-06-02 19:45:08 | Azure/pykusto | https://api.github.com/repos/Azure/pykusto | closed | Add "pretty" function for query | infrastructure | Complex query strings generated by "render()" are hard to read, a pretty function could help debugging | 1.0 | Add "pretty" function for query - Complex query strings generated by "render()" are hard to read, a pretty function could help debugging | non_defect | add pretty function for query complex query strings generated by render are hard to read a pretty function could help debugging | 0 |
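The pykusto record above asks for a "pretty" form of the query strings that `render()` produces. A minimal sketch of what such a helper might do, splitting a rendered query on pipe operators so each stage gets its own line, is shown below; the function name and the sample query are illustrative, not pykusto's actual API.

```python
def pretty(query: str, indent: str = "  ") -> str:
    """Re-flow a rendered one-line Kusto query: one pipe stage per line.
    Naive split: a ' | ' inside a string literal would also be split."""
    stages = [part.strip() for part in query.split(" | ")]
    # First stage (the table name) stays unindented; later stages are
    # prefixed with "| " and indented for readability.
    lines = [stages[0]] + [indent + "| " + s for s in stages[1:]]
    return "\n".join(lines)

rendered = "StormEvents | where State == 'TEXAS' | summarize count() by EventType"
print(pretty(rendered))
```

A production version would tokenize rather than split on a literal `" | "`, but even this crude re-flow makes a long generated query far easier to scan while debugging.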
12,422 | 2,697,960,017 | IssuesEvent | 2015-04-02 23:41:51 | FreeRADIUS/freeradius-server | https://api.github.com/repos/FreeRADIUS/freeradius-server | closed | Submitted user identity is used as server name in Authenticator Response in inner MS-CHAPv2 in PEAP | defect v2.x.x v3.0.x v3.1.x | FreeRADIUS 2.2.6 submits the previously submitted user identity in the Authenticator Response message in EAP-MSCHAPv2 when using PEAP.
Correct behavior is to submit the host name.
The submitted server name can be seen in wpa_supplicant logs.
The server name value should in this case be: RADIUSTE-C57770
Submitted server name: testidg1@radtestrealm.edu
Wpa_supplicant log output:
```
Successfully initialized wpa_supplicant
wlan0: SME: Trying to authenticate with a0:f3:c1:28:1d:1f (SSID='testnet' freq=2412 MHz)
wlan0: Trying to associate with a0:f3:c1:28:1d:1f (SSID='testnet' freq=2412 MHz)
wlan0: Associated with a0:f3:c1:28:1d:1f
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21 -> NAK
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=25
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/L=Bochum/O=radtest/CN=RADIUSTest Root CA/emailAddress=none@none.com'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/L=Bochum/O=radtest/CN=RADIUSTE-C57770.radtestrealm.edu/emailAddress=none@none.com'
EAP-MSCHAPV2: RX identifier 133 mschapv2_id 133
EAP-MSCHAPV2: Received challenge
EAP-MSCHAPV2: Authentication Servername - hexdump_ascii(len=25):
74 65 73 74 69 64 67 31 40 72 61 64 74 65 73 74 testidg1@radtest
72 65 61 6c 6d 2e 65 64 75 realm.edu
EAP-MSCHAPV2: Generating Challenge Response
EAP-MSCHAPV2: TX identifier 133 mschapv2_id 133 (response)
EAP-MSCHAPV2: RX identifier 134 mschapv2_id 133
EAP-MSCHAPV2: Received success
EAP-MSCHAPV2: Success message - hexdump_ascii(len=0):
EAP-MSCHAPV2: Authentication succeeded
EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed
wlan0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
wlan0: WPA: Key negotiation completed with a0:f3:c1:28:1d:1f [PTK=CCMP GTK=TKIP]
wlan0: CTRL-EVENT-CONNECTED - Connection to a0:f3:c1:28:1d:1f completed [id=0 id_str=]
``` | 1.0 | Submitted user identity is used as server name in Authenticator Response in inner MS-CHAPv2 in PEAP - FreeRADIUS 2.2.6 submits the previously submitted user identity in the Authenticator Response message in EAP-MSCHAPv2 when using PEAP.
Correct behavior is to submit the host name.
The submitted server name can be seen in the wpa_supplicant logs.
The server name value in this case should be: RADIUSTE-C57770
Submitted server name: testidg1@radtestrealm.edu
Wpa_supplicant log output:
```
Successfully initialized wpa_supplicant
wlan0: SME: Trying to authenticate with a0:f3:c1:28:1d:1f (SSID='testnet' freq=2412 MHz)
wlan0: Trying to associate with a0:f3:c1:28:1d:1f (SSID='testnet' freq=2412 MHz)
wlan0: Associated with a0:f3:c1:28:1d:1f
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21 -> NAK
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=25
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/L=Bochum/O=radtest/CN=RADIUSTest Root CA/emailAddress=none@none.com'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/L=Bochum/O=radtest/CN=RADIUSTE-C57770.radtestrealm.edu/emailAddress=none@none.com'
EAP-MSCHAPV2: RX identifier 133 mschapv2_id 133
EAP-MSCHAPV2: Received challenge
EAP-MSCHAPV2: Authentication Servername - hexdump_ascii(len=25):
74 65 73 74 69 64 67 31 40 72 61 64 74 65 73 74 testidg1@radtest
72 65 61 6c 6d 2e 65 64 75 realm.edu
EAP-MSCHAPV2: Generating Challenge Response
EAP-MSCHAPV2: TX identifier 133 mschapv2_id 133 (response)
EAP-MSCHAPV2: RX identifier 134 mschapv2_id 133
EAP-MSCHAPV2: Received success
EAP-MSCHAPV2: Success message - hexdump_ascii(len=0):
EAP-MSCHAPV2: Authentication succeeded
EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed
wlan0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
wlan0: WPA: Key negotiation completed with a0:f3:c1:28:1d:1f [PTK=CCMP GTK=TKIP]
wlan0: CTRL-EVENT-CONNECTED - Connection to a0:f3:c1:28:1d:1f completed [id=0 id_str=]
``` | defect | submitted user identity is used as server name in authenticator response in inner ms in peap freeradius submitts the previously submitted user identity in the authenticator response message in eap when using peap correct behavior is to submit the host name the submitted server name be seen in wpa supplicant logs server name value should be in this case radiuste submitted server name radtestrealm edu wpa supplicant log output successfully initialized wpa supplicant sme trying to authenticate with ssid testnet freq mhz trying to associate with ssid testnet freq mhz associated with ctrl event eap started eap authentication started ctrl event eap proposed method vendor method nak ctrl event eap proposed method vendor method ctrl event eap method eap vendor method peap selected ctrl event eap peer cert depth subject c de l bochum o radtest cn radiustest root ca emailaddress none none com ctrl event eap peer cert depth subject c de l bochum o radtest cn radiuste radtestrealm edu emailaddress none none com eap rx identifier id eap received challenge eap authentication servername hexdump ascii len radtest realm edu eap generating challenge response eap tx identifier id response eap rx identifier id eap received success eap success message hexdump ascii len eap authentication succeeded eap tlv tlv result success eap tlv completed ctrl event eap success eap authentication completed successfully wpa key negotiation completed with ctrl event connected connection to completed | 1 |
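The "Authentication Servername" hexdump in the wpa_supplicant log above can be decoded to confirm exactly what the server sent. A small Python sketch; the hex bytes are copied verbatim from the log output:

```python
# Decode the "Authentication Servername" hexdump from the wpa_supplicant
# log to confirm the value FreeRADIUS actually sent (illustrative only).
hex_bytes = (
    "74 65 73 74 69 64 67 31 40 72 61 64 74 65 73 74 "
    "72 65 61 6c 6d 2e 65 64 75"
)
servername = bytes.fromhex(hex_bytes.replace(" ", "")).decode("ascii")

print(servername)        # the submitted user identity, not the host name
print(len(servername))   # matches the log's len=25
```

Decoding yields `testidg1@radtestrealm.edu`, i.e. the user identity rather than the expected host name `RADIUSTE-C57770`, which is the bug the report describes.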
75,297 | 25,754,654,119 | IssuesEvent | 2022-12-08 15:33:49 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | Broken link in a logging statement and Javadoc | T: Defect C: Documentation P: Medium R: Fixed E: All Editions | ### Expected behavior
the new valid link seems to be : https://jdbc.postgresql.org/documentation/query/#getting-results-based-on-a-cursor
### Actual behavior
The log written by jooq is :
> INFO org.jooq.impl.AbstractResultQuery.lambda$info$5 [L.349] - Fetch Size : A fetch size of 1000 was set on a auto-commit PostgreSQL connection, which is not recommended. See http://jdbc.postgresql.org/documentation/head/query.html#query-with-cursor
the link : http://jdbc.postgresql.org/documentation/head/query.html#query-with-cursor is broken
### Steps to reproduce the problem
any select() with a `fetchSize(xxx)` followed by a `fetchLazy()` outside of a transactional context will produce the above log statement.
### jOOQ Version
jooq 3.17.5
### Database product and version
postgresql 14
### Java Version
azul 17
### OS Version
ubuntu 22
### JDBC driver name and version (include name if unofficial driver)
_No response_ | 1.0 | Broken link in a logging statement and Javadoc - ### Expected behavior
the new valid link seems to be : https://jdbc.postgresql.org/documentation/query/#getting-results-based-on-a-cursor
### Actual behavior
The log written by jooq is :
> INFO org.jooq.impl.AbstractResultQuery.lambda$info$5 [L.349] - Fetch Size : A fetch size of 1000 was set on a auto-commit PostgreSQL connection, which is not recommended. See http://jdbc.postgresql.org/documentation/head/query.html#query-with-cursor
the link : http://jdbc.postgresql.org/documentation/head/query.html#query-with-cursor is broken
### Steps to reproduce the problem
any select() with a `fetchSize(xxx)` followed by a `fetchLazy()` outside of a transactional context will produce the above log statement.
### jOOQ Version
jooq 3.17.5
### Database product and version
postgresql 14
### Java Version
azul 17
### OS Version
ubuntu 22
### JDBC driver name and version (include name if unofficial driver)
_No response_ | defect | broken link in a logging statement and javadoc expected behavior the new valid link seems to be actual behavior the log written by jooq is info org jooq impl abstractresultquery lambda info fetch size a fetch size of was set on a auto commit postgresql connection which is not recommended see the link is broken steps to reproduce the problem any select with a fetchsize xxx followed by a fetchlazy outside of a transactional context will produce the above log statement jooq version jooq database product and version postgresql java version azul os version ubuntu jdbc driver name and version include name if unofficial driver no response | 1 |
35,353 | 7,709,226,549 | IssuesEvent | 2018-05-22 08:32:53 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | Undefined index in entity properties for new entity | Defect | This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.6.4.
* Platform and Target: Apache / MySQL.
### What you did
I wrote the following reusable Trait for Users:
`namespace App\Model\Traits;
trait UsersPropertiesTrait
{
/**
* Return fullName of contact.
*
* @return string
*/
protected function _getFullName()
{
return $this->_properties['first_name'] . ' ' . $this->_properties['last_name'];
}
/**
* Return fullName + email.
*
* @return string
*/
protected function _getFullLabel()
{
return $this->_getFullName() . ' (' . $this->_properties['email'] . ')';
}
}`
Entity code for Users, based on CakeDC Users Class:
`namespace App\Model\Entity;
use App\Model\Traits\UsersPropertiesTrait;
use CakeDC\Users\Model\Entity\User;
class MyUser extends User
{
use UsersPropertiesTrait;
/**
* Fields that can be mass assigned using newEntity() or patchEntity().
*
* Note that when '*' is set to true, this allows all unspecified fields to
* be mass assigned. For security purposes, it is advised to set '*' to false
* (or remove it), and explicitly make individual fields accessible as needed.
*
* @var array
*/
protected $_accessible = [
'*' => true,
'id' => false
];
/**
* Fields that are excluded from JSON versions of the entity.
*
* @var array
*/
protected $_hidden = [
'password',
'token'
];
protected $_virtual = ['full_label', 'full_name'];
}`
The virtual properties work if the data already exists, but since my upgrade to 3.6.4 they don't work with a new entity.
### What happened
**Undefined index: first_name in [/var/www/html/charlie/src/Model/Traits/UsersPropertiesTrait.php, line 14]**
The new entity doesn't have the expected keys.
### What you expected to happen
What can I do to make this work again?
| 1.0 | Undefined index in entity properties for new entity - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.6.4.
* Platform and Target: Apache / MySQL.
### What you did
I wrote the following reusable Trait for Users:
`namespace App\Model\Traits;
trait UsersPropertiesTrait
{
/**
* Return fullName of contact.
*
* @return string
*/
protected function _getFullName()
{
return $this->_properties['first_name'] . ' ' . $this->_properties['last_name'];
}
/**
* Return fullName + email.
*
* @return string
*/
protected function _getFullLabel()
{
return $this->_getFullName() . ' (' . $this->_properties['email'] . ')';
}
}`
Entity code for Users, based on CakeDC Users Class:
`namespace App\Model\Entity;
use App\Model\Traits\UsersPropertiesTrait;
use CakeDC\Users\Model\Entity\User;
class MyUser extends User
{
use UsersPropertiesTrait;
/**
* Fields that can be mass assigned using newEntity() or patchEntity().
*
* Note that when '*' is set to true, this allows all unspecified fields to
* be mass assigned. For security purposes, it is advised to set '*' to false
* (or remove it), and explicitly make individual fields accessible as needed.
*
* @var array
*/
protected $_accessible = [
'*' => true,
'id' => false
];
/**
* Fields that are excluded from JSON versions of the entity.
*
* @var array
*/
protected $_hidden = [
'password',
'token'
];
protected $_virtual = ['full_label', 'full_name'];
}`
The virtual properties work if the data already exists, but since my upgrade to 3.6.4 they don't work with a new entity.
### What happened
**Undefined index: first_name in [/var/www/html/charlie/src/Model/Traits/UsersPropertiesTrait.php, line 14]**
The new entity doesn't have the expected keys.
### What you expected to happen
What can I do to make this work again?
| defect | undefined index in entity properties for new entity this is a multiple allowed bug enhancement feature discussion rfc cakephp version platform and target apache mysql what you did i wrote the following reusable trait for users namespace app model traits trait userspropertiestrait return fullname of contact return string protected function getfullname return this properties this properties return fullname email return string protected function getfulllabel return this getfullname this properties entity code for users based on cakedc users class namespace app model entity use app model traits userspropertiestrait use cakedc users model entity user class myuser extends user use userspropertiestrait fields that can be mass assigned using newentity or patchentity note that when is set to true this allows all unspecified fields to be mass assigned for security purposes it is advised to set to false or remove it and explicitly make individual fields accessible as needed var array protected accessible true id false fields that are excluded from json versions of the entity var array protected hidden password token protected virtual the virtual properties are working if the data already exists but since my upgrade to it doesn t work with a new entity what happened undefined index first name in the new entity hasn t the expected keys what you expected to happen what can i do to make this work again | 1 |
4,446 | 2,610,094,237 | IssuesEvent | 2015-02-26 18:28:27 | chrsmith/dsdsdaadf | https://api.github.com/repos/chrsmith/dsdsdaadf | opened | 深圳红蓝光祛除青春痘 | auto-migrated Priority-Medium Type-Defect | ```
Shenzhen red-blue light acne removal [Shenzhen Hanfang Keyan national
hotline 400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a
professional acne-removal chain built around a Korean secret formula,
Hanfang Keyan, a state-licensed therapeutic cosmetic product and premium
acne remedy. The chain pairs the Korean secret formula with a professional
"no-rebound" healthy acne-removal technique and an advanced "deluxe
color-light" instrument, pioneering contract-guaranteed treatment of
pimples and acne in China and successfully clearing the pimples from many
customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:14 | 1.0 | 深圳红蓝光祛除青春痘 - ```
Shenzhen red-blue light acne removal [Shenzhen Hanfang Keyan national
hotline 400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a
professional acne-removal chain built around a Korean secret formula,
Hanfang Keyan, a state-licensed therapeutic cosmetic product and premium
acne remedy. The chain pairs the Korean secret formula with a professional
"no-rebound" healthy acne-removal technique and an advanced "deluxe
color-light" instrument, pioneering contract-guaranteed treatment of
pimples and acne in China and successfully clearing the pimples from many
customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:14 | defect | 深圳红蓝光祛除青春痘 深圳红蓝光祛除青春痘【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩�� �秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,� ��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹 ”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内�� �业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上� ��痘痘。 original issue reported on code google com by szft com on may at | 1 |
29,916 | 5,956,386,373 | IssuesEvent | 2017-05-28 16:19:38 | bridgedotnet/Bridge | https://api.github.com/repos/bridgedotnet/Bridge | closed | Case convention is ignored if nested class exists | defect in progress | ### Steps To Reproduce
https://deck.net/6cd77c2dd2e4d1c55206ef77a669d41e
```c#
[External]
[Namespace(false)]
[Convention(Target = ConventionTarget.Class, Notation = Notation.LowerCamelCase)]
public static class Startup1
{
public static class Next
{
public static void Test()
{
}
}
}
[External]
[Namespace(false)]
[Convention(Target = ConventionTarget.Class, Notation = Notation.LowerCamelCase)]
public static class Startup2
{
public static void Test()
{
}
}
public class Program
{
public static void Main()
{
Startup1.Next.Test();
Startup2.Test();
}
}
```
### Expected Result
```js
/**
* @compiler Bridge.NET 16.0.0-beta
*/
Bridge.assembly("Demo", function ($asm, globals) {
"use strict";
Bridge.define("Demo.Program", {
main: function Main() {
startup1.next.Test();
startup2.Test();
}
});
});
```
### Actual Result
```js
/**
* @compiler Bridge.NET 16.0.0-beta
*/
Bridge.assembly("Demo", function ($asm, globals) {
"use strict";
Bridge.define("Demo.Program", {
main: function Main() {
Startup1.next.Test();
startup2.Test();
}
});
});
```
| 1.0 | Case convention is ignored if nested class exists - ### Steps To Reproduce
https://deck.net/6cd77c2dd2e4d1c55206ef77a669d41e
```c#
[External]
[Namespace(false)]
[Convention(Target = ConventionTarget.Class, Notation = Notation.LowerCamelCase)]
public static class Startup1
{
public static class Next
{
public static void Test()
{
}
}
}
[External]
[Namespace(false)]
[Convention(Target = ConventionTarget.Class, Notation = Notation.LowerCamelCase)]
public static class Startup2
{
public static void Test()
{
}
}
public class Program
{
public static void Main()
{
Startup1.Next.Test();
Startup2.Test();
}
}
```
### Expected Result
```js
/**
* @compiler Bridge.NET 16.0.0-beta
*/
Bridge.assembly("Demo", function ($asm, globals) {
"use strict";
Bridge.define("Demo.Program", {
main: function Main() {
startup1.next.Test();
startup2.Test();
}
});
});
```
### Actual Result
```js
/**
* @compiler Bridge.NET 16.0.0-beta
*/
Bridge.assembly("Demo", function ($asm, globals) {
"use strict";
Bridge.define("Demo.Program", {
main: function Main() {
Startup1.next.Test();
startup2.Test();
}
});
});
```
| defect | case convention is ignored if nested class exists steps to reproduce c public static class public static class next public static void test public static class public static void test public class program public static void main next test test expected result js compiler bridge net beta bridge assembly demo function asm globals use strict bridge define demo program main function main next test test actual result js compiler bridge net beta bridge assembly demo function asm globals use strict bridge define demo program main function main next test test | 1 |
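The row above reports that `Notation.LowerCamelCase` is skipped for a class that contains a nested class. For reference, the conversion the convention is expected to perform is just a first-letter lowering; a Python sketch of that rule, not Bridge.NET's actual implementation:

```python
def lower_camel_case(name: str) -> str:
    """Lower the first letter of an identifier, as Notation.LowerCamelCase
    should do for class names (sketch only, not Bridge.NET code)."""
    return name[:1].lower() + name[1:]


print(lower_camel_case("Startup1"))  # startup1
print(lower_camel_case("Next"))      # next
```

Applied to the expected output in the row, both `Startup1` and `Startup2` should become `startup1` and `startup2`; the bug is that the outer class keeps its original casing when it has a nested class.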
27,474 | 13,255,890,931 | IssuesEvent | 2020-08-20 11:45:51 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | System.Text.Json 5.x preview deserialization is slower than Newtonsoft when using Stream scenarios on Xamarin platforms | tenet-performance | ### Description
I have found that if you use Newtonsoft's ability to deserialize from a `Stream `via its `JsonTextReader()`, then on some platforms, it is comparable to or even better than System.Text.Json. This is not the stated goal of your efforts.
So a typical case is deserializing the result of a HTTP call, which in Newtonsoft should be done like this:
```
HttpResponseMessage response = await _client.GetAsync($"{ControllerName}{additionalParameters}");
response.EnsureSuccessStatusCode();
using (var stream = await response.Content.ReadAsStreamAsync())
using (var reader = new StreamReader(stream))
using (var json = new JsonTextReader(reader))
items = _serializer.Deserialize<IEnumerable<T>>(json);
```
While with System.Text.Json the deserializing part will be:
```
using (var stream = await response.Content.ReadAsStreamAsync())
_items = await System.Text.Json.JsonSerializer.DeserializeAsync<IEnumerable<T>>(stream, JsonOptions.Default1);
```
I have found the performance to be roughly the same on iOS devices and MUCH slower (25-40%) on Android. But even equal performance is not the goal, I presume.
I don't know of a unit testing framework that would let me verify this, especially on real physical devices, which may have quite different characteristics than emulators.
So I have created a small Xamarin.Forms app to measure this, which I will be happy to share with you if you find it relevant.
I used another app of mine to [share screen shots of the results](https://journeydoc-dev-web.azurewebsites.net/PointsList/b550fd78-ebde-4c85-b017-8b41f9c5ee37/9c7ba16b-292e-4fd6-8e95-2369ff50db0c).
### Configuration
* Net Standard 2.1 (Xamarin.Forms 4.8.0.1269)
* Newtonsoft.Json 12.0.3
* System.Text.Json 5.0.0-preview.7.20364.11
* iOS 13.6 and Xamarin 10
* iPhone7 and Galaxy A50
### Data
In my app, I store a JSON payload from a real world domain model of 526 KB. It is deserialized 5 times and the average is returned for the two serializers.
If can measure that the performance of this is 3-7% slower on the iOS device and 25-40% slower on the Android device. On simulators, I get similar results.
<!--
* Please include any benchmark results, images of graphs, timings or measurements, or callstacks that are relevant.
* If possible please include text as text rather than images (so it shows up in searches).
* If applicable please include before and after measurements.
* There is helpful information about measuring code in this repo [here](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md).
--> | True | System.Text.Json 5.x preview deserialization is slower than Newtonsoft when using Stream scenarios on Xamarin platforms - ### Description
I have found that if you use Newtonsoft's ability to deserialize from a `Stream `via its `JsonTextReader()`, then on some platforms, it is comparable to or even better than System.Text.Json. This is not the stated goal of your efforts.
So a typical case is deserializing the result of a HTTP call, which in Newtonsoft should be done like this:
```
HttpResponseMessage response = await _client.GetAsync($"{ControllerName}{additionalParameters}");
response.EnsureSuccessStatusCode();
using (var stream = await response.Content.ReadAsStreamAsync())
using (var reader = new StreamReader(stream))
using (var json = new JsonTextReader(reader))
items = _serializer.Deserialize<IEnumerable<T>>(json);
```
While with System.Text.Json the deserializing part will be:
```
using (var stream = await response.Content.ReadAsStreamAsync())
_items = await System.Text.Json.JsonSerializer.DeserializeAsync<IEnumerable<T>>(stream, JsonOptions.Default1);
```
I have found the performance to be roughly the same on iOS devices and MUCH slower (25-40%) on Android. But even equal performance is not the goal, I presume.
I don't know of a unit testing framework that would let me verify this, especially on real physical devices, which may have quite different characteristics than emulators.
So I have created a small Xamarin.Forms app to measure this, which I will be happy to share with you if you find it relevant.
I used another app of mine to [share screen shots of the results](https://journeydoc-dev-web.azurewebsites.net/PointsList/b550fd78-ebde-4c85-b017-8b41f9c5ee37/9c7ba16b-292e-4fd6-8e95-2369ff50db0c).
### Configuration
* Net Standard 2.1 (Xamarin.Forms 4.8.0.1269)
* Newtonsoft.Json 12.0.3
* System.Text.Json 5.0.0-preview.7.20364.11
* iOS 13.6 and Xamarin 10
* iPhone7 and Galaxy A50
### Data
In my app, I store a JSON payload from a real world domain model of 526 KB. It is deserialized 5 times and the average is returned for the two serializers.
If can measure that the performance of this is 3-7% slower on the iOS device and 25-40% slower on the Android device. On simulators, I get similar results.
<!--
* Please include any benchmark results, images of graphs, timings or measurements, or callstacks that are relevant.
* If possible please include text as text rather than images (so it shows up in searches).
* If applicable please include before and after measurements.
* There is helpful information about measuring code in this repo [here](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md).
--> | non_defect | system text json x preview deserialization is slower than newtonsoft when using stream scenarios on xamarin platforms description i have found that if you use newtonsoft s ability to deserialize from a stream via its jsontextreader then on some platforms it is comparable to or even better than system text json this is not the stated goal of your efforts so a typical case is deserializing the result of a http call which in newtonsoft should be done like this httpresponsemessage response await client getasync controllername additionalparameters response ensuresuccessstatuscode using var stream await response content readasstreamasync using var reader new streamreader stream using var json new jsontextreader reader items serializer deserialize json while with system text json the deserializing part will be using var stream await response content readasstreamasync items await system text json jsonserializer deserializeasync stream jsonoptions i have found the performance to be roughly the same on ios devices and much slower on android but even equal performance is not the goal i presume i don t know of a unit testing framework that would let me verify this especially on real physical devices which may have quite different characteristics than emulators so i have created a small xamarin forms app to measure this which i will be happy to share with you if you find it relevant i used another app of mine to configuration net standard xamarin forms newtonsoft json system text json preview ios and xamarin and galaxy data in my app i store a json payload from a real world domain model of kb it is deserialized times and the average is returned for the two serializers if can measure that the performance of this is slower on the ios device and slower on the android device on simulators i get similar results please include any benchmark results images of graphs timings or measurements or callstacks that are relevant if possible please include text as text 
rather than images so it shows up in searches if applicable please include before and after measurements there is helpful information about measuring code in this repo | 0 |
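The report above measures each serializer by deserializing the payload five times and averaging. As a language-neutral illustration of that methodology only (a Python analogy; the report itself benchmarks .NET serializers, and the payload here is synthetic):

```python
import io
import json
import time


def avg_deserialize_ms(payload: bytes, runs: int = 5) -> float:
    """Average wall-clock milliseconds to stream-deserialize a JSON payload,
    mirroring the report's "deserialize 5 times, take the average" method.
    Python analogy only -- not a measurement of the .NET serializers."""
    total = 0.0
    for _ in range(runs):
        stream = io.BytesIO(payload)
        start = time.perf_counter()
        json.load(stream)  # deserialize directly from the stream
        total += time.perf_counter() - start
    return total / runs * 1000


payload = json.dumps([{"id": i} for i in range(1000)]).encode()
print(f"{avg_deserialize_ms(payload):.3f} ms")
```

Averaging several runs, as the report does, smooths out JIT warm-up and scheduling noise, which matters when the measured differences are in the 3-40% range.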
55,158 | 14,244,358,322 | IssuesEvent | 2020-11-19 06:45:46 | line/armeria | https://api.github.com/repos/line/armeria | closed | IPV6 with scope id | defect | Endpoint accepts the ipv6 address with scope id, like xxxxxxxxxxxxxxxxxxxxx%2, but downstream calls use the Endpoint's host without scope id:
io.netty.channel.AbstractChannel$AnnotatedConnectException: connect(..) failed: Invalid argument: xxxxxxxxxxxxxxxxxxxxxx%2 | 1.0 | IPV6 with scope id - Endpoint accepts the ipv6 address with scope id, like xxxxxxxxxxxxxxxxxxxxx%2, but downstream calls use the Endpoint's host without scope id:
io.netty.channel.AbstractChannel$AnnotatedConnectException: connect(..) failed: Invalid argument: xxxxxxxxxxxxxxxxxxxxxx%2 | defect | with scope id endpoint accepts the address with scope id like xxxxxxxxxxxxxxxxxxxxx but downstream calls use the endpoint s host without scope id io netty channel abstractchannel annotatedconnectexception connect failed invalid argument xxxxxxxxxxxxxxxxxxxxxx | 1 |
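The row above involves IPv6 literals carrying a zone (scope) id, e.g. an address ending in `%2`, being passed straight to `connect()`. A Python sketch of splitting the zone id off such a literal; illustrative only, not Armeria code, and the `%` separator follows the RFC 4007 convention:

```python
def split_scope(host):
    """Split an IPv6 literal like 'fe80::1%2' into (address, zone_id).

    Sketch only: the '%' zone-id separator follows RFC 4007; returns
    zone_id=None when the literal carries no scope.
    """
    address, sep, zone = host.partition("%")
    return address, (zone if sep else None)


print(split_scope("fe80::1%2"))   # ('fe80::1', '2')
print(split_scope("fe80::1"))     # ('fe80::1', None)
```

A client could use such a split either to strip the zone before connecting or to pass it through a scope-aware socket API, depending on which behavior the platform supports.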
246,103 | 18,835,523,226 | IssuesEvent | 2021-11-11 00:08:31 | zulip/zulip | https://api.github.com/repos/zulip/zulip | opened | Change register_queue API for `narrow` to match how narrows are encoded elsewhere | area: api area: documentation (api and integrations) | It was pointed out in [this conversation](https://chat.zulip.org/#narrow/stream/19-documentation/topic/'narrow'.20inconsistent.20api.20docs/near/1278088) that the /register API is still using the pre-dictionary old format for encoding narrows. We could document the inconsistency, but it'd be better to make it consistent, because the only specific production use for this feature that I'm aware of is `zerver/views/home.py` logic that's internal to the server, so there shouldn't be any complicated client-side migrations to do as part of cleaning this up.
(The background is that this is a rarely-used feature that likely was forgotten in the migration to the dictionary-based narrow format years ago).
This is an API change and should use our standard approach for documenting API changes in the API changelog, etc., so that any clients using this API can handle things correctly.
| 1.0 | Change register_queue API for `narrow` to match how narrows are encoded elsewhere - It was pointed out in [this conversation](https://chat.zulip.org/#narrow/stream/19-documentation/topic/'narrow'.20inconsistent.20api.20docs/near/1278088) that the /register API is still using the pre-dictionary old format for encoding narrows. We could document the inconsistency, but it'd be better to make it consistent, because the only specific production use for this feature that I'm aware of is `zerver/views/home.py` logic that's internal to the server, so there shouldn't be any complicated client-side migrations to do as part of cleaning this up.
(The background is that this is a rarely-used feature that likely was forgotten in the migration to the dictionary-based narrow format years ago).
This is an API change and should use our standard approach for documenting API changes in the API changelog, etc., so that any clients using this API can handle things correctly.
| non_defect | change register queue api for narrow to match how narrows are encoded elsewhere it was pointed out in that the register api is still using the pre dictionary old format for encoding narrows we could document the inconsistency but it d be better to make it consistency because the only specific production use for this feature that i m aware of is zerver views home py logic that s internal to the server so there shouldn t be any complicated client side migrations to do as part of cleaning this up the background is that this is a rarely used feature that likely was forgotten in the migration to the dictionary based narrow format years ago this is an api change and should use our standard approach for documenting api changes in the api changelog etc so that any clients using this api can handle things correctly | 0 |
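The two narrow encodings discussed in the row above can be illustrated concretely. A hedged Python sketch converting the pre-dictionary list-of-pairs format to the dictionary-based one; the `operator`/`operand` field names are taken from Zulip's public narrow format and should be treated as an assumption here:

```python
def upgrade_narrow(old_narrow):
    """Convert the pre-dictionary narrow format to the dictionary one.

    Sketch based on the two encodings Zulip clients have used: the old
    list-of-pairs form and the newer list-of-dicts form with
    'operator'/'operand' keys (field names assumed from the public API).
    """
    return [{"operator": op, "operand": operand} for op, operand in old_narrow]


old = [["stream", "Denmark"], ["topic", "copenhagen"]]
print(upgrade_narrow(old))
```

A migration like the one the issue proposes would make `/register` accept the dictionary form shown here, matching the rest of the API.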
230,942 | 25,482,819,042 | IssuesEvent | 2022-11-26 01:37:51 | maddyCode23/linux-4.1.15 | https://api.github.com/repos/maddyCode23/linux-4.1.15 | reopened | CVE-2017-5549 (Medium) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2017-5549 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/kl5kusb105.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The klsi_105_get_line_state function in drivers/usb/serial/kl5kusb105.c in the Linux kernel before 4.9.5 places uninitialized heap-memory contents into a log entry upon a failure to read the line status, which allows local users to obtain sensitive information by reading the log.
<p>Publish Date: 2017-02-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-5549>CVE-2017-5549</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2017-5549">https://www.linuxkernelcves.com/cves/CVE-2017-5549</a></p>
<p>Release Date: 2017-02-06</p>
<p>Fix Resolution: v4.10-rc4,v3.12.70,v3.16.41,v3.2.86,v4.1.39,v4.4.44,v4.9.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2017-5549 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2017-5549 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/kl5kusb105.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The klsi_105_get_line_state function in drivers/usb/serial/kl5kusb105.c in the Linux kernel before 4.9.5 places uninitialized heap-memory contents into a log entry upon a failure to read the line status, which allows local users to obtain sensitive information by reading the log.
<p>Publish Date: 2017-02-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-5549>CVE-2017-5549</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2017-5549">https://www.linuxkernelcves.com/cves/CVE-2017-5549</a></p>
<p>Release Date: 2017-02-06</p>
<p>Fix Resolution: v4.10-rc4,v3.12.70,v3.16.41,v3.2.86,v4.1.39,v4.4.44,v4.9.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers usb serial c vulnerability details the klsi get line state function in drivers usb serial c in the linux kernel before places uninitialized heap memory contents into a log entry upon a failure to read the line status which allows local users to obtain sensitive information by reading the log publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
16,546 | 2,914,757,626 | IssuesEvent | 2015-06-23 08:08:53 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | Filter for Mobile DataList appears 3 times. | 5.1.20 5.2.7 defect | For some time, we had some mobile pages using Primefaces Mobile. In a concrete page we have a dataList component with a single filter using passthrough components which used to work since PF 5.1.3. But last week we upgraded Primefaces to 5.1.4 and now instead of 1 single filter, appears 3 filters in the same page and the first one does not work.
We upgraded to PF 5.1.4 because it fixes some bugs related to other component, autocomplete, so we don't want to downgrade version.
I also created a forum thread about this: http://forum.primefaces.org/viewtopic.php?f=8&t=40481
Here is the code snippet:
```xml
<h:form id="listForm">
<p:dataList id="filesList" value="#{mobileSignFilesBean.filesList}" var="files" pt:data-filter="true" pt:data-inset="true" >
<p:column filterBy="#{files.fileNumber} #{files.customer} #{files.responsible}" filterMatchMode="contains">
<p:commandLink value="#{files.fileNumber} - #{files.customer} - #{files.responsible}" update=":details:detailForm" action="pm:details">
<f:setPropertyActionListener value="#{files}" target="#{mobileSignFilesBean.fileToDo}" />
</p:commandLink>
</p:column>
</p:dataList>
</h:form>
``` | 1.0 | Filter for Mobile DataList appears 3 times. - For some time, we had some mobile pages using Primefaces Mobile. In a concrete page we have a dataList component with a single filter using passthrough components which used to work since PF 5.1.3. But last week we upgraded Primefaces to 5.1.4 and now instead of 1 single filter, appears 3 filters in the same page and the first one does not work.
We upgraded to PF 5.1.4 because it fixes some bugs related to other component, autocomplete, so we don't want to downgrade version.
I also created a forum thread about this: http://forum.primefaces.org/viewtopic.php?f=8&t=40481
Here is the code snippet:
```xml
<h:form id="listForm">
<p:dataList id="filesList" value="#{mobileSignFilesBean.filesList}" var="files" pt:data-filter="true" pt:data-inset="true" >
<p:column filterBy="#{files.fileNumber} #{files.customer} #{files.responsible}" filterMatchMode="contains">
<p:commandLink value="#{files.fileNumber} - #{files.customer} - #{files.responsible}" update=":details:detailForm" action="pm:details">
<f:setPropertyActionListener value="#{files}" target="#{mobileSignFilesBean.fileToDo}" />
</p:commandLink>
</p:column>
</p:dataList>
</h:form>
``` | defect | filter for mobile datalist appears times for some time we had some mobile pages using primefaces mobile in a concrete page we have a datalist component with a single filter using passthrough components which used to work since pf but last week we upgraded primefaces to and now instead of single filter appears filters in the same page and the first one does not work we upgraded to pf because it fixes some bugs related to other component autocomplete so we don t want to downgrade version i also created a forum thread about this here is the code snippet xml | 1 |
2,677 | 4,877,530,935 | IssuesEvent | 2016-11-16 15:54:56 | CartoDB/cartodb | https://api.github.com/repos/CartoDB/cartodb | closed | We should not let execute geocoding jobs concurrently | Data-services geocoding | After some problems spotted by @AbelVM we've found that if we call two heavy geocoding tasks we are going to get a deadlock because they're acting on the same table so we need to forbid it.
To forbid this part we need to check for other geocoding jobs in the `geocodings` table for the same table with an status of `[started|running|submitted]`.
Also we have to check why are we not storing the `remote_id` from the first time because if the job fails we can use that `remote_id` to retrieve the finished batched job from nokia.
Last thing is to note that we're truncating the logs in the logs table. Most probably we are reaching the max number of characters.
// @saleiva @rafatower
| 1.0 | We should not let execute geocoding jobs concurrently - After some problems spotted by @AbelVM we've found that if we call two heavy geocoding tasks we are going to get a deadlock because they're acting on the same table so we need to forbid it.
To forbid this part we need to check for other geocoding jobs in the `geocodings` table for the same table with an status of `[started|running|submitted]`.
Also we have to check why are we not storing the `remote_id` from the first time because if the job fails we can use that `remote_id` to retrieve the finished batched job from nokia.
Last thing is to note that we're truncating the logs in the logs table. Most probably we are reaching the max number of characters.
// @saleiva @rafatower
| non_defect | we should not let execute geocoding jobs concurrently after some problems spotted by abelvm we ve found that if we call two heavy geocoding tasks we are going to get a deadlock because they re acting on the same table so we need to forbid it to forbid this part we need to check for other geocoding jobs in the geocodings table for the same table with an status of also we have to check why are we not storing the remote id from the first time because if the job fails we can use that remote id to retrieve the finished batched job from nokia last thing is to note that we re truncating the logs in the logs table most probably we are reaching the max number of characters saleiva rafatower | 0 |
412,327 | 27,854,127,702 | IssuesEvent | 2023-03-20 21:10:51 | opendp/opendp | https://api.github.com/repos/opendp/opendp | closed | Port of remaining SmartNoise notebooks | CATEGORY: Documentation OpenDP Core Effort 3 - Large :cake: sn-core-deprecate | After we have a reasonable version of #199, we should look at converting the remaining SmartNoise notebooks to use OpenDP directly.
Ready:
- [x] https://github.com/opendp/opendp/issues/332
- [x] https://github.com/opendp/opendp/issues/333
- [ ] https://github.com/opendp/opendp/issues/334
- [x] https://github.com/opendp/opendp/issues/335
Missing dependencies:
- [x] https://github.com/opendp/opendp/issues/336
- https://github.com/opendp/smartnoise-samples/blob/master/analysis/covariance.ipynb
- needs data munging transformations, covariance is multivariate
- https://github.com/opendp/smartnoise-samples/blob/master/analysis/tutorial_mental_health_in_tech_survey.ipynb
- needs filtering | 1.0 | Port of remaining SmartNoise notebooks - After we have a reasonable version of #199, we should look at converting the remaining SmartNoise notebooks to use OpenDP directly.
Ready:
- [x] https://github.com/opendp/opendp/issues/332
- [x] https://github.com/opendp/opendp/issues/333
- [ ] https://github.com/opendp/opendp/issues/334
- [x] https://github.com/opendp/opendp/issues/335
Missing dependencies:
- [x] https://github.com/opendp/opendp/issues/336
- https://github.com/opendp/smartnoise-samples/blob/master/analysis/covariance.ipynb
- needs data munging transformations, covariance is multivariate
- https://github.com/opendp/smartnoise-samples/blob/master/analysis/tutorial_mental_health_in_tech_survey.ipynb
- needs filtering | non_defect | port of remaining smartnoise notebooks after we have a reasonable version of we should look at converting the remaining smartnoise notebooks to use opendp directly ready missing dependencies needs data munging transformations covariance is multivariate needs filtering | 0 |
578,125 | 17,144,811,363 | IssuesEvent | 2021-07-13 13:37:06 | eventespresso/barista | https://api.github.com/repos/eventespresso/barista | closed | Review known issues we have with chakra when v1.0 is available | C: assets 💎 D: Packages 📦 P3: med priority 😐 S8: needs-feedback ❔ T: task 🧹 | 1. Modal shrinking, more info here: https://github.com/eventespresso/barista/issues/183
2. Tooltips are not working if displayed in a modal | 1.0 | Review known issues we have with chakra when v1.0 is available - 1. Modal shrinking, more info here: https://github.com/eventespresso/barista/issues/183
2. Tooltips are not working if displayed in a modal | non_defect | review known issues we have with chakra when is available modal shrinking more info here tooltips are not working if displayed in a modal | 0 |
286,759 | 24,782,632,343 | IssuesEvent | 2022-10-24 07:06:24 | Joystream/pioneer | https://api.github.com/repos/Joystream/pioneer | closed | Allow to re-use accounts with voting locks from previous election cycle without recovery. | bug mainnet to-triage qa-task qa-tested-ready-for-prod carthage | ## Scope
* [ ] In the staking account selector for voting display all accounts user has.
* [ ] Apply same designs we used in proposals account selector to voting accounts with specificities:
Voting lock applied in previous elections can be reused in subsequent elections without recovering
Voting lock applied in current election cannot be reused in current election (show as greyed out with lock icon and tooltip). Tooltip copy: "
* [ ] :point_up: Add tooltip on hover "Voting lock applies to your staking account when vote is casted, preventing this account to be reused in the same election cycle again. To make the balances transferable again, you can recover this lock in my profile> [linking to accounts tab]."
* [ ] :point_up: :point_up: Do not block accounts with voting locks from previous elections to be used in current election.
* [ ] Only the last lock can be recovered. Casting new vote automatically "overrides" the previous lock. [LInk to slack conv with confirmation](https://joystreamworkspace.slack.com/archives/C039XD9HKKL/p1665583869334289?thread_ts=1665581515.515139&cid=C039XD9HKKL)
## Context
**Taken at**: https://dao.joystream.org/#/election
**Fields**:
* Houston, we have a problem.: make all my stakes available for voting even if they have (vote) locks
* Comment: TODO: offer all stakes available for voting even if they have (vote) locks.
## Context
The stake used in the previous election is not available in this election.
It was used on a candidate who due to the known runtime bug received enough votes and stake but didn't get elected. However even if used on a winner it has to be available for selection in the next election.
The problem still is and it feels like i wrote this before that the UI tries to decide for me but gets it wrong. Do not hide stakes with existing vote locks from past elections/
Why does the lock expire 3d after the election vs on announcing like the handbooks [requires](https://joystream.gitbook.io/testnet-workspace/system/council#voting).
User needs to be able to decide and fail to learn.
* Feedback: l1dev

_[Download original image](https://resources.usersnap.com/company/f9f719df-a929-4461-bb6e-46dbfbceffb7/datapoint_screenshot/afe2abd8-c53c-4d12-8581-3ec3b43ed49b-annotated_e68a119f-4cdb-4bb2-9561-633b1ff2f355.png?etag=44c1e767d72e47e27e5ba0b959359f13)_
**Browser**: Chrome 102 (Linux)
**Screen size**: 1920x1080
**Browser size**: 1428x1371
**[Open #340 in Usersnap](https://app.usersnap.com/l/feedback/4209310a-cde4-4f56-8b7e-37554c18793a?utm_medium=integration&utm_campaign=integration&utm_content=open)**
Powered by **[Usersnap](https://usersnap.com/?utm_source=product&utm_medium=poweredbylink&utm_campaign=github_entry)**.
┆Issue is synchronized with this [Asana task](https://app.asana.com/0/1202132419573087/1203206170229285) by [Unito](https://www.unito.io)
| 1.0 | Allow to re-use accounts with voting locks from previous election cycle without recovery. - ## Scope
* [ ] In the staking account selector for voting display all accounts user has.
* [ ] Apply same designs we used in proposals account selector to voting accounts with specificities:
Voting lock applied in previous elections can be reused in subsequent elections without recovering
Voting lock applied in current election cannot be reused in current election (show as greyed out with lock icon and tooltip). Tooltip copy: "
* [ ] :point_up: Add tooltip on hover "Voting lock applies to your staking account when vote is casted, preventing this account to be reused in the same election cycle again. To make the balances transferable again, you can recover this lock in my profile> [linking to accounts tab]."
* [ ] :point_up: :point_up: Do not block accounts with voting locks from previous elections to be used in current election.
* [ ] Only the last lock can be recovered. Casting new vote automatically "overrides" the previous lock. [LInk to slack conv with confirmation](https://joystreamworkspace.slack.com/archives/C039XD9HKKL/p1665583869334289?thread_ts=1665581515.515139&cid=C039XD9HKKL)
## Context
**Taken at**: https://dao.joystream.org/#/election
**Fields**:
* Houston, we have a problem.: make all my stakes available for voting even if they have (vote) locks
* Comment: TODO: offer all stakes available for voting even if they have (vote) locks.
## Context
The stake used in the previous election is not available in this election.
It was used on a candidate who due to the known runtime bug received enough votes and stake but didn't get elected. However even if used on a winner it has to be available for selection in the next election.
The problem still is and it feels like i wrote this before that the UI tries to decide for me but gets it wrong. Do not hide stakes with existing vote locks from past elections/
Why does the lock expire 3d after the election vs on announcing like the handbooks [requires](https://joystream.gitbook.io/testnet-workspace/system/council#voting).
User needs to be able to decide and fail to learn.
* Feedback: l1dev

_[Download original image](https://resources.usersnap.com/company/f9f719df-a929-4461-bb6e-46dbfbceffb7/datapoint_screenshot/afe2abd8-c53c-4d12-8581-3ec3b43ed49b-annotated_e68a119f-4cdb-4bb2-9561-633b1ff2f355.png?etag=44c1e767d72e47e27e5ba0b959359f13)_
**Browser**: Chrome 102 (Linux)
**Screen size**: 1920x1080
**Browser size**: 1428x1371
**[Open #340 in Usersnap](https://app.usersnap.com/l/feedback/4209310a-cde4-4f56-8b7e-37554c18793a?utm_medium=integration&utm_campaign=integration&utm_content=open)**
Powered by **[Usersnap](https://usersnap.com/?utm_source=product&utm_medium=poweredbylink&utm_campaign=github_entry)**.
┆Issue is synchronized with this [Asana task](https://app.asana.com/0/1202132419573087/1203206170229285) by [Unito](https://www.unito.io)
| non_defect | allow to re use accounts with voting locks from previous election cycle without recovery scope in the staking account selector for voting display all accounts user has apply same designs we used in proposals account selector to voting accounts with specificities voting lock applied in previous elections can be reused in subsequent elections without recovering voting lock applied in current election cannot be reused in current election show as greyed out with lock icon and tooltip tooltip copy point up add tooltip on hover voting lock applies to your staking account when vote is casted preventing this account to be reused in the same election cycle again to make the balances transferable again you can recover this lock in my profile point up point up do not block accounts with voting locks from previous elections to be used in current election only the last lock can be recovered casting new vote automatically overrides the previous lock context taken at fields houston we have a problem make all my stakes available for voting even if they have vote locks comment todo offer all stakes available for voting even if they have vote locks context the stake used in the previous election is not available in this election it was used on a candidate who due to the known runtime bug received enough votes and stake but didn t get elected however even if used on a winner it has to be available for selection in the next election the problem still is and it feels like i wrote this before that the ui tries to decide for me but gets it wrong do not hide stakes with existing vote locks from past elections why does the lock expire after the election vs on announcing like the handbooks user needs to be able to decide and fail to learn feedback browser chrome linux screen size browser size powered by ┆issue is synchronized with this by | 0 |
678,661 | 23,205,980,180 | IssuesEvent | 2022-08-02 05:22:44 | phetsims/axon | https://api.github.com/repos/phetsims/axon | closed | Can we get rid of getListenerCount? | priority:2-high dev:typescript | From https://github.com/phetsims/axon/issues/402, @marlitas and I would like to remove getListenerCount from the Emitter and Property interfaces. Current usages seem to only be in tests. Can we get rid of the tests? If not, perhaps subclass and make that method public? | 1.0 | Can we get rid of getListenerCount? - From https://github.com/phetsims/axon/issues/402, @marlitas and I would like to remove getListenerCount from the Emitter and Property interfaces. Current usages seem to only be in tests. Can we get rid of the tests? If not, perhaps subclass and make that method public? | non_defect | can we get rid of getlistenercount from marlitas and i would like to remove getlistenercount from the emitter and property interfaces current usages seem to only be in tests can we get rid of the tests if not perhaps subclass and make that method public | 0 |
576 | 7,974,371,907 | IssuesEvent | 2018-07-17 05:07:31 | zcash/zcash | https://api.github.com/repos/zcash/zcash | closed | Zcashd binary not working on CentOS 7 | Linux portability usi | ```
[zcash@local ~]$ ./zcashd
./zcashd: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by ./zcashd)
./zcashd: /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by ./zcashd)
``` | True | Zcashd binary not working on CentOS 7 - ```
[zcash@local ~]$ ./zcashd
./zcashd: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by ./zcashd)
./zcashd: /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by ./zcashd)
``` | non_defect | zcashd binary not working on centos zcashd zcashd libstdc so version glibcxx not found required by zcashd zcashd libstdc so version cxxabi not found required by zcashd | 0 |
6,563 | 2,610,256,916 | IssuesEvent | 2015-02-26 19:22:00 | chrsmith/dsdsdaadf | https://api.github.com/repos/chrsmith/dsdsdaadf | opened | 深圳激光祛痘要多少钱 | auto-migrated Priority-Medium Type-Defect | ```
深圳激光祛痘要多少钱【深圳韩方科颜全国热线400-869-1818,24
小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩��
�秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,�
��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹
”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内��
�业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上�
��痘痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:37 | 1.0 | 深圳激光祛痘要多少钱 - ```
深圳激光祛痘要多少钱【深圳韩方科颜全国热线400-869-1818,24
小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩��
�秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,�
��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹
”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内��
�业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上�
��痘痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:37 | defect | 深圳激光祛痘要多少钱 深圳激光祛痘要多少钱【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩�� �秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,� ��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹 ”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内�� �业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上� ��痘痘。 original issue reported on code google com by szft com on may at | 1 |
291,551 | 21,929,562,345 | IssuesEvent | 2022-05-23 08:32:48 | hyperledger/iroha | https://api.github.com/repos/hyperledger/iroha | closed | Review crate READMEs | iroha2 Documentation | Review existing readmes:
- [x] [cli](https://github.com/hyperledger/iroha/blob/iroha2-dev/cli/README.md) (#2244 )
- [x] [wasm](https://github.com/hyperledger/iroha/blob/iroha2-dev/wasm/README.md) (#2240)
- [x] [macro](https://github.com/hyperledger/iroha/blob/iroha2-dev/macro/README.md) (#2236)
- [x] [client](https://github.com/hyperledger/iroha/blob/iroha2-dev/client/README.md) (#2234)
- [x] [client_cli](https://github.com/hyperledger/iroha/blob/iroha2-dev/client_cli/README.md) (#2234)
- [x] [tools/kagami](https://github.com/hyperledger/iroha/blob/iroha2-dev/tools/kagami/README.md) (#2220 )
- [x] [client/benches/tps](https://github.com/hyperledger/iroha/blob/iroha2-dev/client/benches/tps/README.md) (#2230)
- [x] [tools/parity_scale_decoder](https://github.com/hyperledger/iroha/blob/iroha2-dev/tools/parity_scale_decoder/README.md) (#2224) | 1.0 | Review crate READMEs - Review existing readmes:
- [x] [cli](https://github.com/hyperledger/iroha/blob/iroha2-dev/cli/README.md) (#2244 )
- [x] [wasm](https://github.com/hyperledger/iroha/blob/iroha2-dev/wasm/README.md) (#2240)
- [x] [macro](https://github.com/hyperledger/iroha/blob/iroha2-dev/macro/README.md) (#2236)
- [x] [client](https://github.com/hyperledger/iroha/blob/iroha2-dev/client/README.md) (#2234)
- [x] [client_cli](https://github.com/hyperledger/iroha/blob/iroha2-dev/client_cli/README.md) (#2234)
- [x] [tools/kagami](https://github.com/hyperledger/iroha/blob/iroha2-dev/tools/kagami/README.md) (#2220 )
- [x] [client/benches/tps](https://github.com/hyperledger/iroha/blob/iroha2-dev/client/benches/tps/README.md) (#2230)
- [x] [tools/parity_scale_decoder](https://github.com/hyperledger/iroha/blob/iroha2-dev/tools/parity_scale_decoder/README.md) (#2224) | non_defect | review crate readmes review existing readmes | 0 |
72,537 | 24,169,317,662 | IssuesEvent | 2022-09-22 17:41:48 | scipy/scipy | https://api.github.com/repos/scipy/scipy | opened | BUG: Can't import rel_entr from scipy.special | defect | ### Describe your issue.
Import statement `from scipy.special import rel_entr` fails
### Reproducing Code Example
```python
from scipy.special import rel_entr
```
### Error message
```shell
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-36fe78ea68ea> in <module>
----> 1 from scipy.special import rel_entr
ImportError: cannot import name 'rel_entr' from 'scipy.special' (unknown location)
```
### SciPy/NumPy/Python version information
1.9.1 1.22.4 sys.version_info(major=3, minor=8, micro=8, releaselevel='final', serial=0) | 1.0 | BUG: Can't import rel_entr from scipy.special - ### Describe your issue.
Import statement `from scipy.special import rel_entr` fails
### Reproducing Code Example
```python
from scipy.special import rel_entr
```
### Error message
```shell
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-36fe78ea68ea> in <module>
----> 1 from scipy.special import rel_entr
ImportError: cannot import name 'rel_entr' from 'scipy.special' (unknown location)
```
### SciPy/NumPy/Python version information
1.9.1 1.22.4 sys.version_info(major=3, minor=8, micro=8, releaselevel='final', serial=0) | defect | bug can t import rel entr from scipy special describe your issue import statement from scipy special import rel entr fails reproducing code example python from scipy special import rel entr error message shell importerror traceback most recent call last in from scipy special import rel entr importerror cannot import name rel entr from scipy special unknown location scipy numpy python version information sys version info major minor micro releaselevel final serial | 1 |