| Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 7–112) | repo_url (string, length 36–141) | action (string, 3 classes) | title (string, length 1–744) | labels (string, length 4–574) | body (string, length 9–211k) | index (string, 10 classes) | text_combine (string, length 96–211k) | label (string, 2 classes) | text (string, length 96–188k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
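For orientation, a minimal pandas sketch of how a column summary like the one above can be reproduced, assuming the table was exported to a CSV (the file name is hypothetical; the column names are taken from the header row):

```python
import pandas as pd

# Hypothetical file name; the columns match the header above.
df = pd.read_csv("issues_events.csv")

print(df.dtypes)                          # int64 / float64 / object per column
print(df["label"].value_counts())         # process vs non_process
print(df["binary_label"].value_counts())  # 0 vs 1, mirrors `label`
print(df["body"].str.len().describe())    # string-length ranges as in the header
```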
139,319
| 20,824,407,511
|
IssuesEvent
|
2022-03-18 18:54:29
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
Icon tree shaking removes ligatures
|
tool framework f: material design a: typography p: third party
|
Font Awesome uses two glyphs for the different colored parts of their duotone icons. They recently updated this system to no longer use two separate code points for their primary and secondary glyph. Instead, they opted for ligatures - two consecutive equal code points of the primary are mapped to the secondary glyph.
As there is no way to address ligatures using IconData (because it only allows for one code point), ligatures will always be removed.
I am not sure if this is something that should even be worked on, as this is such a niche case - but I wanted to let you know nevertheless.
Expected (web):

Actual (apk with icon tree shaking):

|
1.0
|
Icon tree shaking removes ligatures - Font Awesome uses two glyphs for the different colored parts of their duotone icons. They recently updated this system to no longer use two separate code points for their primary and secondary glyph. Instead, they opted for ligatures - two consecutive equal code points of the primary are mapped to the secondary glyph.
As there is no way to address ligatures using IconData (because it only allows for one code point), ligatures will always be removed.
I am not sure if this is something that should even be worked on, as this is such a niche case - but I wanted to let you know nevertheless.
Expected (web):

Actual (apk with icon tree shaking):

|
non_process
|
icon tree shaking removes ligatures font awesome uses two glyphs for the different colored parts of their duotone icons they recently updated this system to no longer use two separate code points for their primary and secondary glyph instead they opted for ligatures two consecutive equal code points of the primary are mapped to the secondary glyph as there is no way to address ligatures using icondata because it only allows for one code point ligatures will always be removed i am not sure if this is something that should even be worked on as this is such a niche case but i wanted to let you know nevertheless expected web actual apk with icon tree shaking
| 0
|
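The ligature mapping this issue describes can be inspected directly in a font's GSUB table. A minimal sketch with fontTools, assuming a local copy of the duotone font (the file name is hypothetical); LookupType 4 is the ligature-substitution lookup, so if the issue's description holds, a secondary glyph shows up as a ligature whose single component repeats its first glyph:

```python
from fontTools.ttLib import TTFont

# Hypothetical font file; Font Awesome duotone fonts encode the secondary
# glyph as a ligature of two identical code points of the primary.
font = TTFont("fa-duotone-900.ttf")
for lookup in font["GSUB"].table.LookupList.Lookup:
    if lookup.LookupType != 4:  # 4 = ligature substitution
        continue
    for subtable in lookup.SubTable:
        for first, ligatures in subtable.ligatures.items():
            for lig in ligatures:
                # e.g. 'camera' + ['camera'] -> 'camera.secondary' (illustrative names)
                print(first, lig.Component, "->", lig.LigGlyph)
```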
47,053
| 19,559,695,803
|
IssuesEvent
|
2022-01-03 14:39:02
|
PreMiD/Presences
|
https://api.github.com/repos/PreMiD/Presences
|
opened
|
NetGalley | netgalley.com
|
Service Request
|
### Discussed in https://github.com/PreMiD/Presences/discussions/4495
<div type='discussions-op-text'>
<sup>Originally posted by **jeijeisan** September 29, 2020</sup>
**Prerequisites and essential questions** <!--- Required, please answer the following questions as honestly as possible by changing the "[ ]" to "[x]" or by marking it after creating the issue (easier), not marking a question counts as "No". -->
- [x] Is it a popular site?
- [x] Is the website older than 2 months? <!--- It is necessary for the website to be older than 2 months. -->
- [ ] Is the site locked to a specific country/region?
- [ ] Is the site a paid service? (e.g. Netflix, Hulu)
- [ ] Does the website feature NSFW content? (e.g. porn, etc...)
- [ ] Are you a donator/patron?
- [x] Do you acknowledge that coding presences is completely voluntary and may take time for your service to be added regardless of priority?
**What's your Discord username?** <!--- Optional, unless you are a donator/patron. Ex. Clyde#0000 --> jeijeisan#5105
**What's the name of the service?** <!--- Required, Ex. www.youtube.com | YouTube --> NetGalley | https://www.netgalley.com
**What should the Presence display?** <!--- Required, make sure to be as clear as possible on what should be added. --> Playing/Browsing NetGalley, Browsing Dashboard, Browsing Your Shelf, Finding a title/Searching, Browsing [book title + author]
**If possible, please provide a logo for the service (512x512 minimum)** <!--- Optional, it is recommended to upload the image here instead of using a 3rd-party host. -->
</div>
|
1.0
|
NetGalley | netgalley.com - ### Discussed in https://github.com/PreMiD/Presences/discussions/4495
<div type='discussions-op-text'>
<sup>Originally posted by **jeijeisan** September 29, 2020</sup>
**Prerequisites and essential questions** <!--- Required, please answer the following questions as honestly as possible by changing the "[ ]" to "[x]" or by marking it after creating the issue (easier), not marking a question counts as "No". -->
- [x] Is it a popular site?
- [x] Is the website older than 2 months? <!--- It is necessary for the website to be older than 2 months. -->
- [ ] Is the site locked to a specific country/region?
- [ ] Is the site a paid service? (e.g. Netflix, Hulu)
- [ ] Does the website feature NSFW content? (e.g. porn, etc...)
- [ ] Are you a donator/patron?
- [x] Do you acknowledge that coding presences is completely voluntary and may take time for your service to be added regardless of priority?
**What's your Discord username?** <!--- Optional, unless you are a donator/patron. Ex. Clyde#0000 --> jeijeisan#5105
**What's the name of the service?** <!--- Required, Ex. www.youtube.com | YouTube --> NetGalley | https://www.netgalley.com
**What should the Presence display?** <!--- Required, make sure to be as clear as possible on what should be added. --> Playing/Browsing NetGalley, Browsing Dashboard, Browsing Your Shelf, Finding a title/Searching, Browsing [book title + author]
**If possible, please provide a logo for the service (512x512 minimum)** <!--- Optional, it is recommended to upload the image here instead of using a 3rd-party host. -->
</div>
|
non_process
|
netgalley netgalley com discussed in originally posted by jeijeisan september prerequisites and essential questions is it a popular site is the website older than months is the site locked to a specific country region is the site a paid service e g netflix hulu does the website feature nsfw content e g porn etc are you a donator patron do you acknowledge that coding presences is completely voluntary and may take time for your service to be added regardless of priority what s your discord username jeijeisan what s the name of the service netgalley what should the presence display playing browsing netgalley browsing dashboard browsing your shelf finding a title searching browsing if possible please provide a logo for the service minimum
| 0
|
323,350
| 23,943,174,241
|
IssuesEvent
|
2022-09-12 03:20:40
|
felangel/bloc
|
https://api.github.com/repos/felangel/bloc
|
closed
|
docs: weather app tutorial
|
good first issue example documentation
|
The weather app tutorial has an issue with the API link used in the example. It gives a 404 page-not-found error: [metaweather.com](https://www.metaweather.com/). Please help update the tutorial.
|
1.0
|
docs: weather app tutorial - The weather app tutorial has an issue with the API link used in the example. It gives a 404 page-not-found error: [metaweather.com](https://www.metaweather.com/). Please help update the tutorial.
|
non_process
|
docs weather app tutorial the weather app tutorial has an issue with the api link used in the example it gives a page not found error please help update the tutorial
| 0
|
672,892
| 22,844,029,510
|
IssuesEvent
|
2022-07-13 02:48:49
|
Elice-SW-2-Team14/Animal-Hospital
|
https://api.github.com/repos/Elice-SW-2-Team14/Animal-Hospital
|
opened
|
[FE] Add keyword feature to the hospital info page
|
Feature high-priority Frontend
|
## Feature description
A feature where typing a keyword and pressing `Enter` adds it like a hashtag.
## Completion criteria
- Add keyword feature to the hospital info page
## Related backlog
[[FE] My hospital info page]-[Main component]-[Keyword]
## Estimated time
3h
|
1.0
|
[FE] Add keyword feature to the hospital info page - ## Feature description
A feature where typing a keyword and pressing `Enter` adds it like a hashtag.
## Completion criteria
- Add keyword feature to the hospital info page
## Related backlog
[[FE] My hospital info page]-[Main component]-[Keyword]
## Estimated time
3h
|
non_process
|
add keyword feature to the hospital info page feature description a feature where typing a keyword and pressing enter adds it like a hashtag completion criteria add keyword feature to the hospital info page related backlog my hospital info page estimated time
| 0
|
18,854
| 24,768,991,023
|
IssuesEvent
|
2022-10-22 22:40:03
|
fertadeo/ISPC-2do-Cuat-Proyecto
|
https://api.github.com/repos/fertadeo/ISPC-2do-Cuat-Proyecto
|
closed
|
#TK6.1 intuitive design for login/logout
|
good first issue in process
|
Development, in the nav, of easy and intuitive access to login and logout.
|
1.0
|
#TK6.1 intuitive design for login/logout - Development, in the nav, of easy and intuitive access to login and logout.
|
process
|
intuitive design for login logout development in the nav of easy and intuitive access to login and logout
| 1
|
13,216
| 22,308,797,051
|
IssuesEvent
|
2022-06-13 15:09:16
|
lodovicoazzini/unboxer
|
https://api.github.com/repos/lodovicoazzini/unboxer
|
closed
|
Export images for labelability
|
requirement
|
The images for the labelability should show two taggers at a time; the RQ is whether there exists a property that distinguishes the two clusters.
The clusters are chosen based on the number of misclassified elements in the cluster.
When choosing the images, we consider only images of misclassified entries. We select `n` images as the medoid of the cluster plus its `n - 1` closest neighbors.
|
1.0
|
Export images for labelability - The images for the labelability should show two taggers at a time; the RQ is whether there exists a property that distinguishes the two clusters.
The clusters are chosen based on the number of misclassified elements in the cluster.
When choosing the images, we consider only images of misclassified entries. We select `n` images as the medoid of the cluster plus its `n - 1` closest neighbors.
|
non_process
|
export images for labelability the images for the labelability should show two taggers at a time the rq is whether it exists a property such that it distinguishes the two clusters the clusters are chosen based on the number of misclassified elements in the cluster when choosing the images we consider only images of misclassified entries we select n images as the medoid of the cluster plus its n closest neighbors
| 0
|
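A minimal NumPy sketch of the selection rule described above (the medoid plus its n - 1 closest neighbours); the feature matrix X is assumed to hold only the misclassified entries of one cluster:

```python
import numpy as np

def pick_images(X: np.ndarray, n: int) -> np.ndarray:
    """Return the cluster medoid plus its n - 1 nearest neighbours."""
    # Pairwise Euclidean distances between all entries.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoid = D.sum(axis=1).argmin()  # smallest total distance to the rest
    order = D[medoid].argsort()      # medoid itself comes first (distance 0)
    return X[order[:n]]
```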
15,101
| 18,836,385,097
|
IssuesEvent
|
2021-11-11 01:46:03
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Implement basic XHTML support
|
TYPE: enhancement AREA: client SYSTEM: resource processing AREA: server STATE: Stale
|

Presumably, the problem stems from our unclosed `meta` tag (`<meta xmlns="http://www.w3.org/1999/xhtml" class="charset-hammerhead-shadow-ui" charset="iso-8859-1">`).
|
1.0
|
Implement basic XHTML support - 
Presumably, the problem stems from our unclosed `meta` tag (`<meta xmlns="http://www.w3.org/1999/xhtml" class="charset-hammerhead-shadow-ui" charset="iso-8859-1">`).
|
process
|
implement basic xhtml support presumably problem grows from non closed our meta tag
| 1
|
254,932
| 8,101,670,406
|
IssuesEvent
|
2018-08-12 16:16:50
|
saulgreenberg/Timelapse
|
https://api.github.com/repos/saulgreenberg/Timelapse
|
closed
|
One-step Undo / redo of changes to current file
|
enhancement low priority wontfix
|
Add an undo function that undoes any changes to the currently displayed file (including markers).
To do this:
-save the data whenever we show a file
-revert it on undo, while saving changed data so we can redo
Limitations:
- any changes to other records will disable undo e.g., with propagate, multiselections, etc.
|
1.0
|
One-step Undo / redo of changes to current file - Add an undo function that undoes any changes to the currently displayed file (including markers).
To do this:
-save the data whenever we show a file
-revert it on undo, while saving changed data so we can redo
Limitations:
- any changes to other records will disable undo e.g., with propagate, multiselections, etc.
|
non_process
|
one step undo redo of changes to current file add an undo function that undoes any changes to the currently displayed file including markers to do this save the data whenever we show a file revert it on undo while saving changed data so we can redo limitations any changes to other records will disable undo e g with propagate multiselections etc
| 0
|
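A minimal sketch of the one-step scheme this issue outlines: snapshot the record when a file is shown, swap it back on undo, and keep the replaced data so redo remains possible (class and attribute names are hypothetical):

```python
class OneStepUndo:
    """One-step undo/redo for edits to the currently displayed file."""

    def __init__(self, shown_data):
        self.current = shown_data  # saved whenever we show a file
        self.undo_data = None
        self.redo_data = None

    def edit(self, new_data):
        self.undo_data, self.current = self.current, new_data
        self.redo_data = None      # a fresh edit invalidates redo

    def undo(self):
        if self.undo_data is not None:
            self.redo_data, self.current = self.current, self.undo_data
            self.undo_data = None  # one step only

    def redo(self):
        if self.redo_data is not None:
            self.undo_data, self.current = self.current, self.redo_data
            self.redo_data = None
```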
155,779
| 24,516,526,393
|
IssuesEvent
|
2022-10-11 05:49:36
|
opensquare-network/bounties
|
https://api.github.com/repos/opensquare-network/bounties
|
closed
|
Show a notice for bounty import
|
UI priority:low design
|
Show a notice area which user can close. This notice won't appear for 2 weeks if closed. In this area, there are warnings:
- You are currently on [kusama|polkadot] network
- Switch to other network if you want to import bounties on it
- Only bounty curators can import bounties
|
1.0
|
Show a notice for bounty import - Show a notice area which user can close. This notice won't appear for 2 weeks if closed. In this area, there are warnings:
- You are currently on [kusama|polkadot] network
- Switch to other network if you want to import bounties on it
- Only bounty curators can import bounties
|
non_process
|
show a notice for bounty import show a notice area which user can close this notice won t appear for weeks if closed in this area there are warnings you are currently on network switch to other network if you want to import bounties on it only bounty curators can import bounties
| 0
|
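A minimal sketch of the dismissal rule ("won't appear for 2 weeks if closed"), with a plain dict standing in for whatever client-side store the UI actually uses:

```python
import time

TWO_WEEKS = 14 * 24 * 60 * 60  # seconds

def should_show_notice(store: dict) -> bool:
    """Show the notice unless it was closed less than two weeks ago."""
    closed_at = store.get("notice_closed_at")
    return closed_at is None or time.time() - closed_at >= TWO_WEEKS

def close_notice(store: dict) -> None:
    store["notice_closed_at"] = time.time()
```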
8,091
| 11,269,751,348
|
IssuesEvent
|
2020-01-14 09:33:25
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
opened
|
Introspection: autoincrement not recognized on ID field
|
process/candidate topic: introspection
|
I have this SQL schema:
```sql
CREATE TABLE posts (
id SERIAL PRIMARY KEY,
title character varying(256) NOT NULL,
content text,
published boolean NOT NULL DEFAULT false,
"authorId" integer REFERENCES users(id)
);
CREATE UNIQUE INDEX posts_pkey ON posts(id int4_ops);
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name character varying(256),
email character varying(256) NOT NULL UNIQUE
);
CREATE UNIQUE INDEX users_pkey ON users(id int4_ops);
CREATE UNIQUE INDEX users_email_key ON users(email text_ops);
```
The resulting Prisma schema is:
```prisma
model posts {
authorId users?
content String?
id Int @id
published Boolean @default(false)
title String
}
model users {
email String @unique
id Int @id
name String?
postses posts[]
}
```
As the `id` fields on `posts` and `users` are annotated with `SERIAL`, should the introspection add `@default(autoincrement())` to the respective fields in the Prisma schema?
|
1.0
|
Introspection: autoincrement not recognized on ID field - I have this SQL schema:
```sql
CREATE TABLE posts (
id SERIAL PRIMARY KEY,
title character varying(256) NOT NULL,
content text,
published boolean NOT NULL DEFAULT false,
"authorId" integer REFERENCES users(id)
);
CREATE UNIQUE INDEX posts_pkey ON posts(id int4_ops);
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name character varying(256),
email character varying(256) NOT NULL UNIQUE
);
CREATE UNIQUE INDEX users_pkey ON users(id int4_ops);
CREATE UNIQUE INDEX users_email_key ON users(email text_ops);
```
The resulting Prisma schema is:
```prisma
model posts {
authorId users?
content String?
id Int @id
published Boolean @default(false)
title String
}
model users {
email String @unique
id Int @id
name String?
postses posts[]
}
```
As the `id` fields on `posts` and `users` are annotated with `SERIAL`, should the introspection add `@default(autoincrement())` to the respective fields in the Prisma schema?
|
process
|
introspection autoincrement not recognized on id field i have this sql schema sql create table posts id serial primary key title character varying not null content text published boolean not null default false authorid integer references users id create unique index posts pkey on posts id ops create table users id serial primary key name character varying email character varying not null unique create unique index users pkey on users id ops create unique index users email key on users email text ops the resulting prisma schema is prisma model posts authorid users content string id int id published boolean default false title string model users email string unique id int id name string postses posts as the id fields on posts and users are annotated with serial should the introspection add default autoincrement to the respective fields in the prisma schema
| 1
|
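The `SERIAL` columns the introspection misses are visible in `information_schema`: Postgres expands `SERIAL` into an integer column with a `nextval(...)` default. A minimal detection sketch with psycopg2 (the connection string is hypothetical):

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # hypothetical DSN
cur = conn.cursor()
cur.execute("""
    SELECT table_name, column_name, column_default
    FROM information_schema.columns
    WHERE column_default LIKE 'nextval(%'
""")
for table, column, default in cur.fetchall():
    # e.g. posts.id: nextval('posts_id_seq'::regclass)
    print(f"{table}.{column}: {default} -> @default(autoincrement())")
```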
660,351
| 21,962,639,775
|
IssuesEvent
|
2022-05-24 17:07:24
|
near/near-explorer
|
https://api.github.com/repos/near/near-explorer
|
closed
|
Transaction State Not Updating When Switching Account
|
bug Priority 1
|
When I go to an account, I can see the list of transactions but when I type in a different account in the search bar, the transactions don't update until I refresh the page. I've attached a quick demo:
https://user-images.githubusercontent.com/57506486/169072218-f1be5173-f701-473d-9124-3f9800f3f20c.mov
|
1.0
|
Transaction State Not Updating When Switching Account - When I go to an account, I can see the list of transactions but when I type in a different account in the search bar, the transactions don't update until I refresh the page. I've attached a quick demo:
https://user-images.githubusercontent.com/57506486/169072218-f1be5173-f701-473d-9124-3f9800f3f20c.mov
|
non_process
|
transaction state not updating when switching account when i go to an account i can see the list of transactions but when i type in a different account in the search bar the transactions don t update until i refresh the page i ve attached a quick demo
| 0
|
5,659
| 3,264,602,327
|
IssuesEvent
|
2015-10-22 12:44:40
|
google/timesketch
|
https://api.github.com/repos/google/timesketch
|
closed
|
Cleanup HTML and make red banner less intrusive
|
code-health component/ui priority/P2
|
* There are unnecessary comments in all HTML templates. Let's remove them.
* Make the big red warning banner a bit less intrusive
* Use minified version of angular.
|
1.0
|
Cleanup HTML and make red banner less intrusive - * There are unnecessary comments in all HTML templates. Let's remove them.
* Make the big red warning banner a bit less intrusive
* Use minified version of angular.
|
non_process
|
cleanup html and make red banner less intrusive there are unnecessary comments in all html templates let s remove them make the big red warning banner a bit less intrusive use minified version of angular
| 0
|
285,147
| 8,755,161,264
|
IssuesEvent
|
2018-12-14 14:04:43
|
bio-tools/biotoolsRegistry
|
https://api.github.com/repos/bio-tools/biotoolsRegistry
|
closed
|
Support for biotoolsSchema 3.0.0 (was Reinstate support for biotoolsSchema 2.0.0 XML for I/O)
|
API fix verified high priority
|
We have the XSLT for import (transforming biotoolsSchema XML to framework-compatible format)
Jon to provide XSLT going the other way
Emil to plumb these in
|
1.0
|
Support for biotoolsSchema 3.0.0 (was Reinstate support for biotoolsSchema 2.0.0 XML for I/O) - We have the XSLT for import (transforming biotoolsSchema XML to framework-compatible format)
Jon to provide XSLT going the other way
Emil to plumb these in
|
non_process
|
support for biotoolsschema was reinstate support for biotoolsschema xml for i o we have the xslt for import transforming biotoolsschema xml to framework compatible format jon to provide xslt going the other way emil to plumb these in
| 0
|
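For reference, applying such an import XSLT is a short exercise with lxml (both file names below are hypothetical):

```python
from lxml import etree

# Hypothetical file names for the stylesheet and a tool record.
transform = etree.XSLT(etree.parse("biotools-to-internal.xslt"))
result = transform(etree.parse("tool.xml"))
print(etree.tostring(result, pretty_print=True).decode())
```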
2,877
| 5,833,008,326
|
IssuesEvent
|
2017-05-08 23:42:55
|
ncbo/bioportal-project
|
https://api.github.com/repos/ncbo/bioportal-project
|
closed
|
SCDO: failed to parse
|
in progress ontology processing problem
|
Received a complaint from a user on the support list that they submitted a new ontology ([Sickle Cell Disease Ontology](http://bioportal.bioontology.org/ontologies/SCDO)), and the status is "Uploaded, Error Rdf".
|
1.0
|
SCDO: failed to parse - Received a complaint from a user on the support list that they submitted a new ontology ([Sickle Cell Disease Ontology](http://bioportal.bioontology.org/ontologies/SCDO)), and the status is "Uploaded, Error Rdf".
|
process
|
scdo failed to parse received a complaint from a user on the support list that they submitted a new ontology and the status is uploaded error rdf
| 1
|
551,276
| 16,165,821,898
|
IssuesEvent
|
2021-05-01 13:23:11
|
sopra-fs21-group-15/client
|
https://api.github.com/repos/sopra-fs21-group-15/client
|
closed
|
Add a profile page
|
low priority task
|
Part of #15
When entering the room an API-call queries the user information which populates a form on the screen.
|
1.0
|
Add a profile page - Part of #15
When entering the room an API-call queries the user information which populates a form on the screen.
|
non_process
|
add a profile page part of when entering the room an api call queries the user information which populates a form on the screen
| 0
|
653,197
| 21,575,401,625
|
IssuesEvent
|
2022-05-02 13:16:20
|
horizon-efrei/HorizonWeb
|
https://api.github.com/repos/horizon-efrei/HorizonWeb
|
opened
|
User/association profile area
|
difficulty: easy priority: high status: approved type: feature target: frontend scope: clubs
|
### For which part of the infrastructure would you like to make a suggestion?
Website
### Your idea
Spaces where we could see user profiles and association pages. There we could find the main information about the user/association, such as their name/photo/role/type/description/commitments/activity...
### Other context
_No response_
|
1.0
|
User/association profile area - ### For which part of the infrastructure would you like to make a suggestion?
Website
### Your idea
Spaces where we could see user profiles and association pages. There we could find the main information about the user/association, such as their name/photo/role/type/description/commitments/activity...
### Other context
_No response_
|
non_process
|
user association profile area for which part of the infrastructure would you like to make a suggestion website your idea spaces where we could see user profiles and association pages there we could find the main information about the user association such as their name photo role type description commitments activity other context no response
| 0
|
19,834
| 26,228,601,445
|
IssuesEvent
|
2023-01-04 21:17:48
|
kitspace/kitspace-v2
|
https://api.github.com/repos/kitspace/kitspace-v2
|
closed
|
`dropbot-120-channel-pogo-pin-board.kicad` preview images are aligned to sides ()
|
bug processor
|
The top image is aligned to the left, and the bottom image to the right; they should be centered instead.
This is broken in v1 too.

|
1.0
|
`dropbot-120-channel-pogo-pin-board.kicad` preview images are aligned to sides () - The top image is aligned to the left, and the bottom image to the right; they should be centered instead.
This is broken in v1 too.

|
process
|
dropbot channel pogo pin board kicad preview images are aligned to sides the top image is aligned to the left and the bottom image to the right they should be centered instead this is broken in too
| 1
|
299,956
| 22,634,544,244
|
IssuesEvent
|
2022-06-30 17:34:35
|
jmirsteinban/AIII
|
https://api.github.com/repos/jmirsteinban/AIII
|
closed
|
Add Warranties Module to the Schedule
|
documentation
|
The Warranties module needs to be added to the schedule.
|
1.0
|
Add Warranties Module to the Schedule - The Warranties module needs to be added to the schedule.
|
non_process
|
add warranties module to the schedule the warranties module needs to be added to the schedule
| 0
|
7,222
| 10,349,664,473
|
IssuesEvent
|
2019-09-04 23:24:01
|
jupyter/nbconvert
|
https://api.github.com/repos/jupyter/nbconvert
|
closed
|
Optional store_history for ExecutePreprocessor.preprocess_cell
|
Preprocessor:Execute
|
I would appreciate having the option to turn off history storing when calling preprocess_cell for an ExecutePreprocessor. That is, accept an additional "store_history" argument with a default value of True in the preprocess_cell method, as well as the run_cell method. This "store_history" would need to be passed from preprocess_cell to run_cell which will then pass it on to the "execute" method of the KernelClient object. I would find this convenient when using nbconvert to execute a collection of notebooks for regression testing and don't have a need to store the history while doing so. I have tested out that this suits my purposes by overriding "run_cell" and passing store_history=False to the KernelClient's "execute" method.
|
1.0
|
Optional store_history for ExecutePreprocessor.preprocess_cell - I would appreciate having the option to turn off history storing when calling preprocess_cell for an ExecutePreprocessor. That is, accept an additional "store_history" argument with a default value of True in the preprocess_cell method, as well as the run_cell method. This "store_history" would need to be passed from preprocess_cell to run_cell which will then pass it on to the "execute" method of the KernelClient object. I would find this convenient when using nbconvert to execute a collection of notebooks for regression testing and don't have a need to store the history while doing so. I have tested out that this suits my purposes by overriding "run_cell" and passing store_history=False to the KernelClient's "execute" method.
|
process
|
optional store history for executepreprocessor preprocess cell i would appreciate having the option to turn off history storing when calling preprocess cell for an executepreprocessor that is accept an additional store history argument with a default value of true in the preprocess cell method as well as the run cell method this store history would need to be passed from preprocess cell to run cell which will then pass it on to the execute method of the kernelclient object i would find this convenient when using nbconvert to execute a collection of notebooks for regression testing and don t have a need to store the history while doing so i have tested out that this suits my purposes by overriding run cell and passing store history false to the kernelclient s execute method
| 1
|
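The `store_history` flag this issue wants plumbed through already exists on `KernelClient.execute`; a minimal sketch of calling it directly via jupyter_client:

```python
from jupyter_client.manager import start_new_kernel

km, kc = start_new_kernel(kernel_name="python3")
# store_history=False is the flag the issue asks nbconvert to pass through;
# it keeps the kernel's execution history untouched.
msg_id = kc.execute("x = 1 + 1", store_history=False)
reply = kc.get_shell_msg(timeout=30)
print(reply["content"]["status"])  # 'ok' on success
kc.stop_channels()
km.shutdown_kernel()
```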
7,478
| 10,569,936,179
|
IssuesEvent
|
2019-10-06 23:07:46
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Processing/OGR: do not use shapefile as format for temp input files
|
Feature Request Processing
|
Author Name: **Tobias Wendorff** (Tobias Wendorff)
Original Redmine Issue: [21279](https://issues.qgis.org/issues/21279)
Redmine category:processing/core
---
The algorithms from GDAL's toolbox for vector processing still use shapefiles as intermediate format. This results in corrupted fieldnames, since those are limited in the DBF format. Also, field contents might get corrupted if they're longer than 255 chars.
@ogr2ogr -f "ESRI Shapefile" C:/Users/tobwen/AppData/Local/Temp/processing_5fe343f547494d5e9fc31fe5cc38c868/680b663b94af46ab9516f2acd4141efd/OUTPUT.shp R:/demo.geojson@
Please use GPKG for this.
|
1.0
|
Processing/OGR: do not use shapefile as format for temp input files - Author Name: **Tobias Wendorff** (Tobias Wendorff)
Original Redmine Issue: [21279](https://issues.qgis.org/issues/21279)
Redmine category:processing/core
---
The algorithms from GDAL's toolbox for vector processing still use shapefiles as intermediate format. This results in corrupted fieldnames, since those are limited in the DBF format. Also, field contents might get corrupted if they're longer than 255 chars.
@ogr2ogr -f "ESRI Shapefile" C:/Users/tobwen/AppData/Local/Temp/processing_5fe343f547494d5e9fc31fe5cc38c868/680b663b94af46ab9516f2acd4141efd/OUTPUT.shp R:/demo.geojson@
Please use GPKG for this.
|
process
|
processing ogr do not use shapefile as format for temp input files author name tobias wendorff tobias wendorff original redmine issue redmine category processing core the algorithms from gdal s toolbox for vector processing still use shapefiles as intermediate format this results in corrupted fieldnames since those are limited in the dbf format also field contents might get corrupted if they re longer than chars f esri shapefile c users tobwen appdata local temp processing output shp r demo geojson please use gpkg for this
| 1
|
272,481
| 23,677,404,777
|
IssuesEvent
|
2022-08-28 09:45:04
|
Uuvana-Studios/longvinter-windows-client
|
https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client
|
closed
|
Lost all my equips/ inventory when logging out
|
Bug Not Tested
|
The Bug
Whenever I log out of Uuvana 1 by selecting 'disconnect' I lose all of my inventory and equips (apart from my ammo and Cash).
Steps to reproduce the behavior:
1. Go to Uuvana 1 server
2. Equip items/ Ammo/ have items in inventory
3. Exit via the disconnect button
4. Log back in via Play instead of Continue to view error
**Expected behavior**
You should see your inventory/ equips missing
**Desktop (please complete the following information):**
- OS:Windows 10
- Game Version 1.0.8.b
- Steam Version same?
**Additional context**
I was being shot at, at the time. Please can I get my stuff back too
|
1.0
|
Lost all my equips/ inventory when logging out - The Bug
Whenever I log out of Uuvana 1 by selecting 'disconnect' I lose all of my inventory and equips (apart from my ammo and Cash).
Steps to reproduce the behavior:
1. Go to Uuvana 1 server
2. Equip items/ Ammo/ have items in inventory
3. Exit via the disconnect button
4. Log back in via Play instead of Continue to view error
**Expected behavior**
You should see your inventory/ equips missing
**Desktop (please complete the following information):**
- OS:Windows 10
- Game Version 1.0.8.b
- Steam Version same?
**Additional context**
I was being shot at, at the time. Please can I get my stuff back too
|
non_process
|
lost all my equips inventory when logging out the bug whenever i log out of uuvana by selecting disconnect i lose all of my inventory and equips apart from my ammo and cash steps to reproduce the behavior go to uuvana server equip items ammo have items in inventory exit via the disconnect button log back in via play instead of continue to view error expected behavior you should see your inventory equips missing desktop please complete the following information os windows game version b steam version same additional context i was being shot at at the time please can i get my stuff back too
| 0
|
7,414
| 10,540,357,875
|
IssuesEvent
|
2019-10-02 08:12:56
|
codacy/codacy-meta
|
https://api.github.com/repos/codacy/codacy-meta
|
closed
|
Add reviewers automatically on PRs
|
Processes Tech
|
At the moment, when opening a PR, no default reviewers are assigned, making it hard to track pending pull requests.
|
1.0
|
Add reviewers automatically on PRs - At the moment, when opening a PR, no default reviewers are assigned, making it hard to track pending pull requests.
|
process
|
add reviewers automatically on prs at the moment when opening a pr no default reviewer are assigned making it hard to track pending pull requests
| 1
|
255,314
| 8,121,395,922
|
IssuesEvent
|
2018-08-16 08:02:29
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
m.youtube.com - video or audio doesn't play
|
browser-firefox-mobile priority-critical
|
<!-- @browser: Firefox Mobile 63.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.2; Mobile; rv:63.0) Gecko/63.0 Firefox/63.0 -->
<!-- @reported_with: web -->
**URL**: https://m.youtube.com/watch?v=OOOh6dBaZYA#
**Browser / Version**: Firefox Mobile 63.0
**Operating System**: Android 7.1.2
**Tested Another Browser**: No
**Problem type**: Video or audio doesn't play
**Description**: video doesn't play
**Steps to Reproduce**:
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
m.youtube.com - video or audio doesn't play - <!-- @browser: Firefox Mobile 63.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.2; Mobile; rv:63.0) Gecko/63.0 Firefox/63.0 -->
<!-- @reported_with: web -->
**URL**: https://m.youtube.com/watch?v=OOOh6dBaZYA#
**Browser / Version**: Firefox Mobile 63.0
**Operating System**: Android 7.1.2
**Tested Another Browser**: No
**Problem type**: Video or audio doesn't play
**Description**: video doesn't play
**Steps to Reproduce**:
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
m youtube com video or audio doesn t play url browser version firefox mobile operating system android tested another browser no problem type video or audio doesn t play description video doesn t play steps to reproduce from with ❤️
| 0
|
6,874
| 10,012,427,778
|
IssuesEvent
|
2019-07-15 13:11:12
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Incorrect script processing
|
SYSTEM: resource processing TYPE: bug health-monitor
|
Origin script:
```js
d[f]=async function(){await(x={qwe:123},y(x))}
```
Processed script:
```js
__set$(d,f,async function(){await x={qwe:123},y(x);})
```
|
1.0
|
Incorrect script processing - Origin script:
```js
d[f]=async function(){await(x={qwe:123},y(x))}
```
Processed script:
```js
__set$(d,f,async function(){await x={qwe:123},y(x);})
```
|
process
|
incorrect script processing origin script js d async function await x qwe y x processed script js set d f async function await x qwe y x
| 1
|
18,723
| 24,611,449,273
|
IssuesEvent
|
2022-10-14 22:04:55
|
GoogleCloudPlatform/cloud-ops-sandbox
|
https://api.github.com/repos/GoogleCloudPlatform/cloud-ops-sandbox
|
closed
|
chore: Cleanup repo from all artifacts of the microservice demo application
|
priority: p1 type: process
|
Remove all artifacts of the Hipster shop from the repo.
Make the monitoring Terraform module the only [Terraform configuration](https://www.terraform.io/language#about-the-terraform-language) in the repo.
*NOTE:* Completing this task will break the repo.
|
1.0
|
chore: Cleanup repo from all artifacts of the microservice demo application - Remove all artifacts of the Hipster shop from the repo.
Make the monitoring Terraform module the only [Terraform configuration](https://www.terraform.io/language#about-the-terraform-language) in the repo.
*NOTE:* Completing this task will break the repo.
|
process
|
chore cleanup repo from all artifacts of the microservice demo application remove all artifacts of the hipster shop from the repo make the monitoring terraform module the only one in the repo note completing this task will break the repo
| 1
|
4,703
| 5,233,742,880
|
IssuesEvent
|
2017-01-30 13:52:02
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Making so_import broken due to issue with SO URIs
|
editors-discussion Infrastructure time unknown (external dependencies etc)
|
Header of the SO file (excerpt)
```
xmlns:so-xp="http://purl.obolibrary.org/obo/so-xp.obo#">
<owl:Ontology rdf:about="http://purl.obolibrary.org/obo/so-xp.obo.owl">
<owl:versionIRI rdf:resource="http://purl.obolibrary.org/obo/so-xp.obo/so-xp/releases/2015-11-24/so-xp.owl/so-xp.obo.owl"/>
```
The so-xp based URIs create an issue for our import of SO terms.
|
1.0
|
Making so_import broken due to issue with SO URIs - Header of the SO file (excerpt)
```
xmlns:so-xp="http://purl.obolibrary.org/obo/so-xp.obo#">
<owl:Ontology rdf:about="http://purl.obolibrary.org/obo/so-xp.obo.owl">
<owl:versionIRI rdf:resource="http://purl.obolibrary.org/obo/so-xp.obo/so-xp/releases/2015-11-24/so-xp.owl/so-xp.obo.owl"/>
```
The so-xp based URIs create an issue for our import of SO terms.
|
non_process
|
making so import broken due to issue with so uris header of the so file excerpt xmlns so xp owl ontology rdf about owl versioniri rdf resource the so xp based uris create an issue for our import of so terms
| 0
|
17,334
| 23,152,191,202
|
IssuesEvent
|
2022-07-29 09:24:16
|
Tencent/tdesign-miniprogram
|
https://api.github.com/repos/Tencent/tdesign-miniprogram
|
closed
|
[button] loading does not spin
|
processing
|
### tdesign-miniprogram version
0.17.0
### Reproduction link
_No response_
### Reproduction steps
_No response_
### Expected result
The button's loading indicator does not spin
### Actual result
The button's loading indicator spins normally
### Framework version
_No response_
### Browser version
_No response_
### OS version
_No response_
### Node version
_No response_
### Additional notes
_No response_
|
1.0
|
[button] loading does not spin - ### tdesign-miniprogram version
0.17.0
### Reproduction link
_No response_
### Reproduction steps
_No response_
### Expected result
The button's loading indicator does not spin
### Actual result
The button's loading indicator spins normally
### Framework version
_No response_
### Browser version
_No response_
### OS version
_No response_
### Node version
_No response_
### Additional notes
_No response_
|
process
|
loading does not spin tdesign miniprogram version reproduction link no response reproduction steps no response expected result the button s loading does not spin actual result the button s loading spins normally framework version no response browser version no response os version no response node version no response additional notes no response
| 1
|
67,614
| 9,082,108,190
|
IssuesEvent
|
2019-02-17 09:08:15
|
foundersandcoders/master-reference
|
https://api.github.com/repos/foundersandcoders/master-reference
|
opened
|
Move and update consensus decisions document
|
documentation
|
The [consensus decisions](https://github.com/foundersandcoders/master-reference/blob/master/cooperative-structures/consensus-decisions.md) document probably belongs in `london-programme`, not in the `master-reference`, as it relates to the governance of Founders and Coders CIC in London and not to the programmes we support overseas.
The document will also probably need to be updated after the AGM, depending on the results of [this proposal](https://github.com/foundersandcoders/london-programme/issues/868).
|
1.0
|
Move and update consensus decisions document - The [consensus decisions](https://github.com/foundersandcoders/master-reference/blob/master/cooperative-structures/consensus-decisions.md) document probably belongs in `london-programme`, not in the `master-reference`, as it relates to the governance of Founders and Coders CIC in London and not to the programmes we support overseas.
The document will also probably need to be updated after the AGM, depending on the results of [this proposal](https://github.com/foundersandcoders/london-programme/issues/868).
|
non_process
|
move and update consensus decisions document the document probably belongs in london programme not in the master reference as it relates to the governance of founders and coders cic in london and not to the programmes we support overseas the document will also probably need to be updated after the agm depending on the results of
| 0
|
16,237
| 9,321,045,016
|
IssuesEvent
|
2019-03-27 02:04:04
|
haskell/containers
|
https://api.github.com/repos/haskell/containers
|
opened
|
Improve isSubsetOf
|
performance
|
Currently, `isSubsetOf s1 s2` checks that `size s1 <= size s2`, and if so invokes `isSubsetOfX` which recursively splits `s2` along the structure of `s1` without further size checks. I doubt this is the right approach.
### Wrong side
Splitting a set takes logarithmic time and allocation, so I believe we should split the *smaller* set along the structure of the *larger* one whenever the sizes are different. This is most easily accomplished by splitting `s1` along the structure of `s2` instead. That changes the shape of the algorithm somewhat, but I think that should be okay.
### More size tests are probably better
A size check takes constant time and no allocation, and can save a split and potentially a lot of comparisons. So I think we should perform recursive size checks. Once we split `s1` at the root of `s2`, I think we should check that the left subtree of `s2` is no smaller than the left side of the split and that the right subtree of `s2` is no smaller than the right side of the split before proceeding recursively. For example, suppose we have `{1,2,3,4,5,6,7}` with a root of `4` and want to check if `{0,1,2,3,4,5,6}` is a subset. After the first split, we'll immediately see that it's not, because `{0,1,2,3}` has more elements than `{1,2,3}`.
|
True
|
Improve isSubsetOf - Currently, `isSubsetOf s1 s2` checks that `size s1 <= size s2`, and if so invokes `isSubsetOfX` which recursively splits `s2` along the structure of `s1` without further size checks. I doubt this is the right approach.
### Wrong side
Splitting a set takes logarithmic time and allocation, so I believe we should split the *smaller* set along the structure of the *larger* one whenever the sizes are different. This is most easily accomplished by splitting `s1` along the structure of `s2` instead. That changes the shape of the algorithm somewhat, but I think that should be okay.
### More size tests are probably better
A size check takes constant time and no allocation, and can save a split and potentially a lot of comparisons. So I think we should perform recursive size checks. Once we split `s1` at the root of `s2`, I think we should check that the left subtree of `s2` is no smaller than the left side of the split and that the right subtree of `s2` is no smaller than the right side of the split before proceeding recursively. For example, suppose we have `{1,2,3,4,5,6,7}` with a root of `4` and want to check if `{0,1,2,3,4,5,6}` is a subset. After the first split, we'll immediately see that it's not, because `{0,1,2,3}` has more elements than `{1,2,3}`.
|
non_process
|
improve issubsetof currently issubsetof checks that size size and if so invokes issubsetofx which recursively splits along the structure of without further size checks i doubt this is the right approach wrong side splitting a set takes logarithmic time and allocation so i believe we should split the smaller set along the structure of the larger one whenever the sizes are different this is most easily accomplished by splitting along the structure of instead that changes the shape of the algorithm somewhat but i think that should be okay more size tests are probably better a size check takes constant time and no allocation and can save a split and potentially a lot of comparisons so i think we should perform recursive size checks once we split at the root of i think we should check that the left subtree of is no smaller than the left side of the split and that the right subtree of is no smaller than the right side of the split before proceeding recursively for example suppose we have with a root of and want to check if is a subset after the first split we ll immediately see that it s not because has more elements than
| 0
|
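A minimal sketch of the proposal on sorted Python lists: split the candidate subset at the other side's root and fail early whenever a side is larger than the subtree it must fit into. On the example from the issue, the very first split rejects `{0,1,2,3,4,5,6}` against `{1,...,7}`:

```python
import bisect

def is_subset(xs, ys):
    """Size-pruned subset test on sorted, duplicate-free lists."""
    if len(xs) > len(ys):
        return False
    if not xs:
        return True
    mid = len(ys) // 2
    pivot = ys[mid]                  # stands in for the tree root
    i = bisect.bisect_left(xs, pivot)
    has_pivot = i < len(xs) and xs[i] == pivot
    left, right = xs[:i], xs[i + has_pivot:]
    # Recursive size checks: fail before comparing any further elements.
    if len(left) > mid or len(right) > len(ys) - mid - 1:
        return False
    return is_subset(left, ys[:mid]) and is_subset(right, ys[mid + 1:])

print(is_subset([0, 1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6, 7]))  # False
```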
37,811
| 15,378,147,703
|
IssuesEvent
|
2021-03-02 17:57:03
|
BCDevOps/developer-experience
|
https://api.github.com/repos/BCDevOps/developer-experience
|
opened
|
Install Cost Management Operators in GOLD
|
ops and shared services
|
**Describe the issue**
Platform Services Team would like to have cost management operators installed in the Silver cluster
https://www.redhat.com/en/blog/introducing-openshift-cost-management-human-readable-view-cloud-native-application-costs
**Additional context**
Cost Management service for Openshift will allow us to get more insight into application cost.
**Definition of done**
Install and enable the cost management parts on the Silver cluster as per the instructions here
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/html/getting_started_with_cost_management/assembly_adding_sources_cost#installing_cost_mgmt-operator
|
1.0
|
Install Cost Management Operators in GOLD - **Describe the issue**
Platform Services Team would like to have cost management operators installed in the Silver cluster
https://www.redhat.com/en/blog/introducing-openshift-cost-management-human-readable-view-cloud-native-application-costs
**Additional context**
Cost Management service for Openshift will allow us to get more insight into application cost.
**Definition of done**
Install and enable the cost management parts on the Silver cluster as per the instructions here
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/html/getting_started_with_cost_management/assembly_adding_sources_cost#installing_cost_mgmt-operator
|
non_process
|
install cost management operators in gold describe the issue platform services team would like to have cost management operators installed in silver cluster additional context cost management service for openshift will allow us to get more insight into application cost definition of done install and enable the cost management parts on the silver cluster as per the instructions here
| 0
|
4,188
| 7,135,392,411
|
IssuesEvent
|
2018-01-23 00:50:38
|
VeliovGroup/Meteor-Files
|
https://api.github.com/repos/VeliovGroup/Meteor-Files
|
closed
|
Error after upgrading to 1.9.6 "onAfterRemove" is read-only
|
Post Processing question
|
Hi! After upgrade to Meteor 1.6, I upgraded all packages (meteor and npm) but it seems that something got wrong with ostrio:files. The files collection was working just fine, this started after upgrade.
ostrio:files@1.9.6
METEOR@1.6.0.1
Server Issue
**Error log:**
> => Errors prevented startup:
>
> While building for web.browser:
> imports/api/Images/Images.js:1:94: imports/api/Images/Images.js:
> "onAfterRemove" is read-only
>
> While building for os.linux.x86_64:
> imports/api/Images/Images.js:1:94: imports/api/Images/Images.js:
> "onAfterRemove" is read-only
>
> => Your application has errors. Waiting for file change.
**Files Collection Class:**
```
import { Meteor } from 'meteor/meteor';
import { FilesCollection } from 'meteor/ostrio:files';
const Images = new FilesCollection({
debug: true,
throttle: false,
storagePath: () => `${Meteor.absolutePath}/uploads`,
downloadRoute: '/uploads',
collectionName: 'Images',
allowClientCode: false,
onbeforeunloadMessage() {
return 'Upload is still in progress! Upload will be aborted if you leave this page!';
},
...
...
...
onAfterRemove(cursor) {
import { onAfterRemove } from '../../modules/server/upload-files';
onAfterRemove(cursor);
},
...
...
...
});
export default Images;
```
|
1.0
|
Error after upgrading to 1.9.6 "onAfterRemove" is read-only - Hi! After upgrade to Meteor 1.6, I upgraded all packages (meteor and npm) but it seems that something got wrong with ostrio:files. The files collection was working just fine, this started after upgrade.
ostrio:files@1.9.6
METEOR@1.6.0.1
Server Issue
**Error log:**
> => Errors prevented startup:
>
> While building for web.browser:
> imports/api/Images/Images.js:1:94: imports/api/Images/Images.js:
> "onAfterRemove" is read-only
>
> While building for os.linux.x86_64:
> imports/api/Images/Images.js:1:94: imports/api/Images/Images.js:
> "onAfterRemove" is read-only
>
> => Your application has errors. Waiting for file change.
**Files Collection Class:**
```
import { Meteor } from 'meteor/meteor';
import { FilesCollection } from 'meteor/ostrio:files';
const Images = new FilesCollection({
debug: true,
throttle: false,
storagePath: () => `${Meteor.absolutePath}/uploads`,
downloadRoute: '/uploads',
collectionName: 'Images',
allowClientCode: false,
onbeforeunloadMessage() {
return 'Upload is still in progress! Upload will be aborted if you leave this page!';
},
...
...
...
onAfterRemove(cursor) {
import { onAfterRemove } from '../../modules/server/upload-files';
onAfterRemove(cursor);
},
...
...
...
});
export default Images;
```
|
process
|
error after upgrading to onafterremove is read only hi after upgrade to meteor i upgraded all packages meteor and npm but it seems that something got wrong with ostrio files the files collection was working just fine this started after upgrade ostrio files meteor server issue error log errors prevented startup while building for web browser imports api images images js imports api images images js onafterremove is read only while building for os linux imports api images images js imports api images images js onafterremove is read only your application has errors waiting for file change files collection class import meteor from meteor meteor import filescollection from meteor ostrio files const images new filescollection debug true throttle false storagepath meteor absolutepath uploads downloadroute uploads collectionname images allowclientcode false onbeforeunloadmessage return upload is still in progress upload will be aborted if you leave this page onafterremove cursor import onafterremove from modules server upload files onafterremove cursor export default images
| 1
|
417,015
| 28,109,289,039
|
IssuesEvent
|
2023-03-31 05:23:20
|
Shresht7/Scribe
|
https://api.github.com/repos/Shresht7/Scribe
|
closed
|
Add Documentation
|
documentation
|
- [x] Add Documentation
- [x] Update README
- [x] Documentation on creating Custom Nodes
- [x] Improve Tests
- [x] Add Examples
|
1.0
|
Add Documentation - - [x] Add Documentation
- [x] Update README
- [x] Documentation on creating Custom Nodes
- [x] Improve Tests
- [x] Add Examples
|
non_process
|
add documentation add documentation update readme documentation on creating custom nodes improve tests add examples
| 0
|
27,510
| 11,495,164,575
|
IssuesEvent
|
2020-02-12 03:51:59
|
alpersonalwebsite/react-state-local-storage
|
https://api.github.com/repos/alpersonalwebsite/react-state-local-storage
|
opened
|
CVE-2020-8116 (Medium) detected in dot-prop-4.2.0.tgz
|
security vulnerability
|
## CVE-2020-8116 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dot-prop-4.2.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/react-state-local-storage/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/react-state-local-storage/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.0.1.tgz (Root Library)
- optimize-css-assets-webpack-plugin-5.0.1.tgz
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-merge-rules-4.0.3.tgz
- postcss-selector-parser-3.1.1.tgz
- :x: **dot-prop-4.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alpersonalwebsite/react-state-local-storage/commit/80cca48ff53c9bb59814f56e6146af200b29eca8">80cca48ff53c9bb59814f56e6146af200b29eca8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
<p>Publish Date: 2020-02-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116>CVE-2020-8116</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116</a></p>
<p>Release Date: 2020-02-04</p>
<p>Fix Resolution: dot-prop - 5.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-8116 (Medium) detected in dot-prop-4.2.0.tgz - ## CVE-2020-8116 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dot-prop-4.2.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/react-state-local-storage/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/react-state-local-storage/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.0.1.tgz (Root Library)
- optimize-css-assets-webpack-plugin-5.0.1.tgz
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-merge-rules-4.0.3.tgz
- postcss-selector-parser-3.1.1.tgz
- :x: **dot-prop-4.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/alpersonalwebsite/react-state-local-storage/commit/80cca48ff53c9bb59814f56e6146af200b29eca8">80cca48ff53c9bb59814f56e6146af200b29eca8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
<p>Publish Date: 2020-02-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116>CVE-2020-8116</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116</a></p>
<p>Release Date: 2020-02-04</p>
<p>Fix Resolution: dot-prop - 5.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in dot prop tgz cve medium severity vulnerability vulnerable library dot prop tgz get set or delete a property from a nested object using a dot path library home page a href path to dependency file tmp ws scm react state local storage package json path to vulnerable library tmp ws scm react state local storage node modules dot prop package json dependency hierarchy react scripts tgz root library optimize css assets webpack plugin tgz cssnano tgz cssnano preset default tgz postcss merge rules tgz postcss selector parser tgz x dot prop tgz vulnerable library found in head commit a href vulnerability details prototype pollution vulnerability in dot prop npm package version and earlier allows an attacker to add arbitrary properties to javascript language constructs such as objects publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution dot prop step up your open source security game with whitesource
| 0
|
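The vulnerability class here is a dot-path setter that accepts `__proto__` or `constructor` as a path segment. A minimal Python sketch of the mitigating shape (plain dicts, so merely illustrative of the guard the patched dot-prop applies; all names are hypothetical):

```python
FORBIDDEN = {"__proto__", "prototype", "constructor"}

def set_path(obj: dict, path: str, value):
    """Dot-path setter that rejects prototype-polluting segments."""
    keys = path.split(".")
    if FORBIDDEN.intersection(keys):
        raise ValueError(f"disallowed path segment in {path!r}")
    for key in keys[:-1]:
        obj = obj.setdefault(key, {})
    obj[keys[-1]] = value

d = {}
set_path(d, "a.b.c", 1)              # {'a': {'b': {'c': 1}}}
try:
    set_path(d, "a.__proto__.x", 1)  # rejected
except ValueError as exc:
    print(exc)
```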
27,351
| 7,937,178,009
|
IssuesEvent
|
2018-07-09 12:01:32
|
pingcap/tikv
|
https://api.github.com/repos/pingcap/tikv
|
closed
|
make clippy reports lots of warning
|
C: Build
|
## clippy issue
**What version of Rust are you using?**
rustc 1.28.0-nightly (b907d9665 2018-06-13)
clippy 0.0.207
**What did you do?**
cargo clean && cargo clippy
**What did you expect to see?**
No warning should be reported.
**What did you see instead?**
```
warning: long literal lacking separators
--> src/util/codec/number.rs:20:24
|
20 | const SIGN_MARK: u64 = 0x8000000000000000;
| ^^^^^^^^^^^^^^^^^^ help: consider: `0x8000_0000_0000_0000`
|
= note: #[warn(unreadable_literal)] on by default
= help: for further information visit https://rust-lang-nursery.github.io/rust-clippy/v0.0.207/index.html#unreadable_literal
warning: module has the same name as its containing module
--> src/util/jemalloc.rs:15:1
|
15 | / mod jemalloc {
16 | | use libc::{self, c_char, c_void};
17 | | use std::{ptr, slice};
18 | |
... |
56 | | }
57 | | }
| |_^
|
= note: #[warn(module_inception)] on by default
= help: for further information visit https://rust-lang-nursery.github.io/rust-clippy/v0.0.207/index.html#module_inception
warning: module has the same name as its containing module
--> src/coprocessor/codec/chunk/mod.rs:14:1
|
14 | mod chunk;
| ^^^^^^^^^^
|
= help: for further information visit https://rust-lang-nursery.github.io/rust-clippy/v0.0.207/index.html#module_inception
```
And lots of other warnings.
@Hoverbear do you see the same result in your environment? If so, I can help to fix those warnings and add the clippy check in ci and circle-ci.
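For reference, a minimal sketch of the kind of fix clippy is asking for here (the `SIGN_MARK` constant is from the warning above; the module names are hypothetical): digit separators silence `unreadable_literal` without changing the value, and `module_inception` goes away once the inner module no longer repeats its parent's name.
```rust
// Clippy's `unreadable_literal` fix: group the hex digits with underscores.
const SIGN_MARK: u64 = 0x8000_0000_0000_0000; // was 0x8000000000000000

// Clippy's `module_inception` fix: give the inner module a distinct name
// (hypothetical rename; an `allow` attribute on the module also works).
mod jemalloc {
    pub mod imp {
        pub fn epoch() -> u64 { 0 } // placeholder for the real bindings
    }
}

fn main() {
    assert_eq!(SIGN_MARK, 1u64 << 63); // the separators don't change the value
    println!("epoch = {}", jemalloc::imp::epoch());
}
```
Applying the suggestions file-by-file should clear the warnings above without any behaviour change.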
|
1.0
|
make clippy reports lots of warning - ## clippy issue
**What version of Rust are you using?**
rustc 1.28.0-nightly (b907d9665 2018-06-13)
clippy 0.0.207
**What did you do?**
cargo clean && cargo clippy
**What did you expect to see?**
No warning should be reported.
**What did you see instead?**
```
warning: long literal lacking separators
--> src/util/codec/number.rs:20:24
|
20 | const SIGN_MARK: u64 = 0x8000000000000000;
| ^^^^^^^^^^^^^^^^^^ help: consider: `0x8000_0000_0000_0000`
|
= note: #[warn(unreadable_literal)] on by default
= help: for further information visit https://rust-lang-nursery.github.io/rust-clippy/v0.0.207/index.html#unreadable_literal
warning: module has the same name as its containing module
--> src/util/jemalloc.rs:15:1
|
15 | / mod jemalloc {
16 | | use libc::{self, c_char, c_void};
17 | | use std::{ptr, slice};
18 | |
... |
56 | | }
57 | | }
| |_^
|
= note: #[warn(module_inception)] on by default
= help: for further information visit https://rust-lang-nursery.github.io/rust-clippy/v0.0.207/index.html#module_inception
warning: module has the same name as its containing module
--> src/coprocessor/codec/chunk/mod.rs:14:1
|
14 | mod chunk;
| ^^^^^^^^^^
|
= help: for further information visit https://rust-lang-nursery.github.io/rust-clippy/v0.0.207/index.html#module_inception
```
And lots of other warnings.
@Hoverbear do you see the same result in your environment? If so, I can help to fix those warnings and add the clippy check in ci and circle-ci.
|
non_process
|
make clippy reports lots of warning clippy issue what version of rust are you using rustc nightly clippy what did you do cargo clean cargo clippy what did you expect to see no warning should be reported what did you see instead warning long literal lacking separators src util codec number rs const sign mark help consider note on by default help for further information visit warning module has the same name as its containing module src util jemalloc rs mod jemalloc use libc self c char c void use std ptr slice note on by default help for further information visit warning module has the same name as its containing module src coprocessor codec chunk mod rs mod chunk help for further information visit and lots of other warnings hoverbear is there a same result in your environment if so i can help to fix those warnings and add the clippy check in ci and circle ci
| 0
|
17,260
| 23,042,722,966
|
IssuesEvent
|
2022-07-23 12:02:38
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
ObjectDisposedException in System.Diagnostics.AsyncStreamReader.ReadBufferAsync
|
area-System.Diagnostics.Process no-recent-activity backlog-cleanup-candidate
|
### Description
We're in the process of switching our `net472` WPF application to `netcoreapp3.1`. So far, it's working great!
But we're receiving crash reports from our users who beta-test the .NET Core version. These exceptions are not blocking and we get them from the `TaskScheduler.UnobservedTaskException` event.
The stack traces are not very good, as **there's no user code inside them**. There are different flavors of it:
```
System.ObjectDisposedException:
at System.IO.FileStream.BeginRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginEndReadAsync>b__48_0 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.TaskFactory`1.FromAsyncTrim (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream.BeginEndReadAsync (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.ReadAsync (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream.ReadAsync (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.ReadAsync (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Diagnostics.AsyncStreamReader+<ReadBufferAsync>d__16.MoveNext (System.Diagnostics.Process, Version=4.2.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
```
```
System.ObjectDisposedException:
at System.IO.FileStream.ValidateReadWriteArgs (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.Read (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginReadInternal>b__43_0 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task`1.InnerInvoke (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task+<>c.<.cctor>b__274_0 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream.EndRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.EndRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginEndReadAsync>b__48_1 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.TaskFactory`1+FromAsyncTrimPromise`1.Complete (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.ConfiguredValueTaskAwaitable`1+ConfiguredValueTaskAwaiter.GetResult (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Diagnostics.AsyncStreamReader+<ReadBufferAsync>d__16.MoveNext (System.Diagnostics.Process, Version=4.2.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
```
```
System.ObjectDisposedException:
at System.Runtime.InteropServices.SafeHandle.DangerousAddRef (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.StubHelpers.StubHelpers.SafeHandleAddRef (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at Interop+Kernel32.ReadFile (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.ReadFileNative (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.ReadNative (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.ReadSpan (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.Read (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginReadInternal>b__43_0 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task`1.InnerInvoke (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream.EndRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.EndRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginEndReadAsync>b__48_1 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.TaskFactory`1+FromAsyncTrimPromise`1.Complete (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.ConfiguredValueTaskAwaitable`1+ConfiguredValueTaskAwaiter.GetResult (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Diagnostics.AsyncStreamReader+<ReadBufferAsync>d__16.MoveNext (System.Diagnostics.Process, Version=4.2.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
```
This one might be a different problem:
```
System.NotSupportedException:
at System.IO.FileStream.ReadSpan (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.Read (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginReadInternal>b__43_0 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task`1.InnerInvoke (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream.EndRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.EndRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginEndReadAsync>b__48_1 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.TaskFactory`1+FromAsyncTrimPromise`1.Complete (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Diagnostics.AsyncStreamReader+<ReadBufferAsync>d__16.MoveNext (System.Diagnostics.Process, Version=4.2.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
```
In any case, they all point to `System.Diagnostics.AsyncStreamReader`, which seems to be used only by `System.Diagnostics.Process`.
We don't have a clear repro yet but it seems it happens when starting external processes (like `explorer.exe`).
`ObjectDisposedException` does not seem to be an expected exception, as seen here:
https://github.com/dotnet/runtime/blob/5f96636b63bea718bcfa1355e17cb36693ff8bca/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/AsyncStreamReader.cs#L89-L117
As for the fact that the exception is not observed, I guess we never await `_readToBufferTask`, which seems to be referenced only here:
https://github.com/dotnet/runtime/blob/5f96636b63bea718bcfa1355e17cb36693ff8bca/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/AsyncStreamReader.cs#L256-L262
### Configuration
* x64
* .NET Core 3.1.9
* Windows 10.0.18362
### Regression?
* We don't have those crash reports with the .NET Framework 4.7.2 version of the same application
|
1.0
|
ObjectDisposedException in System.Diagnostics.AsyncStreamReader.ReadBufferAsync - ### Description
We're in the process of switching our `net472` WPF application to `netcoreapp3.1`. So far, it's working great!
But we're receiving crash reports from our users who beta-test the .NET Core version. These exceptions are not blocking and we get them from the `TaskScheduler.UnobservedTaskException` event.
The stack traces are not very good, as **there's no user code inside them**. There are different flavors of it:
```
System.ObjectDisposedException:
at System.IO.FileStream.BeginRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginEndReadAsync>b__48_0 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.TaskFactory`1.FromAsyncTrim (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream.BeginEndReadAsync (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.ReadAsync (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream.ReadAsync (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.ReadAsync (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Diagnostics.AsyncStreamReader+<ReadBufferAsync>d__16.MoveNext (System.Diagnostics.Process, Version=4.2.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
```
```
System.ObjectDisposedException:
at System.IO.FileStream.ValidateReadWriteArgs (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.Read (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginReadInternal>b__43_0 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task`1.InnerInvoke (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task+<>c.<.cctor>b__274_0 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream.EndRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.EndRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginEndReadAsync>b__48_1 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.TaskFactory`1+FromAsyncTrimPromise`1.Complete (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.ConfiguredValueTaskAwaitable`1+ConfiguredValueTaskAwaiter.GetResult (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Diagnostics.AsyncStreamReader+<ReadBufferAsync>d__16.MoveNext (System.Diagnostics.Process, Version=4.2.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
```
```
System.ObjectDisposedException:
at System.Runtime.InteropServices.SafeHandle.DangerousAddRef (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.StubHelpers.StubHelpers.SafeHandleAddRef (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at Interop+Kernel32.ReadFile (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.ReadFileNative (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.ReadNative (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.ReadSpan (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.Read (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginReadInternal>b__43_0 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task`1.InnerInvoke (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream.EndRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.EndRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginEndReadAsync>b__48_1 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.TaskFactory`1+FromAsyncTrimPromise`1.Complete (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.ConfiguredValueTaskAwaitable`1+ConfiguredValueTaskAwaiter.GetResult (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Diagnostics.AsyncStreamReader+<ReadBufferAsync>d__16.MoveNext (System.Diagnostics.Process, Version=4.2.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
```
This one might be a different problem:
```
System.NotSupportedException:
at System.IO.FileStream.ReadSpan (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.Read (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginReadInternal>b__43_0 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task`1.InnerInvoke (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream.EndRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.FileStream.EndRead (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.IO.Stream+<>c.<BeginEndReadAsync>b__48_1 (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Threading.Tasks.TaskFactory`1+FromAsyncTrimPromise`1.Complete (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
at System.Diagnostics.AsyncStreamReader+<ReadBufferAsync>d__16.MoveNext (System.Diagnostics.Process, Version=4.2.2.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
```
In any case, they all point to `System.Diagnostics.AsyncStreamReader`, which seems to be used only by `System.Diagnostics.Process`.
We don't have a clear repro yet but it seems it happens when starting external processes (like `explorer.exe`).
`ObjectDisposedException` does not seem to be an expected exception, as seen here:
https://github.com/dotnet/runtime/blob/5f96636b63bea718bcfa1355e17cb36693ff8bca/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/AsyncStreamReader.cs#L89-L117
As for the fact that the exception is not observed, I guess we never await `_readToBufferTask`, which seems to be referenced only here:
https://github.com/dotnet/runtime/blob/5f96636b63bea718bcfa1355e17cb36693ff8bca/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/AsyncStreamReader.cs#L256-L262
### Configuration
* x64
* .NET Core 3.1.9
* Windows 10.0.18362
### Regression?
* We don't have those crash reports with the .NET Framework 4.7.2 version of the same application
|
process
|
objectdisposedexception in system diagnostics asyncstreamreader readbufferasync description we re in the process of switching our wpf application to so far it s working great but we re receiving crash reports from our users which beta test the net core version these exceptions are not blocking and we get them from the taskscheduler unobservedtaskexception event the stack traces are not very good as there s no user code inside it there are different flavors of it system objectdisposedexception at system io filestream beginread system private corelib version culture neutral publickeytoken at system io stream c b system private corelib version culture neutral publickeytoken at system threading tasks taskfactory fromasynctrim system private corelib version culture neutral publickeytoken at system io stream beginendreadasync system private corelib version culture neutral publickeytoken at system io filestream readasync system private corelib version culture neutral publickeytoken at system io stream readasync system private corelib version culture neutral publickeytoken at system io filestream readasync system private corelib version culture neutral publickeytoken at system diagnostics asyncstreamreader d movenext system diagnostics process version culture neutral publickeytoken system objectdisposedexception at system io filestream validatereadwriteargs system private corelib version culture neutral publickeytoken at system io filestream read system private corelib version culture neutral publickeytoken at system io stream c b system private corelib version culture neutral publickeytoken at system threading tasks task innerinvoke system private corelib version culture neutral publickeytoken at system threading tasks task c b system private corelib version culture neutral publickeytoken at system threading executioncontext runfromthreadpooldispatchloop system private corelib version culture neutral publickeytoken at system runtime exceptionservices exceptiondispatchinfo throw system private corelib version culture neutral publickeytoken at system threading executioncontext runfromthreadpooldispatchloop system private corelib version culture neutral publickeytoken at system threading tasks task executewiththreadlocal system private corelib version culture neutral publickeytoken at system runtime exceptionservices exceptiondispatchinfo throw system private corelib version culture neutral publickeytoken at system runtime compilerservices taskawaiter throwfornonsuccess system private corelib version culture neutral publickeytoken at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification system private corelib version culture neutral publickeytoken at system io stream endread system private corelib version culture neutral publickeytoken at system io filestream endread system private corelib version culture neutral publickeytoken at system io stream c b system private corelib version culture neutral publickeytoken at system threading tasks taskfactory fromasynctrimpromise complete system private corelib version culture neutral publickeytoken at system runtime exceptionservices exceptiondispatchinfo throw system private corelib version culture neutral publickeytoken at system runtime compilerservices taskawaiter throwfornonsuccess system private corelib version culture neutral publickeytoken at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification system private corelib version culture neutral publickeytoken at system runtime compilerservices 
configuredvaluetaskawaitable configuredvaluetaskawaiter getresult system private corelib version culture neutral publickeytoken at system diagnostics asyncstreamreader d movenext system diagnostics process version culture neutral publickeytoken system objectdisposedexception at system runtime interopservices safehandle dangerousaddref system private corelib version culture neutral publickeytoken at system stubhelpers stubhelpers safehandleaddref system private corelib version culture neutral publickeytoken at interop readfile system private corelib version culture neutral publickeytoken at system io filestream readfilenative system private corelib version culture neutral publickeytoken at system io filestream readnative system private corelib version culture neutral publickeytoken at system io filestream readspan system private corelib version culture neutral publickeytoken at system io filestream read system private corelib version culture neutral publickeytoken at system io stream c b system private corelib version culture neutral publickeytoken at system threading tasks task innerinvoke system private corelib version culture neutral publickeytoken at system threading executioncontext runfromthreadpooldispatchloop system private corelib version culture neutral publickeytoken at system runtime exceptionservices exceptiondispatchinfo throw system private corelib version culture neutral publickeytoken at system threading tasks task executewiththreadlocal system private corelib version culture neutral publickeytoken at system runtime exceptionservices exceptiondispatchinfo throw system private corelib version culture neutral publickeytoken at system runtime compilerservices taskawaiter throwfornonsuccess system private corelib version culture neutral publickeytoken at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification system private corelib version culture neutral publickeytoken at system io stream endread system private corelib version culture neutral publickeytoken at system io filestream endread system private corelib version culture neutral publickeytoken at system io stream c b system private corelib version culture neutral publickeytoken at system threading tasks taskfactory fromasynctrimpromise complete system private corelib version culture neutral publickeytoken at system runtime exceptionservices exceptiondispatchinfo throw system private corelib version culture neutral publickeytoken at system runtime compilerservices taskawaiter throwfornonsuccess system private corelib version culture neutral publickeytoken at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification system private corelib version culture neutral publickeytoken at system runtime compilerservices configuredvaluetaskawaitable configuredvaluetaskawaiter getresult system private corelib version culture neutral publickeytoken at system diagnostics asyncstreamreader d movenext system diagnostics process version culture neutral publickeytoken this one might be a different problem system notsupportedexception at system io filestream readspan system private corelib version culture neutral publickeytoken at system io filestream read system private corelib version culture neutral publickeytoken at system io stream c b system private corelib version culture neutral publickeytoken at system threading tasks task innerinvoke system private corelib version culture neutral publickeytoken at system threading executioncontext runfromthreadpooldispatchloop system private 
corelib version culture neutral publickeytoken at system runtime exceptionservices exceptiondispatchinfo throw system private corelib version culture neutral publickeytoken at system threading tasks task executewiththreadlocal system private corelib version culture neutral publickeytoken at system runtime exceptionservices exceptiondispatchinfo throw system private corelib version culture neutral publickeytoken at system runtime compilerservices taskawaiter throwfornonsuccess system private corelib version culture neutral publickeytoken at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification system private corelib version culture neutral publickeytoken at system io stream endread system private corelib version culture neutral publickeytoken at system io filestream endread system private corelib version culture neutral publickeytoken at system io stream c b system private corelib version culture neutral publickeytoken at system threading tasks taskfactory fromasynctrimpromise complete system private corelib version culture neutral publickeytoken at system runtime exceptionservices exceptiondispatchinfo throw system private corelib version culture neutral publickeytoken at system runtime compilerservices taskawaiter throwfornonsuccess system private corelib version culture neutral publickeytoken at system diagnostics asyncstreamreader d movenext system diagnostics process version culture neutral publickeytoken in any case they all point out to system diagnostics asyncstreamreader which seems to be only used by system diagnostics process we don t have a clear repro yet but it seems it happens when starting external processes like explorer exe disposedexception does not seem to be an expected exception as seen here as for the fact the exception is not observed i guess we never wait for readtobuffertask which seems to be only here configuration net core windows regression we don t have those crash reports with the net framework version of the same application
| 1
|
8,932
| 12,041,498,371
|
IssuesEvent
|
2020-04-14 08:56:45
|
nkumar115/Data-Science
|
https://api.github.com/repos/nkumar115/Data-Science
|
opened
|
Sparse Matrices For Efficient Machine Learning
|
Data - PreProcessing
|
**Title**
Sparse matrices have lots of zero values; dense matrices do not. Details in the link below.
**Link**
https://dziganto.github.io/Sparse-Matrices-For-Efficient-Machine-Learning/
**Snapshot**
|
1.0
|
Sparse Matrices For Efficient Machine Learning - **Title**
Sparse matrices have lots of zero values; dense matrices do not. Details in the link below.
**Link**
https://dziganto.github.io/Sparse-Matrices-For-Efficient-Machine-Learning/
**Snapshot**
|
process
|
sparse matrices for efficient machine learning title sparse matrices have lots of zero values dense matrices do not in details link snapshot
| 1
|
520,476
| 15,086,737,736
|
IssuesEvent
|
2021-02-05 20:51:00
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
closed
|
Suggested changes to media widget
|
Display/Interface Enhancement Priority-High
|
I made some changes to the media widget while working on the taxon name page. Please go to test and look at stuff with media and let me know if you really dislike something!
For your entertainment, here are before and after:
Media on taxon page
Before | After
-- | --
 | 
(nothing is missing - one of the images isn't set up in test...)
Media on catalog record page
Before | After
-- | --
 | 
Media on Agent Page
Before | After
-- | --
 | 
Media on a Project Page
Before | After
-- | --
 | 
Media is also associated with transactions, so please look there too.
|
1.0
|
Suggested changes to media widget - I made some changes to the media widget while working on the taxon name page. Please go to test and look at stuff with media and let me know if you really dislike something!
For your entertainment, here are before and after:
Media on taxon page
Before | After
-- | --
 | 
(nothing is missing - one of the images isn't set up in test...)
Media on catalog record page
Before | After
-- | --
 | 
Media on Agent Page
Before | After
-- | --
 | 
Media on a Project Page
Before | After
-- | --
 | 
Media is also associated with transactions, so please look there too.
|
non_process
|
suggested changes to media widget i made some changes to the media widget while working on the taxon name page please go to test and look at stuff with media and let me know if you really dislike something for your entertainment here are before and after media on taxon page before after nothing is missing one of the images isn t set up in test media on catalog record page before after media on agent page before after media on a project page before after media is also associated with transactions so please look there too
| 0
|
19,165
| 25,265,334,980
|
IssuesEvent
|
2022-11-16 03:50:49
|
openxla/stablehlo
|
https://api.github.com/repos/openxla/stablehlo
|
closed
|
Dedicated markdown doc for Reference
|
Process
|
### Request description
Can we have a dedicated markdown doc for Reference?
There is some info in the interpreter markdown, but I think having a distinct document for reference, as it is also an independent folder in the repo, could also help to clarify the interpreter domain vs the reference impl.
|
1.0
|
Dedicated markdown doc for Reference - ### Request description
Can we have a dedicated markdown doc for Reference?
There is some info in the interpreter markdown, but I think having a distinct document for reference, as it is also an independent folder in the repo, could also help to clarify the interpreter domain vs the reference impl.
|
process
|
dedicated markdown doc for reference request description can we have a dedicated markdown doc for reference there are some info in the interpreter markdown but i think having a distinct document for reference as it is also an indipendent folder in the repo it could also help to clarify the interpreter domain vs the reference impl
| 1
|
15,172
| 18,944,749,395
|
IssuesEvent
|
2021-11-18 08:58:53
|
damb/scdetect
|
https://api.github.com/repos/damb/scdetect
|
closed
|
Sensor orientation mismatch
|
enhancement processing
|
The sensor orientation of the template waveform data and the data to be matched might differ. Therefore, always check for the sensor orientation (from the inventory metadata) of the stream to be matched and rotate the template waveform if required (@jfclinton).
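A minimal sketch of the rotation step being described, under the assumption that it reduces to a plane rotation of the two horizontal components by the orientation difference (illustrative only, not scdetect's actual implementation):
```rust
// Rotate a pair of horizontal components by `angle_deg`, e.g. the azimuth
// difference between the template's sensor orientation and the orientation
// reported by the inventory metadata for the stream to be matched.
fn rotate_horizontals(n: &[f64], e: &[f64], angle_deg: f64) -> (Vec<f64>, Vec<f64>) {
    let a = angle_deg.to_radians();
    let (cos_a, sin_a) = (a.cos(), a.sin());
    let n2 = n.iter().zip(e).map(|(&x, &y)| x * cos_a + y * sin_a).collect();
    let e2 = n.iter().zip(e).map(|(&x, &y)| -x * sin_a + y * cos_a).collect();
    (n2, e2)
}

fn main() {
    // At 90 degrees the two components swap (up to sign).
    let (n2, e2) = rotate_horizontals(&[1.0, 0.0], &[0.0, 1.0], 90.0);
    println!("n' = {:?}, e' = {:?}", n2, e2);
}
```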
|
1.0
|
Sensor orientation mismatch - The sensor orientation of the template waveform data and the data to be matched might differ. Therefore, always check for the sensor orientation (from the inventory metadata) of the stream to be matched and rotate the template waveform if required (@jfclinton).
|
process
|
sensor orientation mismatch the sensor orientation of the template waveform data and the data to be matched might differ therefore always check for the sensor orientation from the inventory metadata of the stream to be matched and rotate the template waveform if required jfclinton
| 1
|
109,626
| 23,800,965,496
|
IssuesEvent
|
2022-09-03 09:25:01
|
Toma400/The_Isle_of_Ansur
|
https://api.github.com/repos/Toma400/The_Isle_of_Ansur
|
opened
|
Better "prioritised" system for panoramas and menu sounds & Mod Loading Order
|
feature suggestion code improvement
|
Having the possibility to prioritise panoramas and sounds is cool, as they will overwrite vanilla, but...
They will not overwrite other mods.
To explain it a bit in detail: if more than one mod uses the "PR%_" system, they will all shuffle through their data, so more than one mod can be prioritised. It's okay if you just want to overwrite vanilla, but not if you want to rule over all possible mods.
The solution for this can be kinda simple, kinda not: a mechanic known from Morrowind, Mod Loading Order (later: MLO).
This would ensure that if your mod loading order is correct, the first one will be picked up. I suggest having a new key, like "OV%_", to make prioritising and overwriting different behaviours, as it may be useful. If "OV%_" is used, the listed backgrounds/sounds will also check which mod loaded first and limit the list to only those entries.
Additional work you will need to do here is to make the `Mods Screen` able to change the MLO in-game, so mods can be listed in the correct order. An additional optional `info.json` value will not hurt either, and can be a good way of pre-determining the MLO status for your mod, without the player setting it manually.
```
{
"loading_number": -50
}
```
The example above would be run **before** any mod with a number greater than -50.
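A rough sketch of how that resolution could work (the `loading_number` key is from the example above; the struct and function names are hypothetical): sort mods by loading number, then limit "OV%_" assets to the first mod in the order that declares any.
```rust
// Hypothetical MLO resolution: lower `loading_number` loads first, and
// "OV%_" assets come only from the earliest-loading mod that has any.
struct ModEntry {
    loading_number: i32,       // e.g. {"loading_number": -50} in info.json
    ov_panoramas: Vec<String>, // assets declared with the "OV%_" prefix
}

fn resolve_ov_panoramas(mut mods: Vec<ModEntry>) -> Vec<String> {
    mods.sort_by_key(|m| m.loading_number); // apply the mod loading order
    mods.into_iter()
        .find(|m| !m.ov_panoramas.is_empty())
        .map(|m| m.ov_panoramas)
        .unwrap_or_default()
}

fn main() {
    let mods = vec![
        ModEntry { loading_number: 10,  ov_panoramas: vec!["OV%_city.png".into()] },
        ModEntry { loading_number: -50, ov_panoramas: vec!["OV%_forest.png".into()] },
    ];
    // The -50 mod loads first, so only its panoramas get shuffled.
    assert_eq!(resolve_ov_panoramas(mods), vec!["OV%_forest.png".to_string()]);
}
```
Since Rust's `sort_by_key` is stable, mods sharing the same number would keep their relative order, so ties fall back to the list order set in the `Mods Screen`.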
|
1.0
|
Better "prioritised" system for panoramas and menu sounds & Mod Loading Order - Having possibility to prioritise panoramas and sounds is cool, as they will overwrite vanilla, but...
They will not overwrite other mods.
To explain it a bit in detail: if more than one mod uses the "PR%_" system, they will all shuffle through their data, so more than one mod can be prioritised. It's okay if you just want to overwrite vanilla, but not if you want to rule over all possible mods.
The solution for this can be kinda simple, kinda not: a mechanic known from Morrowind, Mod Loading Order (later: MLO).
This would ensure that if your mod loading order is correct, the first one will be picked up. I suggest having a new key, like "OV%_", to make prioritising and overwriting different behaviours, as it may be useful. If "OV%_" is used, the listed backgrounds/sounds will also check which mod loaded first and limit the list to only those entries.
Additional work you will need to do here is to make the `Mods Screen` able to change the MLO in-game, so mods can be listed in the correct order. An additional optional `info.json` value will not hurt either, and can be a good way of pre-determining the MLO status for your mod, without the player setting it manually.
```
{
"loading_number": -50
}
```
The example above would be run **before** any mod with a number greater than -50.
|
non_process
|
better prioritised system for panoramas and menu sounds mod loading order having possibility to prioritise panoramas and sounds is cool as they will overwrite vanilla but they will not overwrite other mods to explain it a bit in detail if more than one mod use pr system they will both shuffle through their data so more than one mod can be prioritised it s okay if you just want to overwrite vanilla but not if you want to rule over all possible mods solution for this can be kinda simple kinda not mechanic known from morrowind mod loading order later mlo this would ensure that if your mod loading order is correct first one will be picked up i suggest having new key like ov to make prioritised and overwriting different behaviours as it may be useful if ov is used listed backgrounds sounds will also check which mod loaded first and limit the list only to those entries additional work you will need to do here is to make mods screen to be able to change mlo in game so they can be listed in correct order additional optional info json value will not hurt either and can be good way of pre determining mlo status for your mod without manually setting it by player loading number example above would be run before anyone with number greater than
| 0
|
11,441
| 14,261,519,528
|
IssuesEvent
|
2020-11-20 11:27:38
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
ServiceController._waitForStatusSignal is never signaled nor disposed
|
area-System.ServiceProcess untriaged
|
### Description
The System.ServiceProcess.ServiceController._waitForStatusSignal field refers to a ManualResetEvent that is constructed and possibly waited for, but never signaled nor disposed. It just causes useless work for the garbage collector and the kernel.
Constructed:
<https://github.com/dotnet/runtime/blob/cf258a14b70ad9069470a108f13765e0e5988f51/src/libraries/System.ServiceProcess.ServiceController/src/System/ServiceProcess/ServiceController.cs#L24>
Waited for:
<https://github.com/dotnet/runtime/blob/cf258a14b70ad9069470a108f13765e0e5988f51/src/libraries/System.ServiceProcess.ServiceController/src/System/ServiceProcess/ServiceController.cs#L907>
The field is private, and ServiceController is not a partial class, so there cannot be references in other files.
### Configuration
.NET v5.0.0-rtm.20519.4 on Windows, any architecture.
### Regression?
Not a regression from previous versions of .NET Core.
However, .NET Framework uses Thread.Sleep instead: <https://github.com/microsoft/referencesource/blob/f461f1986ca4027720656a0c77bede9963e20b7e/System.ServiceProcess/ServiceController.cs#L1272>
### Other information
I noticed this while looking at how <https://github.com/dotnet/runtime/issues/35284> could be implemented.
The useless ManualResetEvent was discussed in <https://github.com/dotnet/corefx/pull/627/files#r24084380> when it was first added in 2015. It was intended to be "scrapped soon".
|
1.0
|
ServiceController._waitForStatusSignal is never signaled nor disposed - ### Description
The System.ServiceProcess.ServiceController._waitForStatusSignal field refers to a ManualResetEvent that is constructed and possibly waited for, but never signaled nor disposed. It just causes useless work for the garbage collector and the kernel.
Constructed:
<https://github.com/dotnet/runtime/blob/cf258a14b70ad9069470a108f13765e0e5988f51/src/libraries/System.ServiceProcess.ServiceController/src/System/ServiceProcess/ServiceController.cs#L24>
Waited for:
<https://github.com/dotnet/runtime/blob/cf258a14b70ad9069470a108f13765e0e5988f51/src/libraries/System.ServiceProcess.ServiceController/src/System/ServiceProcess/ServiceController.cs#L907>
The field is private, and ServiceController is not a partial class, so there cannot be references in other files.
### Configuration
.NET v5.0.0-rtm.20519.4 on Windows, any architecture.
### Regression?
Not a regression from previous versions of .NET Core.
However, .NET Framework uses Thread.Sleep instead: <https://github.com/microsoft/referencesource/blob/f461f1986ca4027720656a0c77bede9963e20b7e/System.ServiceProcess/ServiceController.cs#L1272>
### Other information
I noticed this while looking at how <https://github.com/dotnet/runtime/issues/35284> could be implemented.
The useless ManualResetEvent was discussed in <https://github.com/dotnet/corefx/pull/627/files#r24084380> when it was first added in 2015. It was intended to be "scrapped soon".
|
process
|
servicecontroller waitforstatussignal is never signaled nor disposed description the system serviceprocess servicecontroller waitforstatussignal field refers to a manualresetevent that is constructed and possibly waited for but never signaled nor disposed it just causes useless work for the garbage collector and the kernel constructed waited for the field is private and servicecontroller is not a partial class so there cannot be references in other files configuration net rtm on windows any architecture regression not a regression from previous versions of net core however net framework uses thread sleep instead other information i noticed this while looking at how could be implemented the useless manualresetevent was discussed in when it was first added in it was intended to be scrapped soon
| 1
|
7,240
| 10,409,395,771
|
IssuesEvent
|
2019-09-13 08:40:04
|
Open-EO/openeo-api
|
https://api.github.com/repos/Open-EO/openeo-api
|
closed
|
share resources like process graphs, services or jobs publicly as a backend provider or user
|
extension job management process graph management result access service management standalone vote
|
It should be possible to define resources for public use, for easy propagation of tutorial and example material. Clients like the editor should then also be able to optionally visualize public or freely accessible resources such as example process graphs.
|
1.0
|
share resources like process graphs, services or jobs publicly as a backend provider or user - It should be possible to define resources for public use, for easy propagation of tutorial and example material. Clients like the editor should then also be able to optionally visualize public or freely accessible resources such as example process graphs.
|
process
|
share resources like process graphs services or jobs publicly as a backend provider or user it should be possible to define resources for public use for easy propagation of tutorial and example material clients like the editor should then also have the possibility to optionally visualize public or freely accessible resources such as example process graphs
| 1
|
7,245
| 10,411,630,435
|
IssuesEvent
|
2019-09-13 14:16:21
|
toggl/mobileapp
|
https://api.github.com/repos/toggl/mobileapp
|
closed
|
Fix nightly builds
|
critical process
|
Both the nightly AdHoc and AppStore builds are currently broken (see internal reports).
They need to be fixed.
|
1.0
|
Fix nightly builds - Both the nightly AdHoc and AppStore build are currently broken (see internal reports).
They need to be fixed.
|
process
|
fix nightly builds both the nightly adhoc and appstore build are currently broken see internal reports they need to be fixed
| 1
|
279,685
| 8,671,831,189
|
IssuesEvent
|
2018-11-29 20:16:55
|
alexgoodell/go-mdism
|
https://api.github.com/repos/alexgoodell/go-mdism
|
closed
|
make liver related death rate from HCC dependent on disease duration.
|
low-priority
|
Can leave for now, but it would be a nice memory feature: make the liver-related death rate from HCC dependent on disease duration.
|
1.0
|
make liver related death rate from HCC dependent on disease duration. - Can leave for now but would be nice memory feature: make liver related death rate from HCC dependent on disease duration.
|
non_process
|
make liver related death rate from hcc dependent on disease duration can leave for now but would be nice memory feature make liver related death rate from hcc dependent on disease duration
| 0
|
22,679
| 31,926,134,795
|
IssuesEvent
|
2023-09-19 02:00:09
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Tue, 19 Sep 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### V2CE: Video to Continuous Events Simulator
- **Authors:** Zhongyang Zhang, Shuyang Cui, Kaidong Chai, Haowen Yu, Subhasis Dasgupta, Upal Mahbub, Tauhidur Rahman
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2309.08891
- **Pdf link:** https://arxiv.org/pdf/2309.08891
- **Abstract**
Dynamic Vision Sensor (DVS)-based solutions have recently garnered significant interest across various computer vision tasks, offering notable benefits in terms of dynamic range, temporal resolution, and inference speed. However, as a relatively nascent vision sensor compared to Active Pixel Sensor (APS) devices such as RGB cameras, DVS suffers from a dearth of ample labeled datasets. Prior efforts to convert APS data into events often grapple with issues such as a considerable domain shift from real events, the absence of quantified validation, and layering problems within the time axis. In this paper, we present a novel method for video-to-events stream conversion from multiple perspectives, considering the specific characteristics of DVS. A series of carefully designed losses helps enhance the quality of generated event voxels significantly. We also propose a novel local dynamic-aware timestamp inference strategy to accurately recover event timestamps from event voxels in a continuous fashion and eliminate the temporal layering problem. Results from rigorous validation through quantified metrics at all stages of the pipeline establish our method unquestionably as the current state-of-the-art (SOTA).
### Chasing Day and Night: Towards Robust and Efficient All-Day Object Detection Guided by an Event Camera
- **Authors:** Jiahang Cao, Xu Zheng, Yuanhuiyi Lyu, Jiaxu Wang, Renjing Xu, Lin Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2309.09297
- **Pdf link:** https://arxiv.org/pdf/2309.09297
- **Abstract**
The ability to detect objects in all lighting (i.e., normal-, over-, and under-exposed) conditions is crucial for real-world applications, such as self-driving. Traditional RGB-based detectors often fail under such varying lighting conditions. Therefore, recent works utilize novel event cameras to supplement or guide the RGB modality; however, these methods typically adopt asymmetric network structures that rely predominantly on the RGB modality, resulting in limited robustness for all-day detection. In this paper, we propose EOLO, a novel object detection framework that achieves robust and efficient all-day detection by fusing both RGB and event modalities. Our EOLO framework is built based on a lightweight spiking neural network (SNN) to efficiently leverage the asynchronous property of events. Buttressed by it, we first introduce an Event Temporal Attention (ETA) module to learn the high temporal information from events while preserving crucial edge information. Secondly, as different modalities exhibit varying levels of importance under diverse lighting conditions, we propose a novel Symmetric RGB-Event Fusion (SREF) module to effectively fuse RGB-Event features without relying on a specific modality, thus ensuring a balanced and adaptive fusion for all-day detection. In addition, to compensate for the lack of paired RGB-Event datasets for all-day training and evaluation, we propose an event synthesis approach based on the randomized optical flow that allows for directly generating the event frame from a single exposure image. We further build two new datasets, E-MSCOCO and E-VOC based on the popular benchmarks MSCOCO and PASCAL VOC. Extensive experiments demonstrate that our EOLO outperforms the state-of-the-art detectors, e.g., RENet, by a substantial margin (+3.74% mAP50) in all lighting conditions. Our code and datasets will be available at https://vlislab22.github.io/EOLO/
### Learning Parallax for Stereo Event-based Motion Deblurring
- **Authors:** Mingyuan Lin, Chi Zhang, Chu He, Lei Yu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.09513
- **Pdf link:** https://arxiv.org/pdf/2309.09513
- **Abstract**
Due to the extremely low latency, events have been recently exploited to supplement lost information for motion deblurring. Existing approaches largely rely on the perfect pixel-wise alignment between intensity images and events, which is not always fulfilled in the real world. To tackle this problem, we propose a novel coarse-to-fine framework, named NETwork of Event-based motion Deblurring with STereo event and intensity cameras (St-EDNet), to recover high-quality images directly from the misaligned inputs, consisting of a single blurry image and the concurrent event streams. Specifically, the coarse spatial alignment of the blurry image and the event streams is first implemented with a cross-modal stereo matching module without the need for ground-truth depths. Then, a dual-feature embedding architecture is proposed to gradually build the fine bidirectional association of the coarsely aligned data and reconstruct the sequence of the latent sharp images. Furthermore, we build a new dataset with STereo Event and Intensity Cameras (StEIC), containing real-world events, intensity images, and dense disparity maps. Experiments on real-world datasets demonstrate the superiority of the proposed network over state-of-the-art methods.
## Keyword: event camera
### Chasing Day and Night: Towards Robust and Efficient All-Day Object Detection Guided by an Event Camera
- **Authors:** Jiahang Cao, Xu Zheng, Yuanhuiyi Lyu, Jiaxu Wang, Renjing Xu, Lin Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2309.09297
- **Pdf link:** https://arxiv.org/pdf/2309.09297
- **Abstract**
The ability to detect objects in all lighting (i.e., normal-, over-, and under-exposed) conditions is crucial for real-world applications, such as self-driving. Traditional RGB-based detectors often fail under such varying lighting conditions. Therefore, recent works utilize novel event cameras to supplement or guide the RGB modality; however, these methods typically adopt asymmetric network structures that rely predominantly on the RGB modality, resulting in limited robustness for all-day detection. In this paper, we propose EOLO, a novel object detection framework that achieves robust and efficient all-day detection by fusing both RGB and event modalities. Our EOLO framework is built based on a lightweight spiking neural network (SNN) to efficiently leverage the asynchronous property of events. Buttressed by it, we first introduce an Event Temporal Attention (ETA) module to learn the high temporal information from events while preserving crucial edge information. Secondly, as different modalities exhibit varying levels of importance under diverse lighting conditions, we propose a novel Symmetric RGB-Event Fusion (SREF) module to effectively fuse RGB-Event features without relying on a specific modality, thus ensuring a balanced and adaptive fusion for all-day detection. In addition, to compensate for the lack of paired RGB-Event datasets for all-day training and evaluation, we propose an event synthesis approach based on the randomized optical flow that allows for directly generating the event frame from a single exposure image. We further build two new datasets, E-MSCOCO and E-VOC, based on the popular benchmarks MSCOCO and PASCAL VOC. Extensive experiments demonstrate that our EOLO outperforms the state-of-the-art detectors, e.g., RENet, by a substantial margin (+3.74% mAP50) in all lighting conditions. Our code and datasets will be available at https://vlislab22.github.io/EOLO/
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation
- **Authors:** Cheng Chen, Juzheng Miao, Dufan Wu, Zhiling Yan, Sekeun Kim, Jiang Hu, Aoxiao Zhong, Zhengliang Liu, Lichao Sun, Xiang Li, Tianming Liu, Pheng-Ann Heng, Quanzheng Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.08842
- **Pdf link:** https://arxiv.org/pdf/2309.08842
- **Abstract**
The Segment Anything Model (SAM), a foundation model for general image segmentation, has demonstrated impressive zero-shot performance across numerous natural image segmentation tasks. However, SAM's performance significantly declines when applied to medical images, primarily due to the substantial disparity between natural and medical image domains. To effectively adapt SAM to medical images, it is important to incorporate critical third-dimensional information, i.e., volumetric or temporal knowledge, during fine-tuning. Simultaneously, we aim to harness SAM's pre-trained weights within its original 2D backbone to the fullest extent. In this paper, we introduce a modality-agnostic SAM adaptation framework, named MA-SAM, that is applicable to various volumetric and video medical data. Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments while preserving the majority of SAM's pre-trained weights. By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data. The effectiveness of our method has been comprehensively evaluated on four medical image segmentation tasks, by using 10 public datasets across CT, MRI, and surgical video data. Remarkably, without using any prompt, our method consistently outperforms various state-of-the-art 3D approaches, surpassing nnU-Net by 0.9%, 2.6%, and 9.9% in Dice for CT multi-organ segmentation, MRI prostate segmentation, and surgical scene segmentation respectively. Our model also demonstrates strong generalization, and excels in challenging tumor segmentation when prompts are used. Our code is available at: https://github.com/cchen-cc/MA-SAM.
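The core recipe, freezing the pre-trained 2D weights and training only small injected adapters that mix information along the third dimension, can be sketched in a few lines of PyTorch. The bottleneck width, the depth-wise 1-D convolution, and the tensor layout below are illustrative assumptions; the authors' exact adapter design is in their repository.

```python
import torch
import torch.nn as nn

class Adapter3D(nn.Module):
    """Bottleneck adapter mixing information along the depth/time axis of a
    volume whose slices were encoded independently by a frozen 2D backbone."""
    def __init__(self, dim: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        # 1-D conv across the third (volumetric/temporal) dimension.
        self.depth_conv = nn.Conv1d(bottleneck, bottleneck, kernel_size=3, padding=1)
        self.up = nn.Linear(bottleneck, dim)
        # Zero-init the up-projection so the adapter starts as an identity map.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, depth, tokens, dim) -- tokens from the frozen 2D encoder.
        b, d, n, c = x.shape
        h = self.down(x)                                 # (b, d, n, bottleneck)
        h = h.permute(0, 2, 3, 1).reshape(b * n, -1, d)  # (b*n, bottleneck, d)
        h = self.depth_conv(h).reshape(b, n, -1, d).permute(0, 3, 1, 2)
        return x + self.up(h)                            # residual connection

# Freeze the pretrained encoder; only the adapter's parameters are trainable.
encoder = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
for p in encoder.parameters():
    p.requires_grad = False
adapter = Adapter3D(256)

b, d, n, c = 2, 4, 49, 256                   # 4 slices of a volume, 49 tokens each
x = torch.randn(b, d, n, c)
with torch.no_grad():
    tokens = encoder(x.reshape(b * d, n, c)).reshape(b, d, n, c)  # frozen 2D pass
out = adapter(tokens)
print(out.shape, sum(p.numel() for p in adapter.parameters() if p.requires_grad))
```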
### Learning Parallax for Stereo Event-based Motion Deblurring
- **Authors:** Mingyuan Lin, Chi Zhang, Chu He, Lei Yu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.09513
- **Pdf link:** https://arxiv.org/pdf/2309.09513
- **Abstract**
Due to their extremely low latency, events have recently been exploited to supplement lost information for motion deblurring. Existing approaches largely rely on the perfect pixel-wise alignment between intensity images and events, which is not always fulfilled in the real world. To tackle this problem, we propose a novel coarse-to-fine framework, named NETwork of Event-based motion Deblurring with STereo event and intensity cameras (St-EDNet), to recover high-quality images directly from the misaligned inputs, consisting of a single blurry image and the concurrent event streams. Specifically, the coarse spatial alignment of the blurry image and the event streams is first implemented with a cross-modal stereo matching module without the need for ground-truth depths. Then, a dual-feature embedding architecture is proposed to gradually build the fine bidirectional association of the coarsely aligned data and reconstruct the sequence of the latent sharp images. Furthermore, we build a new dataset with STereo Event and Intensity Cameras (StEIC), containing real-world events, intensity images, and dense disparity maps. Experiments on real-world datasets demonstrate the superiority of the proposed network over state-of-the-art methods.
### Unified Frequency-Assisted Transformer Framework for Detecting and Grounding Multi-Modal Manipulation
- **Authors:** Huan Liu, Zichang Tan, Qiang Chen, Yunchao Wei, Yao Zhao, Jingdong Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.09667
- **Pdf link:** https://arxiv.org/pdf/2309.09667
- **Abstract**
Detecting and grounding multi-modal media manipulation (DGM^4) has become increasingly crucial due to the widespread dissemination of face forgery and text misinformation. In this paper, we present the Unified Frequency-Assisted transFormer framework, named UFAFormer, to address the DGM^4 problem. Unlike previous state-of-the-art methods that solely focus on the image (RGB) domain to describe visual forgery features, we additionally introduce the frequency domain as a complementary viewpoint. By leveraging the discrete wavelet transform, we decompose images into several frequency sub-bands, capturing rich face forgery artifacts. Then, our proposed frequency encoder, incorporating intra-band and inter-band self-attentions, explicitly aggregates forgery features within and across diverse sub-bands. Moreover, to address the semantic conflicts between image and frequency domains, the forgery-aware mutual module is developed to further enable the effective interaction of disparate image and frequency features, resulting in aligned and comprehensive visual forgery representations. Finally, based on visual and textual forgery features, we propose a unified decoder that comprises two symmetric cross-modal interaction modules responsible for gathering modality-specific forgery information, along with a fusing interaction module for aggregation of both modalities. The proposed unified decoder formulates our UFAFormer as a unified framework, ultimately simplifying the overall architecture and facilitating the optimization process. Experimental results on the DGM^4 dataset, containing several perturbations, demonstrate the superior performance of our framework compared to previous methods, setting a new benchmark in the field.
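The frequency-domain viewpoint starts from a plain 2-D discrete wavelet transform, which splits an image into one low-frequency approximation and three detail sub-bands. A minimal sketch with PyWavelets follows; the 'haar' wavelet and single decomposition level are arbitrary choices here, not necessarily the paper's.

```python
import numpy as np
import pywt

image = np.random.rand(256, 256).astype(np.float32)   # stand-in for a face crop

# One level of 2-D DWT: cA is the low-frequency approximation; cH/cV/cD are the
# horizontal/vertical/diagonal detail bands, where subtle forgery artifacts
# tend to concentrate.
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
print(cA.shape, cH.shape)  # (128, 128) (128, 128)
```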
### R2GenGPT: Radiology Report Generation with Frozen LLMs
- **Authors:** Zhanyu Wang, Lingqiao Liu, Lei Wang, Luping Zhou
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.09812
- **Pdf link:** https://arxiv.org/pdf/2309.09812
- **Abstract**
Large Language Models (LLMs) have consistently showcased remarkable generalization capabilities when applied to various language tasks. Nonetheless, harnessing the full potential of LLMs for Radiology Report Generation (R2Gen) still presents a challenge, stemming from the inherent disparity in modality between LLMs and the R2Gen task. To bridge this gap effectively, we propose R2GenGPT, a novel solution that aligns visual features with the word embedding space of LLMs using an efficient visual alignment module. This innovative approach empowers the previously static LLM to seamlessly integrate and process image information, marking a step forward in optimizing R2Gen performance. R2GenGPT offers the following benefits. First, it attains state-of-the-art (SOTA) performance by training only the lightweight visual alignment module while freezing all the parameters of the LLM. Second, it exhibits high training efficiency, as it requires the training of an exceptionally minimal number of parameters while achieving rapid convergence. By employing delta tuning, our model trains only 5M parameters (just 0.07% of the total parameter count) to achieve performance close to the SOTA levels. Our code is available at https://github.com/wang-zhanyu/R2GenGPT.
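The alignment mechanism described above, projecting visual features into the LLM's word-embedding space and training only that projector while the LLM stays frozen, is easy to sketch. All dimensions below (1024-d visual features, a 4096-d embedding space, a 32k vocabulary) are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

llm_embed = nn.Embedding(32000, 4096)        # stand-in for the LLM's token embeddings
for p in llm_embed.parameters():
    p.requires_grad = False                   # the LLM stays frozen

visual_proj = nn.Linear(1024, 4096)           # the only trainable piece in this sketch

visual_feats = torch.randn(2, 49, 1024)       # 49 patch features per image
visual_tokens = visual_proj(visual_feats)     # now live in the LLM's embedding space
prompt_tokens = llm_embed(torch.randint(0, 32000, (2, 16)))
inputs_embeds = torch.cat([visual_tokens, prompt_tokens], dim=1)  # fed to the LLM
print(inputs_embeds.shape)  # torch.Size([2, 65, 4096])
```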
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### UGC: Unified GAN Compression for Efficient Image-to-Image Translation
- **Authors:** Yuxi Ren, Jie Wu, Peng Zhang, Manlin Zhang, Xuefeng Xiao, Qian He, Rui Wang, Min Zheng, Xin Pan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.09310
- **Pdf link:** https://arxiv.org/pdf/2309.09310
- **Abstract**
Recent years have witnessed the prevailing progress of Generative Adversarial Networks (GANs) in image-to-image translation. However, the success of these GAN models hinges on ponderous computational costs and labor-expensive training data. Current efficient GAN learning techniques often fall into two orthogonal aspects: i) model slimming via reduced calculation costs; ii) data/label-efficient learning with fewer training data/labels. To combine the best of both worlds, we propose a new learning paradigm, Unified GAN Compression (UGC), with a unified optimization objective to seamlessly prompt the synergy of model-efficient and label-efficient learning. UGC sets up semi-supervised-driven network architecture search and adaptive online semi-supervised distillation stages sequentially, which formulates a heterogeneous mutual learning scheme to obtain an architecture-flexible, label-efficient, and performance-excellent model.
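One plausible reading of the online semi-supervised distillation stage is a loss in which unlabeled images only mimic the teacher generator while labeled ones also match the ground truth. The sketch below illustrates that idea with an L1 objective and an assumed mixing weight; it is not UGC's actual loss.

```python
import torch
import torch.nn.functional as F

def semi_supervised_distill_loss(student_out, teacher_out, target=None, alpha=0.5):
    """Unlabeled samples (target=None) only imitate the detached teacher output;
    labeled samples also match ground truth. alpha is an illustrative weight."""
    loss = F.l1_loss(student_out, teacher_out.detach())
    if target is not None:
        loss = alpha * loss + (1 - alpha) * F.l1_loss(student_out, target)
    return loss

s = torch.randn(2, 3, 64, 64, requires_grad=True)
t = torch.randn(2, 3, 64, 64)
print(semi_supervised_distill_loss(s, t).item())
```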
## Keyword: RAW
### Active Learning for Fine-Grained Sketch-Based Image Retrieval
- **Authors:** Himanshu Thakur, Soumitri Chattopadhyay
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.08743
- **Pdf link:** https://arxiv.org/pdf/2309.08743
- **Abstract**
The ability to retrieve a photo by mere free-hand sketching highlights the immense potential of Fine-grained sketch-based image retrieval (FG-SBIR). However, its rapid practical adoption, as well as scalability, is limited by the expense of acquiring faithful sketches for easily available photo counterparts. A solution to this problem is Active Learning, which could minimise the need for labeled sketches while maximising performance. Despite extensive studies in the field, there exists no work that utilises it for reducing sketching effort in FG-SBIR tasks. To this end, we propose a novel active learning sampling technique that drastically minimises the need for drawing photo sketches. Our proposed approach tackles the trade-off between uncertainty and diversity by utilising the relationship between an existing photo-sketch pair and a photo that does not yet have a sketch, and augmenting this relation with its intermediate representations. Since our approach relies only on the underlying data distribution, it is agnostic of the modelling approach and hence is applicable to other cross-modal instance-level retrieval tasks as well. With experimentation over two publicly available fine-grained SBIR datasets ChairV2 and ShoeV2, we validate our approach and reveal its superiority over adapted baselines.
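A generic version of the uncertainty/diversity trade-off can be written as a greedy sampler over photo embeddings: prefer photos far from existing photo-sketch pairs (an uncertainty proxy) and far from photos already selected (diversity). The scoring rule and weight below are illustrative, not the paper's exact criterion.

```python
import numpy as np

def select_photos_to_sketch(photo_emb, labeled_emb, k=5, lam=0.5):
    """Greedily pick k photos to sketch next. photo_emb: (N, D) unlabeled photos;
    labeled_emb: (M, D) photos that already have sketches; lam trades off
    uncertainty (distance to labeled set) against diversity (distance to picks)."""
    chosen = []
    for _ in range(k):
        d_lab = np.linalg.norm(photo_emb[:, None] - labeled_emb[None], axis=-1).min(1)
        if chosen:
            d_sel = np.linalg.norm(
                photo_emb[:, None] - photo_emb[chosen][None], axis=-1).min(1)
        else:
            d_sel = np.zeros(len(photo_emb))
        score = lam * d_lab + (1 - lam) * d_sel
        score[chosen] = -np.inf                 # never re-pick an item
        chosen.append(int(score.argmax()))
    return chosen

print(select_photos_to_sketch(np.random.rand(100, 64), np.random.rand(10, 64)))
```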
### Semantics-aware LiDAR-Only Pseudo Point Cloud Generation for 3D Object Detection
- **Authors:** Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.08932
- **Pdf link:** https://arxiv.org/pdf/2309.08932
- **Abstract**
Although LiDAR sensors are crucial for autonomous systems due to providing precise depth information, they struggle with capturing fine object details, especially at a distance, due to sparse and non-uniform data. Recent advances introduced pseudo-LiDAR, i.e., synthetic dense point clouds, using additional modalities such as cameras to enhance 3D object detection. We present a novel LiDAR-only framework that augments raw scans with denser pseudo point clouds by solely relying on LiDAR sensors and scene semantics, omitting the need for cameras. Our framework first utilizes a segmentation model to extract scene semantics from raw point clouds, and then employs a multi-modal domain translator to generate synthetic image segments and depth cues without real cameras. This yields a dense pseudo point cloud enriched with semantic information. We also introduce a new semantically guided projection method, which enhances detection performance by retaining only relevant pseudo points. We applied our framework to different advanced 3D object detection methods and reported performance gains of up to 2.9%. We also obtained comparable results on the KITTI 3D object detection dataset, compared to other state-of-the-art LiDAR-only detectors.
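The "semantically guided projection" boils down to keeping only pseudo points whose predicted class matters for detection. A minimal NumPy sketch follows; the class IDs are hypothetical.

```python
import numpy as np

# Hypothetical mapping from semantic class IDs to detection-relevant categories.
RELEVANT = {0: "car", 1: "pedestrian", 2: "cyclist"}

def filter_pseudo_points(points, labels):
    """points: (N, 3) synthetic xyz; labels: (N,) per-point semantic predictions.
    Returns only the pseudo points whose class is relevant for the detector."""
    keep = np.isin(labels, list(RELEVANT))
    return points[keep]

pts = np.random.rand(1000, 3)
lbl = np.random.randint(0, 10, size=1000)
print(filter_pseudo_points(pts, lbl).shape)
```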
### Tightening Classification Boundaries in Open Set Domain Adaptation through Unknown Exploitation
- **Authors:** Lucas Fernando Alvarenga e Silva, Nicu Sebe, Jurandy Almeida
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.08964
- **Pdf link:** https://arxiv.org/pdf/2309.08964
- **Abstract**
Convolutional Neural Networks (CNNs) have brought revolutionary advances to many research areas due to their capacity for learning from raw data. However, when those methods are applied to non-controllable environments, many different factors can degrade the model's expected performance, such as unlabeled datasets with different levels of domain shift and category shift. Particularly, when both issues occur at the same time, we tackle this challenging setup as the Open Set Domain Adaptation (OSDA) problem. In general, existing OSDA approaches focus their efforts only on aligning known classes or, if they already extract possible negative instances, use them as a new category learned with supervision during the course of training. We propose a novel way to improve OSDA approaches by extracting a high-confidence set of unknown instances and using it as a hard constraint to tighten the classification boundaries of OSDA methods. Specifically, we adopt a new loss constraint evaluated in three different ways: (1) directly with the pristine negative instances; (2) with randomly transformed negatives using data augmentation techniques; and (3) with synthetically generated negatives containing adversarial features. We assessed all approaches in an extensive set of experiments based on OVANet, where we could observe consistent improvements for two public benchmarks, the Office-31 and Office-Home datasets, yielding absolute gains of up to 1.3% for both Accuracy and H-Score on Office-31 and 5.8% for Accuracy and 4.7% for H-Score on Office-Home.
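One way to turn high-confidence unknowns into a hard constraint is to push their predictions toward maximum entropy over the known classes, so that no known-class boundary claims them. The KL-based sketch below illustrates this reading; the paper's actual loss terms may differ.

```python
import torch
import torch.nn.functional as F

def unknown_repulsion_loss(logits_unknown):
    """Push confidently-unknown instances toward a uniform distribution over the
    known classes, tightening each known-class boundary. One plausible reading
    of the 'hard constraint' above, not the paper's exact formulation."""
    log_p = F.log_softmax(logits_unknown, dim=1)
    k = logits_unknown.size(1)
    uniform = torch.full_like(log_p, 1.0 / k)
    return F.kl_div(log_p, uniform, reduction="batchmean")

print(unknown_repulsion_loss(torch.randn(8, 31)).item())  # 31 classes as in Office-31
```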
### LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models
- **Authors:** Kazuto Nakashima, Ryo Kurazume
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2309.09256
- **Pdf link:** https://arxiv.org/pdf/2309.09256
- **Abstract**
Generative modeling of 3D LiDAR data is an emerging task with promising applications for autonomous mobile robots, such as scalable simulation, scene manipulation, and sparse-to-dense completion of LiDAR point clouds. Existing approaches have shown the feasibility of image-based LiDAR data generation using deep generative models while still struggling with the fidelity of generated data and training instability. In this work, we present R2DM, a novel generative model for LiDAR data that can generate diverse and high-fidelity 3D scene point clouds based on the image representation of range and reflectance intensity. Our method is based on denoising diffusion probabilistic models (DDPMs), which have demonstrated impressive results among generative model frameworks and have progressed significantly in recent years. To effectively train DDPMs on the LiDAR domain, we first conduct an in-depth analysis regarding data representation, training objective, and spatial inductive bias. Based on our designed model R2DM, we also introduce a flexible LiDAR completion pipeline using the powerful properties of DDPMs. We demonstrate that our method outperforms the baselines on the generation task of KITTI-360 and KITTI-Raw datasets and the upsampling task of KITTI-360 datasets. Our code and pre-trained weights will be available at https://github.com/kazuto1011/r2dm.
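For readers unfamiliar with the training objective behind DDPMs, here is one epsilon-prediction training step on a batch of range/reflectance images. The tiny convolution stands in for the real denoiser, which would also be conditioned on the timestep t; the noise schedule below is the common linear one, not necessarily R2DM's.

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)             # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)     # cumulative signal retention

denoiser = nn.Conv2d(2, 2, 3, padding=1)          # 2 channels: range + reflectance
x0 = torch.randn(4, 2, 64, 1024)                  # batch of LiDAR range images
t = torch.randint(0, T, (4,))                     # random diffusion step per sample
noise = torch.randn_like(x0)
ab = alpha_bar[t].view(-1, 1, 1, 1)
x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * noise    # forward diffusion q(x_t | x_0)
loss = (denoiser(x_t) - noise).pow(2).mean()      # predict the injected noise
loss.backward()
print(loss.item())
```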
### Reconstructing Existing Levels through Level Inpainting
- **Authors:** Johor Jara Gonzalez, Mathew Guzdial
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2309.09472
- **Pdf link:** https://arxiv.org/pdf/2309.09472
- **Abstract**
Procedural Content Generation (PCG) and Procedural Content Generation via Machine Learning (PCGML) have been used in prior work for generating levels in various games. This paper introduces Content Augmentation and focuses on the subproblem of level inpainting, which involves reconstructing and extending video game levels. Drawing inspiration from image inpainting, we adapt two techniques from this domain to address our specific use case. We present two approaches for level inpainting: an Autoencoder and a U-net. Through a comprehensive case study, we demonstrate their superior performance compared to a baseline method and discuss their relative merits. Furthermore, we provide a practical demonstration of both approaches for the level inpainting task and offer insights into potential directions for future research.
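The autoencoder variant of level inpainting can be sketched as masked reconstruction: zero out a block of the tile-encoded level and penalize the reconstruction error inside the hole. The level size, channel count, and network below are made-up placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Tiny stand-in autoencoder over tile-channel level encodings.
ae = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 8, 3, padding=1),
)
level = torch.randn(1, 8, 14, 28)     # 8 tile channels, a 14x28 level
masked = level.clone()
masked[:, :, 4:10, 10:18] = 0.0       # the hole to inpaint
loss = (ae(masked) - level)[:, :, 4:10, 10:18].pow(2).mean()  # loss inside the hole
loss.backward()
print(loss.item())
```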
### DriveDreamer: Towards Real-world-driven World Models for Autonomous Driving
- **Authors:** Xiaofeng Wang, Zheng Zhu, Guan Huang, Xinze Chen, Jiwen Lu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.09777
- **Pdf link:** https://arxiv.org/pdf/2309.09777
- **Abstract**
World models, especially in autonomous driving, are trending and drawing extensive attention due to their capacity for comprehending driving environments. The established world model holds immense potential for the generation of high-quality driving videos, and driving policies for safe maneuvering. However, a critical limitation in relevant research lies in its predominant focus on gaming environments or simulated settings, thereby lacking the representation of real-world driving scenarios. Therefore, we introduce DriveDreamer, a pioneering world model entirely derived from real-world driving scenarios. Given that modeling the world in intricate driving scenes entails an overwhelming search space, we propose harnessing the powerful diffusion model to construct a comprehensive representation of the complex environment. Furthermore, we introduce a two-stage training pipeline. In the initial phase, DriveDreamer acquires a deep understanding of structured traffic constraints, while the subsequent stage equips it with the ability to anticipate future states. The proposed DriveDreamer is the first world model established from real-world driving scenarios. We instantiate DriveDreamer on the challenging nuScenes benchmark, and extensive experiments verify that DriveDreamer empowers precise, controllable video generation that faithfully captures the structural constraints of real-world traffic scenarios. Additionally, DriveDreamer enables the generation of realistic and reasonable driving policies, opening avenues for interaction and practical applications.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Tue, 19 Sep 23 - ## Keyword: events
### V2CE: Video to Continuous Events Simulator
- **Authors:** Zhongyang Zhang, Shuyang Cui, Kaidong Chai, Haowen Yu, Subhasis Dasgupta, Upal Mahbub, Tauhidur Rahman
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2309.08891
- **Pdf link:** https://arxiv.org/pdf/2309.08891
- **Abstract**
Dynamic Vision Sensor (DVS)-based solutions have recently garnered significant interest across various computer vision tasks, offering notable benefits in terms of dynamic range, temporal resolution, and inference speed. However, as a relatively nascent vision sensor compared to Active Pixel Sensor (APS) devices such as RGB cameras, DVS suffers from a dearth of ample labeled datasets. Prior efforts to convert APS data into events often grapple with issues such as a considerable domain shift from real events, the absence of quantified validation, and layering problems within the time axis. In this paper, we present a novel method for video-to-events stream conversion from multiple perspectives, considering the specific characteristics of DVS. A series of carefully designed losses helps enhance the quality of generated event voxels significantly. We also propose a novel local dynamic-aware timestamp inference strategy to accurately recover event timestamps from event voxels in a continuous fashion and eliminate the temporal layering problem. Results from rigorous validation through quantified metrics at all stages of the pipeline establish our method unquestionably as the current state-of-the-art (SOTA).
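As background for the APS-to-events conversion problem, the sketch below implements the naive baseline: emit an event wherever the log intensity changes by more than a contrast threshold between frames. V2CE is far more sophisticated (learned losses, continuous timestamp recovery); this only illustrates the underlying sensor model, and the threshold value is an arbitrary assumption.

```python
import numpy as np

def frames_to_events(frames, threshold=0.2):
    """Naive video-to-events conversion. frames: (T, H, W) intensities in [0, 1].
    Returns a list of (frame_index, x, y, polarity) tuples."""
    log_f = np.log(frames.astype(np.float32) + 1e-3)
    events = []
    ref = log_f[0].copy()                     # per-pixel reference log intensity
    for t in range(1, len(log_f)):
        diff = log_f[t] - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = log_f[t, y, x]        # update the reference at fired pixels
    return events

video = np.random.rand(5, 32, 32)
print(len(frames_to_events(video)))
```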
### Chasing Day and Night: Towards Robust and Efficient All-Day Object Detection Guided by an Event Camera
- **Authors:** Jiahang Cao, Xu Zheng, Yuanhuiyi Lyu, Jiaxu Wang, Renjing Xu, Lin Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2309.09297
- **Pdf link:** https://arxiv.org/pdf/2309.09297
- **Abstract**
The ability to detect objects in all lighting (i.e., normal-, over-, and under-exposed) conditions is crucial for real-world applications, such as self-driving. Traditional RGB-based detectors often fail under such varying lighting conditions. Therefore, recent works utilize novel event cameras to supplement or guide the RGB modality; however, these methods typically adopt asymmetric network structures that rely predominantly on the RGB modality, resulting in limited robustness for all-day detection. In this paper, we propose EOLO, a novel object detection framework that achieves robust and efficient all-day detection by fusing both RGB and event modalities. Our EOLO framework is built based on a lightweight spiking neural network (SNN) to efficiently leverage the asynchronous property of events. Buttressed by it, we first introduce an Event Temporal Attention (ETA) module to learn the high temporal information from events while preserving crucial edge information. Secondly, as different modalities exhibit varying levels of importance under diverse lighting conditions, we propose a novel Symmetric RGB-Event Fusion (SREF) module to effectively fuse RGB-Event features without relying on a specific modality, thus ensuring a balanced and adaptive fusion for all-day detection. In addition, to compensate for the lack of paired RGB-Event datasets for all-day training and evaluation, we propose an event synthesis approach based on the randomized optical flow that allows for directly generating the event frame from a single exposure image. We further build two new datasets, E-MSCOCO and E-VOC, based on the popular benchmarks MSCOCO and PASCAL VOC. Extensive experiments demonstrate that our EOLO outperforms the state-of-the-art detectors, e.g., RENet, by a substantial margin (+3.74% mAP50) in all lighting conditions. Our code and datasets will be available at https://vlislab22.github.io/EOLO/
### Learning Parallax for Stereo Event-based Motion Deblurring
- **Authors:** Mingyuan Lin, Chi Zhang, Chu He, Lei Yu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.09513
- **Pdf link:** https://arxiv.org/pdf/2309.09513
- **Abstract**
Due to their extremely low latency, events have recently been exploited to supplement lost information for motion deblurring. Existing approaches largely rely on the perfect pixel-wise alignment between intensity images and events, which is not always fulfilled in the real world. To tackle this problem, we propose a novel coarse-to-fine framework, named NETwork of Event-based motion Deblurring with STereo event and intensity cameras (St-EDNet), to recover high-quality images directly from the misaligned inputs, consisting of a single blurry image and the concurrent event streams. Specifically, the coarse spatial alignment of the blurry image and the event streams is first implemented with a cross-modal stereo matching module without the need for ground-truth depths. Then, a dual-feature embedding architecture is proposed to gradually build the fine bidirectional association of the coarsely aligned data and reconstruct the sequence of the latent sharp images. Furthermore, we build a new dataset with STereo Event and Intensity Cameras (StEIC), containing real-world events, intensity images, and dense disparity maps. Experiments on real-world datasets demonstrate the superiority of the proposed network over state-of-the-art methods.
## Keyword: event camera
### Chasing Day and Night: Towards Robust and Efficient All-Day Object Detection Guided by an Event Camera
- **Authors:** Jiahang Cao, Xu Zheng, Yuanhuiyi Lyu, Jiaxu Wang, Renjing Xu, Lin Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2309.09297
- **Pdf link:** https://arxiv.org/pdf/2309.09297
- **Abstract**
The ability to detect objects in all lighting (i.e., normal-, over-, and under-exposed) conditions is crucial for real-world applications, such as self-driving. Traditional RGB-based detectors often fail under such varying lighting conditions. Therefore, recent works utilize novel event cameras to supplement or guide the RGB modality; however, these methods typically adopt asymmetric network structures that rely predominantly on the RGB modality, resulting in limited robustness for all-day detection. In this paper, we propose EOLO, a novel object detection framework that achieves robust and efficient all-day detection by fusing both RGB and event modalities. Our EOLO framework is built based on a lightweight spiking neural network (SNN) to efficiently leverage the asynchronous property of events. Buttressed by it, we first introduce an Event Temporal Attention (ETA) module to learn the high temporal information from events while preserving crucial edge information. Secondly, as different modalities exhibit varying levels of importance under diverse lighting conditions, we propose a novel Symmetric RGB-Event Fusion (SREF) module to effectively fuse RGB-Event features without relying on a specific modality, thus ensuring a balanced and adaptive fusion for all-day detection. In addition, to compensate for the lack of paired RGB-Event datasets for all-day training and evaluation, we propose an event synthesis approach based on the randomized optical flow that allows for directly generating the event frame from a single exposure image. We further build two new datasets, E-MSCOCO and E-VOC, based on the popular benchmarks MSCOCO and PASCAL VOC. Extensive experiments demonstrate that our EOLO outperforms the state-of-the-art detectors, e.g., RENet, by a substantial margin (+3.74% mAP50) in all lighting conditions. Our code and datasets will be available at https://vlislab22.github.io/EOLO/
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation
- **Authors:** Cheng Chen, Juzheng Miao, Dufan Wu, Zhiling Yan, Sekeun Kim, Jiang Hu, Aoxiao Zhong, Zhengliang Liu, Lichao Sun, Xiang Li, Tianming Liu, Pheng-Ann Heng, Quanzheng Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.08842
- **Pdf link:** https://arxiv.org/pdf/2309.08842
- **Abstract**
The Segment Anything Model (SAM), a foundation model for general image segmentation, has demonstrated impressive zero-shot performance across numerous natural image segmentation tasks. However, SAM's performance significantly declines when applied to medical images, primarily due to the substantial disparity between natural and medical image domains. To effectively adapt SAM to medical images, it is important to incorporate critical third-dimensional information, i.e., volumetric or temporal knowledge, during fine-tuning. Simultaneously, we aim to harness SAM's pre-trained weights within its original 2D backbone to the fullest extent. In this paper, we introduce a modality-agnostic SAM adaptation framework, named MA-SAM, that is applicable to various volumetric and video medical data. Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments while preserving the majority of SAM's pre-trained weights. By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data. The effectiveness of our method has been comprehensively evaluated on four medical image segmentation tasks, by using 10 public datasets across CT, MRI, and surgical video data. Remarkably, without using any prompt, our method consistently outperforms various state-of-the-art 3D approaches, surpassing nnU-Net by 0.9%, 2.6%, and 9.9% in Dice for CT multi-organ segmentation, MRI prostate segmentation, and surgical scene segmentation respectively. Our model also demonstrates strong generalization, and excels in challenging tumor segmentation when prompts are used. Our code is available at: https://github.com/cchen-cc/MA-SAM.
### Learning Parallax for Stereo Event-based Motion Deblurring
- **Authors:** Mingyuan Lin, Chi Zhang, Chu He, Lei Yu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.09513
- **Pdf link:** https://arxiv.org/pdf/2309.09513
- **Abstract**
Due to their extremely low latency, events have recently been exploited to supplement lost information for motion deblurring. Existing approaches largely rely on the perfect pixel-wise alignment between intensity images and events, which is not always fulfilled in the real world. To tackle this problem, we propose a novel coarse-to-fine framework, named NETwork of Event-based motion Deblurring with STereo event and intensity cameras (St-EDNet), to recover high-quality images directly from the misaligned inputs, consisting of a single blurry image and the concurrent event streams. Specifically, the coarse spatial alignment of the blurry image and the event streams is first implemented with a cross-modal stereo matching module without the need for ground-truth depths. Then, a dual-feature embedding architecture is proposed to gradually build the fine bidirectional association of the coarsely aligned data and reconstruct the sequence of the latent sharp images. Furthermore, we build a new dataset with STereo Event and Intensity Cameras (StEIC), containing real-world events, intensity images, and dense disparity maps. Experiments on real-world datasets demonstrate the superiority of the proposed network over state-of-the-art methods.
### Unified Frequency-Assisted Transformer Framework for Detecting and Grounding Multi-Modal Manipulation
- **Authors:** Huan Liu, Zichang Tan, Qiang Chen, Yunchao Wei, Yao Zhao, Jingdong Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.09667
- **Pdf link:** https://arxiv.org/pdf/2309.09667
- **Abstract**
Detecting and grounding multi-modal media manipulation (DGM^4) has become increasingly crucial due to the widespread dissemination of face forgery and text misinformation. In this paper, we present the Unified Frequency-Assisted transFormer framework, named UFAFormer, to address the DGM^4 problem. Unlike previous state-of-the-art methods that solely focus on the image (RGB) domain to describe visual forgery features, we additionally introduce the frequency domain as a complementary viewpoint. By leveraging the discrete wavelet transform, we decompose images into several frequency sub-bands, capturing rich face forgery artifacts. Then, our proposed frequency encoder, incorporating intra-band and inter-band self-attentions, explicitly aggregates forgery features within and across diverse sub-bands. Moreover, to address the semantic conflicts between image and frequency domains, the forgery-aware mutual module is developed to further enable the effective interaction of disparate image and frequency features, resulting in aligned and comprehensive visual forgery representations. Finally, based on visual and textual forgery features, we propose a unified decoder that comprises two symmetric cross-modal interaction modules responsible for gathering modality-specific forgery information, along with a fusing interaction module for aggregation of both modalities. The proposed unified decoder formulates our UFAFormer as a unified framework, ultimately simplifying the overall architecture and facilitating the optimization process. Experimental results on the DGM^4 dataset, containing several perturbations, demonstrate the superior performance of our framework compared to previous methods, setting a new benchmark in the field.
### R2GenGPT: Radiology Report Generation with Frozen LLMs
- **Authors:** Zhanyu Wang, Lingqiao Liu, Lei Wang, Luping Zhou
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.09812
- **Pdf link:** https://arxiv.org/pdf/2309.09812
- **Abstract**
Large Language Models (LLMs) have consistently showcased remarkable generalization capabilities when applied to various language tasks. Nonetheless, harnessing the full potential of LLMs for Radiology Report Generation (R2Gen) still presents a challenge, stemming from the inherent disparity in modality between LLMs and the R2Gen task. To bridge this gap effectively, we propose R2GenGPT, a novel solution that aligns visual features with the word embedding space of LLMs using an efficient visual alignment module. This innovative approach empowers the previously static LLM to seamlessly integrate and process image information, marking a step forward in optimizing R2Gen performance. R2GenGPT offers the following benefits. First, it attains state-of-the-art (SOTA) performance by training only the lightweight visual alignment module while freezing all the parameters of the LLM. Second, it exhibits high training efficiency, as it requires the training of an exceptionally minimal number of parameters while achieving rapid convergence. By employing delta tuning, our model trains only 5M parameters (just 0.07% of the total parameter count) to achieve performance close to the SOTA levels. Our code is available at https://github.com/wang-zhanyu/R2GenGPT.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### UGC: Unified GAN Compression for Efficient Image-to-Image Translation
- **Authors:** Yuxi Ren, Jie Wu, Peng Zhang, Manlin Zhang, Xuefeng Xiao, Qian He, Rui Wang, Min Zheng, Xin Pan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.09310
- **Pdf link:** https://arxiv.org/pdf/2309.09310
- **Abstract**
Recent years have witnessed the prevailing progress of Generative Adversarial Networks (GANs) in image-to-image translation. However, the success of these GAN models hinges on ponderous computational costs and labor-expensive training data. Current efficient GAN learning techniques often fall into two orthogonal aspects: i) model slimming via reduced calculation costs; ii) data/label-efficient learning with fewer training data/labels. To combine the best of both worlds, we propose a new learning paradigm, Unified GAN Compression (UGC), with a unified optimization objective to seamlessly prompt the synergy of model-efficient and label-efficient learning. UGC sets up semi-supervised-driven network architecture search and adaptive online semi-supervised distillation stages sequentially, which formulates a heterogeneous mutual learning scheme to obtain an architecture-flexible, label-efficient, and performance-excellent model.
## Keyword: RAW
### Active Learning for Fine-Grained Sketch-Based Image Retrieval
- **Authors:** Himanshu Thakur, Soumitri Chattopadhyay
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.08743
- **Pdf link:** https://arxiv.org/pdf/2309.08743
- **Abstract**
The ability to retrieve a photo by mere free-hand sketching highlights the immense potential of Fine-grained sketch-based image retrieval (FG-SBIR). However, its rapid practical adoption, as well as scalability, is limited by the expense of acquiring faithful sketches for easily available photo counterparts. A solution to this problem is Active Learning, which could minimise the need for labeled sketches while maximising performance. Despite extensive studies in the field, there exists no work that utilises it for reducing sketching effort in FG-SBIR tasks. To this end, we propose a novel active learning sampling technique that drastically minimises the need for drawing photo sketches. Our proposed approach tackles the trade-off between uncertainty and diversity by utilising the relationship between an existing photo-sketch pair and a photo that does not yet have a sketch, and augmenting this relation with its intermediate representations. Since our approach relies only on the underlying data distribution, it is agnostic of the modelling approach and hence is applicable to other cross-modal instance-level retrieval tasks as well. With experimentation over two publicly available fine-grained SBIR datasets ChairV2 and ShoeV2, we validate our approach and reveal its superiority over adapted baselines.
### Semantics-aware LiDAR-Only Pseudo Point Cloud Generation for 3D Object Detection
- **Authors:** Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.08932
- **Pdf link:** https://arxiv.org/pdf/2309.08932
- **Abstract**
Although LiDAR sensors are crucial for autonomous systems due to providing precise depth information, they struggle with capturing fine object details, especially at a distance, due to sparse and non-uniform data. Recent advances introduced pseudo-LiDAR, i.e., synthetic dense point clouds, using additional modalities such as cameras to enhance 3D object detection. We present a novel LiDAR-only framework that augments raw scans with denser pseudo point clouds by solely relying on LiDAR sensors and scene semantics, omitting the need for cameras. Our framework first utilizes a segmentation model to extract scene semantics from raw point clouds, and then employs a multi-modal domain translator to generate synthetic image segments and depth cues without real cameras. This yields a dense pseudo point cloud enriched with semantic information. We also introduce a new semantically guided projection method, which enhances detection performance by retaining only relevant pseudo points. We applied our framework to different advanced 3D object detection methods and reported performance gains of up to 2.9%. We also obtained comparable results on the KITTI 3D object detection dataset, compared to other state-of-the-art LiDAR-only detectors.
### Tightening Classification Boundaries in Open Set Domain Adaptation through Unknown Exploitation
- **Authors:** Lucas Fernando Alvarenga e Silva, Nicu Sebe, Jurandy Almeida
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.08964
- **Pdf link:** https://arxiv.org/pdf/2309.08964
- **Abstract**
Convolutional Neural Networks (CNNs) have brought revolutionary advances to many research areas due to their capacity for learning from raw data. However, when those methods are applied to non-controllable environments, many different factors can degrade the model's expected performance, such as unlabeled datasets with different levels of domain shift and category shift. Particularly, when both issues occur at the same time, we tackle this challenging setup as the Open Set Domain Adaptation (OSDA) problem. In general, existing OSDA approaches focus their efforts only on aligning known classes or, if they already extract possible negative instances, use them as a new category learned with supervision during the course of training. We propose a novel way to improve OSDA approaches by extracting a high-confidence set of unknown instances and using it as a hard constraint to tighten the classification boundaries of OSDA methods. Specifically, we adopt a new loss constraint evaluated in three different ways: (1) directly with the pristine negative instances; (2) with randomly transformed negatives using data augmentation techniques; and (3) with synthetically generated negatives containing adversarial features. We assessed all approaches in an extensive set of experiments based on OVANet, where we could observe consistent improvements for two public benchmarks, the Office-31 and Office-Home datasets, yielding absolute gains of up to 1.3% for both Accuracy and H-Score on Office-31 and 5.8% for Accuracy and 4.7% for H-Score on Office-Home.
### LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models
- **Authors:** Kazuto Nakashima, Ryo Kurazume
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2309.09256
- **Pdf link:** https://arxiv.org/pdf/2309.09256
- **Abstract**
Generative modeling of 3D LiDAR data is an emerging task with promising applications for autonomous mobile robots, such as scalable simulation, scene manipulation, and sparse-to-dense completion of LiDAR point clouds. Existing approaches have shown the feasibility of image-based LiDAR data generation using deep generative models while still struggling with the fidelity of generated data and training instability. In this work, we present R2DM, a novel generative model for LiDAR data that can generate diverse and high-fidelity 3D scene point clouds based on the image representation of range and reflectance intensity. Our method is based on denoising diffusion probabilistic models (DDPMs), which have demonstrated impressive results among generative model frameworks and have progressed significantly in recent years. To effectively train DDPMs on the LiDAR domain, we first conduct an in-depth analysis regarding data representation, training objective, and spatial inductive bias. Based on our designed model R2DM, we also introduce a flexible LiDAR completion pipeline using the powerful properties of DDPMs. We demonstrate that our method outperforms the baselines on the generation task of KITTI-360 and KITTI-Raw datasets and the upsampling task of KITTI-360 datasets. Our code and pre-trained weights will be available at https://github.com/kazuto1011/r2dm.
### Reconstructing Existing Levels through Level Inpainting
- **Authors:** Johor Jara Gonzalez, Mathew Guzdial
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2309.09472
- **Pdf link:** https://arxiv.org/pdf/2309.09472
- **Abstract**
Procedural Content Generation (PCG) and Procedural Content Generation via Machine Learning (PCGML) have been used in prior work for generating levels in various games. This paper introduces Content Augmentation and focuses on the subproblem of level inpainting, which involves reconstructing and extending video game levels. Drawing inspiration from image inpainting, we adapt two techniques from this domain to address our specific use case. We present two approaches for level inpainting: an Autoencoder and a U-net. Through a comprehensive case study, we demonstrate their superior performance compared to a baseline method and discuss their relative merits. Furthermore, we provide a practical demonstration of both approaches for the level inpainting task and offer insights into potential directions for future research.
### DriveDreamer: Towards Real-world-driven World Models for Autonomous Driving
- **Authors:** Xiaofeng Wang, Zheng Zhu, Guan Huang, Xinze Chen, Jiwen Lu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.09777
- **Pdf link:** https://arxiv.org/pdf/2309.09777
- **Abstract**
World models, especially in autonomous driving, are trending and drawing extensive attention due to their capacity for comprehending driving environments. The established world model holds immense potential for the generation of high-quality driving videos, and driving policies for safe maneuvering. However, a critical limitation in relevant research lies in its predominant focus on gaming environments or simulated settings, thereby lacking the representation of real-world driving scenarios. Therefore, we introduce DriveDreamer, a pioneering world model entirely derived from real-world driving scenarios. Given that modeling the world in intricate driving scenes entails an overwhelming search space, we propose harnessing the powerful diffusion model to construct a comprehensive representation of the complex environment. Furthermore, we introduce a two-stage training pipeline. In the initial phase, DriveDreamer acquires a deep understanding of structured traffic constraints, while the subsequent stage equips it with the ability to anticipate future states. The proposed DriveDreamer is the first world model established from real-world driving scenarios. We instantiate DriveDreamer on the challenging nuScenes benchmark, and extensive experiments verify that DriveDreamer empowers precise, controllable video generation that faithfully captures the structural constraints of real-world traffic scenarios. Additionally, DriveDreamer enables the generation of realistic and reasonable driving policies, opening avenues for interaction and practical applications.
## Keyword: raw image
There is no result
|
process
|
new submissions for tue sep keyword events video to continuous events simulator authors zhongyang zhang shuyang cui kaidong chai haowen yu subhasis dasgupta upal mahbub tauhidur rahman subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract dynamic vision sensor dvs based solutions have recently garnered significant interest across various computer vision tasks offering notable benefits in terms of dynamic range temporal resolution and inference speed however as a relatively nascent vision sensor compared to active pixel sensor aps devices such as rgb cameras dvs suffers from a dearth of ample labeled datasets prior efforts to convert aps data into events often grapple with issues such as a considerable domain shift from real events the absence of quantified validation and layering problems within the time axis in this paper we present a novel method for video to events stream conversion from multiple perspectives considering the specific characteristics of dvs a series of carefully designed losses helps enhance the quality of generated event voxels significantly we also propose a novel local dynamic aware timestamp inference strategy to accurately recover event timestamps from event voxels in a continuous fashion and eliminate the temporal layering problem results from rigorous validation through quantified metrics at all stages of the pipeline establish our method unquestionably as the current state of the art sota chasing day and night towards robust and efficient all day object detection guided by an event camera authors jiahang cao xu zheng yuanhuiyi lyu jiaxu wang renjing xu lin wang subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract the ability to detect objects in all lighting i e normal over and under exposed conditions is crucial for real world applications such as self driving traditional rgb based detectors often fail under such varying lighting conditions therefore recent works utilize novel event cameras to supplement or guide the rgb modality however these methods typically adopt asymmetric network structures that rely predominantly on the rgb modality resulting in limited robustness for all day detection in this paper we propose eolo a novel object detection framework that achieves robust and efficient all day detection by fusing both rgb and event modalities our eolo framework is built based on a lightweight spiking neural network snn to efficiently leverage the asynchronous property of events buttressed by it we first introduce an event temporal attention eta module to learn the high temporal information from events while preserving crucial edge information secondly as different modalities exhibit varying levels of importance under diverse lighting conditions we propose a novel symmetric rgb event fusion sref module to effectively fuse rgb event features without relying on a specific modality thus ensuring a balanced and adaptive fusion for all day detection in addition to compensate for the lack of paired rgb event datasets for all day training and evaluation we propose an event synthesis approach based on the randomized optical flow that allows for directly generating the event frame from a single exposure image we further build two new datasets e mscoco and e voc based on the popular benchmarks mscoco and pascal voc extensive experiments demonstrate that our eolo outperforms the state of the art detectors e g renet by a substantial margin in all lighting conditions 
our code and datasets will be available at learning parallax for stereo event based motion deblurring authors mingyuan lin chi zhang chu he lei yu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract due to the extremely low latency events have been recently exploited to supplement lost information for motion deblurring existing approaches largely rely on the perfect pixel wise alignment between intensity images and events which is not always fulfilled in the real world to tackle this problem we propose a novel coarse to fine framework named network of event based motion deblurring with stereo event and intensity cameras st ednet to recover high quality images directly from the misaligned inputs consisting of a single blurry image and the concurrent event streams specifically the coarse spatial alignment of the blurry image and the event streams is first implemented with a cross modal stereo matching module without the need for ground truth depths then a dual feature embedding architecture is proposed to gradually build the fine bidirectional association of the coarsely aligned data and reconstruct the sequence of the latent sharp images furthermore we build a new dataset with stereo event and intensity cameras steic containing real world events intensity images and dense disparity maps experiments on real world datasets demonstrate the superiority of the proposed network over state of the art methods keyword event camera chasing day and night towards robust and efficient all day object detection guided by an event camera authors jiahang cao xu zheng yuanhuiyi lyu jiaxu wang renjing xu lin wang subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract the ability to detect objects in all lighting i e normal over and under exposed conditions is crucial for real world applications such as self driving traditional rgb based detectors often fail under such varying lighting conditions therefore recent works utilize novel event cameras to supplement or guide the rgb modality however these methods typically adopt asymmetric network structures that rely predominantly on the rgb modality resulting in limited robustness for all day detection in this paper we propose eolo a novel object detection framework that achieves robust and efficient all day detection by fusing both rgb and event modalities our eolo framework is built based on a lightweight spiking neural network snn to efficiently leverage the asynchronous property of events buttressed by it we first introduce an event temporal attention eta module to learn the high temporal information from events while preserving crucial edge information secondly as different modalities exhibit varying levels of importance under diverse lighting conditions we propose a novel symmetric rgb event fusion sref module to effectively fuse rgb event features without relying on a specific modality thus ensuring a balanced and adaptive fusion for all day detection in addition to compensate for the lack of paired rgb event datasets for all day training and evaluation we propose an event synthesis approach based on the randomized optical flow that allows for directly generating the event frame from a single exposure image we further build two new datasets e mscoco and e voc based on the popular benchmarks mscoco and pascal voc extensive experiments demonstrate that our eolo outperforms the state of the art detectors e g renet by a substantial margin in all lighting conditions our code and datasets 
will be available at keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp ma sam modality agnostic sam adaptation for medical image segmentation authors cheng chen juzheng miao dufan wu zhiling yan sekeun kim jiang hu aoxiao zhong zhengliang liu lichao sun xiang li tianming liu pheng ann heng quanzheng li subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the segment anything model sam a foundation model for general image segmentation has demonstrated impressive zero shot performance across numerous natural image segmentation tasks however sam s performance significantly declines when applied to medical images primarily due to the substantial disparity between natural and medical image domains to effectively adapt sam to medical images it is important to incorporate critical third dimensional information i e volumetric or temporal knowledge during fine tuning simultaneously we aim to harness sam s pre trained weights within its original backbone to the fullest extent in this paper we introduce a modality agnostic sam adaptation framework named as ma sam that is applicable to various volumetric and video medical data our method roots in the parameter efficient fine tuning strategy to update only a small portion of weight increments while preserving the majority of sam s pre trained weights by injecting a series of adapters into the transformer blocks of the image encoder our method enables the pre trained backbone to extract third dimensional information from input data the effectiveness of our method has been comprehensively evaluated on four medical image segmentation tasks by using public datasets across ct mri and surgical video data remarkably without using any prompt our method consistently outperforms various state of the art approaches surpassing nnu net by and in dice for ct multi organ segmentation mri prostate segmentation and surgical scene segmentation respectively our model also demonstrates strong generalization and excels in challenging tumor segmentation when prompts are used our code is available at learning parallax for stereo event based motion deblurring authors mingyuan lin chi zhang chu he lei yu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract due to the extremely low latency events have been recently exploited to supplement lost information for motion deblurring existing approaches largely rely on the perfect pixel wise alignment between intensity images and events which is not always fulfilled in the real world to tackle this problem we propose a novel coarse to fine framework named network of event based motion deblurring with stereo event and intensity cameras st ednet to recover high quality images directly from the misaligned inputs consisting of a single blurry image and the concurrent event streams specifically the coarse spatial alignment of the blurry image and the event streams is first implemented with a cross modal stereo matching module without the need for ground truth depths then a dual feature embedding architecture is proposed to gradually build the fine bidirectional association of the coarsely aligned data and reconstruct the sequence of the latent sharp images furthermore we build a new dataset with stereo event and intensity cameras steic containing real world events intensity images and dense disparity maps experiments on real world datasets demonstrate the 
superiority of the proposed network over state of the art methods unified frequency assisted transformer framework for detecting and grounding multi modal manipulation authors huan liu zichang tan qiang chen yunchao wei yao zhao jingdong wang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract detecting and grounding multi modal media manipulation dgm has become increasingly crucial due to the widespread dissemination of face forgery and text misinformation in this paper we present the unified frequency assisted transformer framework named ufaformer to address the dgm problem unlike previous state of the art methods that solely focus on the image rgb domain to describe visual forgery features we additionally introduce the frequency domain as a complementary viewpoint by leveraging the discrete wavelet transform we decompose images into several frequency sub bands capturing rich face forgery artifacts then our proposed frequency encoder incorporating intra band and inter band self attentions explicitly aggregates forgery features within and across diverse sub bands moreover to address the semantic conflicts between image and frequency domains the forgery aware mutual module is developed to further enable the effective interaction of disparate image and frequency features resulting in aligned and comprehensive visual forgery representations finally based on visual and textual forgery features we propose a unified decoder that comprises two symmetric cross modal interaction modules responsible for gathering modality specific forgery information along with a fusing interaction module for aggregation of both modalities the proposed unified decoder formulates our ufaformer as a unified framework ultimately simplifying the overall architecture and facilitating the optimization process experimental results on the dgm dataset containing several perturbations demonstrate the superior performance of our framework compared to previous methods setting a new benchmark in the field radiology report generation with frozen llms authors zhanyu wang lingqiao liu lei wang luping zhou subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract large language models llms have consistently showcased remarkable generalization capabilities when applied to various language tasks nonetheless harnessing the full potential of llms for radiology report generation still presents a challenge stemming from the inherent disparity in modality between llms and the task to bridge this gap effectively we propose which is a novel solution that aligns visual features with the word embedding space of llms using an efficient visual alignment module this innovative approach empowers the previously static llm to seamlessly integrate and process image information marking a step forward in optimizing performance offers the following benefits first it attains state of the art sota performance by training only the lightweight visual alignment module while freezing all the parameters of llm second it exhibits high training efficiency as it requires the training of an exceptionally minimal number of parameters while achieving rapid convergence by employing delta tuning our model only trains parameters which constitute just of the total parameter count to achieve performance close to the sota levels our code is available at keyword image signal processing there is no result keyword image signal process there is no result keyword compression ugc unified gan compression for efficient 
image to image translation authors yuxi ren jie wu peng zhang manlin zhang xuefeng xiao qian he rui wang min zheng xin pan subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract recent years have witnessed the prevailing progress of generative adversarial networks gans in image to image translation however the success of these gan models hinges on ponderous computational costs and labor expensive training data current efficient gan learning techniques often fall into two orthogonal aspects i model slimming via reduced calculation costs ii data label efficient learning with fewer training data labels to combine the best of both worlds we propose a new learning paradigm unified gan compression ugc with a unified optimization objective to seamlessly prompt the synergy of model efficient and label efficient learning ugc sets up semi supervised driven network architecture search and adaptive online semi supervised distillation stages sequentially which formulates a heterogeneous mutual learning scheme to obtain an architecture flexible label efficient and performance excellent model keyword raw active learning for fine grained sketch based image retrieval authors himanshu thakur soumitri chattopadhyay subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the ability to retrieve a photo by mere free hand sketching highlights the immense potential of fine grained sketch based image retrieval fg sbir however its rapid practical adoption as well as scalability is limited by the expense of acquiring faithful sketches for easily available photo counterparts a solution to this problem is active learning which could minimise the need for labeled sketches while maximising performance despite extensive studies in the field there exists no work that utilises it for reducing sketching effort in fg sbir tasks to this end we propose a novel active learning sampling technique that drastically minimises the need for drawing photo sketches our proposed approach tackles the trade off between uncertainty and diversity by utilising the relationship between the existing photo sketch pair to a photo that does not have its sketch and augmenting this relation with its intermediate representations since our approach relies only on the underlying data distribution it is agnostic of the modelling approach and hence is applicable to other cross modal instance level retrieval tasks as well with experimentation over two publicly available fine grained sbir datasets and we validate our approach and reveal its superiority over adapted baselines semantics aware lidar only pseudo point cloud generation for object detection authors tiago cortinhal idriss gouigah eren erdal aksoy subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract although lidar sensors are crucial for autonomous systems due to providing precise depth information they struggle with capturing fine object details especially at a distance due to sparse and non uniform data recent advances introduced pseudo lidar i e synthetic dense point clouds using additional modalities such as cameras to enhance object detection we present a novel lidar only framework that augments raw scans with denser pseudo point clouds by solely relying on lidar sensors and scene semantics omitting the need for cameras our framework first utilizes a segmentation model to extract scene semantics from raw point clouds and then employs a multi modal domain translator to generate synthetic image segments 
and depth cues without real cameras this yields a dense pseudo point cloud enriched with semantic information we also introduce a new semantically guided projection method which enhances detection performance by retaining only relevant pseudo points we applied our framework to different advanced object detection methods and reported up to performance upgrade we also obtained comparable results on the kitti object detection dataset in contrast to other state of the art lidar only detectors tightening classification boundaries in open set domain adaptation through unknown exploitation authors lucas fernando alvarenga e silva nicu sebe jurandy almeida subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract convolutional neural networks cnns have brought revolutionary advances to many research areas due to their capacity of learning from raw data however when those methods are applied to non controllable environments many different factors can degrade the model s expected performance such as unlabeled datasets with different levels of domain shift and category shift particularly when both issues occur at the same time we tackle this challenging setup as open set domain adaptation osda problem in general existing osda approaches focus their efforts only on aligning known classes or if they already extract possible negative instances use them as a new category learned with supervision during the course of training we propose a novel way to improve osda approaches by extracting a high confidence set of unknown instances and using it as a hard constraint to tighten the classification boundaries of osda methods especially we adopt a new loss constraint evaluated in three different means directly with the pristine negative instances with randomly transformed negatives using data augmentation techniques and with synthetically generated negatives containing adversarial features we assessed all approaches in an extensive set of experiments based on ovanet where we could observe consistent improvements for two public benchmarks the office and office home datasets yielding absolute gains of up to for both accuracy and h score on office and for accuracy and for h score on office home lidar data synthesis with denoising diffusion probabilistic models authors kazuto nakashima ryo kurazume subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract generative modeling of lidar data is an emerging task with promising applications for autonomous mobile robots such as scalable simulation scene manipulation and sparse to dense completion of lidar point clouds existing approaches have shown the feasibility of image based lidar data generation using deep generative models while still struggling with the fidelity of generated data and training instability in this work we present a novel generative model for lidar data that can generate diverse and high fidelity scene point clouds based on the image representation of range and reflectance intensity our method is based on the denoising diffusion probabilistic models ddpms which have demonstrated impressive results among generative model frameworks and have been significantly progressing in recent years to effectively train ddpms on the lidar domain we first conduct an in depth analysis regarding data representation training objective and spatial inductive bias based on our designed model we also introduce a flexible lidar completion pipeline using the powerful properties of ddpms we demonstrate that 
our method outperforms the baselines on the generation task of kitti and kitti raw datasets and the upsampling task of kitti datasets our code and pre trained weights will be available at reconstructing existing levels through level inpainting authors johor jara gonzalez mathew guzdial subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract procedural content generation pcg and procedural content generation via machine learning pcgml have been used in prior work for generating levels in various games this paper introduces content augmentation and focuses on the subproblem of level inpainting which involves reconstructing and extending video game levels drawing inspiration from image inpainting we adapt two techniques from this domain to address our specific use case we present two approaches for level inpainting an autoencoder and a u net through a comprehensive case study we demonstrate their superior performance compared to a baseline method and discuss their relative merits furthermore we provide a practical demonstration of both approaches for the level inpainting task and offer insights into potential directions for future research drivedreamer towards real world driven world models for autonomous driving authors xiaofeng wang zheng zhu guan huang xinze chen jiwen lu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract world models especially in autonomous driving are trending and drawing extensive attention due to their capacity for comprehending driving environments the established world model holds immense potential for the generation of high quality driving videos and driving policies for safe maneuvering however a critical limitation in relevant research lies in its predominant focus on gaming environments or simulated settings thereby lacking the representation of real world driving scenarios therefore we introduce drivedreamer a pioneering world model entirely derived from real world driving scenarios regarding that modeling the world in intricate driving scenes entails an overwhelming search space we propose harnessing the powerful diffusion model to construct a comprehensive representation of the complex environment furthermore we introduce a two stage training pipeline in the initial phase drivedreamer acquires a deep understanding of structured traffic constraints while the subsequent stage equips it with the ability to anticipate future states the proposed drivedreamer is the first world model established from real world driving scenarios we instantiate drivedreamer on the challenging nuscenes benchmark and extensive experiments verify that drivedreamer empowers precise controllable video generation that faithfully captures the structural constraints of real world traffic scenarios additionally drivedreamer enables the generation of realistic and reasonable driving policies opening avenues for interaction and practical applications keyword raw image there is no result
| 1
|
10,754
| 4,820,603,474
|
IssuesEvent
|
2016-11-04 23:48:32
|
jeff1evesque/machine-learning
|
https://api.github.com/repos/jeff1evesque/machine-learning
|
opened
|
Decrease inotifywait watch instances
|
build enhancement
|
We need to move `src/jsx/package.json` to `src/package.json`. Then, we need to adjust our build logic to create `src/node_modules/`. This means we'll need to adjust our corresponding compilers to reference this directory. By making the latter changes, we can decrease the number of watcher instances of `inotifywait`.
|
1.0
|
Decrease inotifywait watch instances - We need to move `src/jsx/package.json` to `src/package.json`. Then, we need to adjust our build logic to create `src/node_modules/`. This means we'll need to adjust our corresponding compilers to reference this directory. By making the latter changes, we can decrease the number of watcher instances of `inotifywait`.
|
non_process
|
decrease inotifywait watch instances we need to move src jsx package json to src package json then we need to adjust our build logic to create src node modules this means we ll need to adjust our corresponding compilers to reference this directory by making the latter changes we can decrease the number of watcher instances of inotifywait
| 0
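The build change described in the record above is about cutting down the number of `inotifywait` processes. As a minimal sketch of the consolidation idea, a single recursive watch over `src/` can replace several per-directory watchers; this assumes inotify-tools is installed, and the event-handling hook is purely illustrative:
```python
import subprocess

# One recursive inotifywait over src/ instead of one process per
# watched directory: -m keeps it running, -r recurses, -e narrows
# the events to the ones a build pipeline typically cares about.
proc = subprocess.Popen(
    ["inotifywait", "-m", "-r", "-e", "close_write,create,delete", "src/"],
    stdout=subprocess.PIPE,
    text=True,
)

for line in proc.stdout:
    # Default output format: "<watched_dir> <EVENTS> [<filename>]"
    parts = line.rstrip("\n").split(" ", 2)
    watched_dir, events = parts[0], parts[1]
    filename = parts[2] if len(parts) > 2 else ""
    # Illustrative hook: re-run the relevant compiler here, e.g. the
    # JSX build, pointing it at the consolidated src/node_modules/.
    print(f"{events} on {watched_dir}{filename}")
```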
|
18,083
| 24,098,429,251
|
IssuesEvent
|
2022-09-19 21:07:24
|
microsoft/cadl
|
https://api.github.com/repos/microsoft/cadl
|
closed
|
CI E2E investigate issue on node 16 windows
|
:pushpin: WS: Process Tools & Automation
|
This PR had to disable e2e on windows https://github.com/microsoft/cadl/pull/1001.
Original problem seems to be that the version of `npm` bundled in node 16 is having issues with running `npx -g /local/foo.tgz`, where it wouldn't install the tgz file but try to resolve the package from `npm`, which would fail in the prepare release publish as it's not published yet.
`npm install -g npm` however seems to not update things correctly on windows. (Solve the issue on linux node 16.x but not windows)
|
1.0
|
CI E2E investigate issue on node 16 windows - This PR had to disable e2e on windows https://github.com/microsoft/cadl/pull/1001.
Original problem seems to be that the version of `npm` bundled in node 16 is having issues with running `npx -g /local/foo.tgz`, where it wouldn't install the tgz file but try to resolve the package from `npm`, which would fail in the prepare release publish as it's not published yet.
`npm install -g npm` however seems to not update things correctly on windows. (Solve the issue on linux node 16.x but not windows)
|
process
|
ci investigate issue on node windows this pr had to disable on windows original problem seems to be that the version of npm bundled in node is having issues with running npx g local foo tgz where it wouldn t install the tgz file but try to resolve the package from npm which would fail in the prepare release publish as it s not published yet npm install g npm however seems to not update things correctly on windows solve the issue on linux node x but not windows
| 1
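For context on the failure mode in the record above: the expectation being tested is that installing a local tarball globally makes its binary runnable without ever consulting the npm registry. A minimal sketch of that expectation (the `./local/foo.tgz` path and the `foo` binary are placeholders taken from the record, not real artifacts):
```python
import subprocess

# Install a local tarball globally, then invoke the binary it ships.
# Neither step should need the package to exist on the npm registry.
subprocess.run(["npm", "install", "-g", "./local/foo.tgz"], check=True)
subprocess.run(["foo", "--version"], check=True)
```
The reported bug is that the npm bundled with node 16 instead tried to resolve the package from the registry when given the tarball path, which necessarily fails for a release that has not been published yet.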
|
10,321
| 13,161,585,231
|
IssuesEvent
|
2020-08-10 19:50:27
|
googleapis/python-bigquery
|
https://api.github.com/repos/googleapis/python-bigquery
|
closed
|
Sphinx 3.2.0 breaks 'docs' build
|
api: bigquery type: process
|
From [this Kokoro build](https://source.cloud.google.com/results/invocations/a4596b14-7553-4ab1-ab07-c4817ea27796/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-bigquery%2Fpresubmit%2Fpresubmit/log):
```python
Traceback (most recent call last):
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/cmd/build.py", line 280, in build_main
app.build(args.force_all, filenames)
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/application.py", line 348, in build
self.builder.build_update()
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 297, in build_update
self.build(to_build,
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 311, in build
updated_docnames = set(self.read())
File "/usr/local/lib/python3.8/contextlib.py", line 120, in __exit__
next(self.gen)
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/util/logging.py", line 213, in pending_warnings
memhandler.flushTo(logger)
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/util/logging.py", line 178, in flushTo
logger.handle(record)
File "/usr/local/lib/python3.8/logging/__init__.py", line 1587, in handle
self.callHandlers(record)
File "/usr/local/lib/python3.8/logging/__init__.py", line 1649, in callHandlers
hdlr.handle(record)
File "/usr/local/lib/python3.8/logging/__init__.py", line 946, in handle
rv = self.filter(record)
File "/usr/local/lib/python3.8/logging/__init__.py", line 807, in filter
result = f.filter(record)
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/util/logging.py", line 421, in filter
raise exc
sphinx.errors.SphinxWarning: /tmpfs/src/github/python-bigquery/google/cloud/bigquery/query.py:docstring of google.cloud.bigquery.query.ScalarQueryParameter:15:Field list ends without a blank line; unexpected unindent.
Warning, treated as error:
/tmpfs/src/github/python-bigquery/google/cloud/bigquery/query.py:docstring of google.cloud.bigquery.query.ScalarQueryParameter:15:Field list ends without a blank line; unexpected unindent.
```
|
1.0
|
Sphinx 3.2.0 breaks 'docs' build - From [this Kokoro build](https://source.cloud.google.com/results/invocations/a4596b14-7553-4ab1-ab07-c4817ea27796/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-bigquery%2Fpresubmit%2Fpresubmit/log):
```python
Traceback (most recent call last):
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/cmd/build.py", line 280, in build_main
app.build(args.force_all, filenames)
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/application.py", line 348, in build
self.builder.build_update()
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 297, in build_update
self.build(to_build,
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 311, in build
updated_docnames = set(self.read())
File "/usr/local/lib/python3.8/contextlib.py", line 120, in __exit__
next(self.gen)
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/util/logging.py", line 213, in pending_warnings
memhandler.flushTo(logger)
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/util/logging.py", line 178, in flushTo
logger.handle(record)
File "/usr/local/lib/python3.8/logging/__init__.py", line 1587, in handle
self.callHandlers(record)
File "/usr/local/lib/python3.8/logging/__init__.py", line 1649, in callHandlers
hdlr.handle(record)
File "/usr/local/lib/python3.8/logging/__init__.py", line 946, in handle
rv = self.filter(record)
File "/usr/local/lib/python3.8/logging/__init__.py", line 807, in filter
result = f.filter(record)
File "/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.8/site-packages/sphinx/util/logging.py", line 421, in filter
raise exc
sphinx.errors.SphinxWarning: /tmpfs/src/github/python-bigquery/google/cloud/bigquery/query.py:docstring of google.cloud.bigquery.query.ScalarQueryParameter:15:Field list ends without a blank line; unexpected unindent.
Warning, treated as error:
/tmpfs/src/github/python-bigquery/google/cloud/bigquery/query.py:docstring of google.cloud.bigquery.query.ScalarQueryParameter:15:Field list ends without a blank line; unexpected unindent.
```
|
process
|
sphinx breaks docs build from python traceback most recent call last file tmpfs src github python bigquery nox docs lib site packages sphinx cmd build py line in build main app build args force all filenames file tmpfs src github python bigquery nox docs lib site packages sphinx application py line in build self builder build update file tmpfs src github python bigquery nox docs lib site packages sphinx builders init py line in build update self build to build file tmpfs src github python bigquery nox docs lib site packages sphinx builders init py line in build updated docnames set self read file usr local lib contextlib py line in exit next self gen file tmpfs src github python bigquery nox docs lib site packages sphinx util logging py line in pending warnings memhandler flushto logger file tmpfs src github python bigquery nox docs lib site packages sphinx util logging py line in flushto logger handle record file usr local lib logging init py line in handle self callhandlers record file usr local lib logging init py line in callhandlers hdlr handle record file usr local lib logging init py line in handle rv self filter record file usr local lib logging init py line in filter result f filter record file tmpfs src github python bigquery nox docs lib site packages sphinx util logging py line in filter raise exc sphinx errors sphinxwarning tmpfs src github python bigquery google cloud bigquery query py docstring of google cloud bigquery query scalarqueryparameter field list ends without a blank line unexpected unindent warning treated as error tmpfs src github python bigquery google cloud bigquery query py docstring of google cloud bigquery query scalarqueryparameter field list ends without a blank line unexpected unindent
| 1
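The warning-turned-error above comes from a docstring whose field list runs straight into following text. A minimal sketch of the shape Sphinx 3.2 accepts; the function and parameters are placeholders, not the actual `ScalarQueryParameter` signature:
```python
def scalar_query_parameter(name, type_, value):
    """Build a scalar query parameter.

    :param name: Parameter name.
    :param type_: BigQuery type of the parameter.
    :param value: Parameter value.

    The blank line above closes the field list. Without it, Sphinx 3.2
    reports "Field list ends without a blank line; unexpected unindent",
    and a warnings-as-errors docs build fails.
    """
    return (name, type_, value)
```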
|
19,869
| 26,282,063,015
|
IssuesEvent
|
2023-01-07 12:16:50
|
hsmusic/hsmusic-wiki
|
https://api.github.com/repos/hsmusic/hsmusic-wiki
|
opened
|
Basic dynamics: Detect and apply data updates from disk file changes
|
scope: data processing type: new page / feature
|
This issue will be closed when we have the *essential* implementation done, regardless of newly discovered bugs with the way data updates are already handled in general, which will be tracked in following issues.
Checklist TBD
|
1.0
|
Basic dynamics: Detect and apply data updates from disk file changes - This issue will be closed when we have the *essential* implementation done, regardless of newly discovered bugs with the way data updates are already handled in general, which will be tracked in following issues.
Checklist TBD
|
process
|
basic dynamics detect and apply data updates from disk file changes this issue will be closed when we have the essential implementation done regardless of newly discovered bugs with the way data updates are already handled in general which will be tracked in following issues checklist tbd
| 1
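The record above asks for detecting data updates from disk file changes and applying them. A minimal, dependency-free sketch of the polling flavor of this, assuming a `data/` directory of flat files; the reload hook is illustrative, since the record doesn't show hsmusic's actual data layer:
```python
import os
import time

def snapshot(root):
    """Map every file under root to its last-modified time."""
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtimes[path] = os.stat(path).st_mtime
    return mtimes

def watch(root, on_change, interval=1.0):
    """Poll root and invoke on_change(path) for each changed file."""
    seen = snapshot(root)
    while True:
        time.sleep(interval)
        current = snapshot(root)
        for path, mtime in current.items():
            if seen.get(path) != mtime:
                on_change(path)  # e.g. re-parse the file and patch the
                                 # in-memory wiki data it backs
        seen = current

# watch("data/", lambda path: print("reload", path))
```
An inotify-based watcher would avoid the polling interval, at the cost of platform-specific plumbing.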
|
393,899
| 11,626,215,262
|
IssuesEvent
|
2020-02-27 14:04:59
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
support.mozilla.org - see bug description
|
browser-fenix engine-gecko priority-important
|
<!-- @browser: Firefox Mobile 73.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:73.0) Gecko/73.0 Firefox/73.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/49154 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://support.mozilla.org/en-US/products/firefox-preview/privacy-and-security
**Browser / Version**: Firefox Mobile 73.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: titles and urls aren't saving on pages after app closes.
**Steps to Reproduce**:
After opening multiple tabs and the app closes the different pages all say "about:blank"
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
support.mozilla.org - see bug description - <!-- @browser: Firefox Mobile 73.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:73.0) Gecko/73.0 Firefox/73.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/49154 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://support.mozilla.org/en-US/products/firefox-preview/privacy-and-security
**Browser / Version**: Firefox Mobile 73.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: titles and urls aren't saving on pages after app closes.
**Steps to Reproduce**:
After opening multiple tabs and the app closes the different pages all say "about:blank"
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
support mozilla org see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description titles and urls aren t saving on pages after app closes steps to reproduce after opening multiple tabs and the app closes the different pages all say about blank browser configuration none from with ❤️
| 0
|
18,856
| 24,774,487,910
|
IssuesEvent
|
2022-10-23 15:04:48
|
shirou/gopsutil
|
https://api.github.com/repos/shirou/gopsutil
|
closed
|
High cpu on unoptimized code in process package
|
package:process performance
|
**Describe the bug**
I am going to produce a process monitoring app to find which process is increasing the temperature, but I ended up producing high cpu (over 80%). Can this be optimized to run in realtime and not produce much cpu?
**To Reproduce**
```go
processes, err := process.Processes()
if err != nil {
return
}
for true {
time.Sleep(1000 * 5)
for _, _process := range processes {
var totalCpu, _ = _process.CPUPercent()
if totalCpu < 30 {
continue
}
fmt.Printf(
"%d,%s,%f\n",
_process.Pid,
getOne(_process.Name()),
getOne(_process.CPUPercent()),
)
}
}
```
**Expected behavior**
Smooth realtime recording/printing.
**Environment (please complete the following information):**
- [x] Linux:
NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.5 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
`Linux damjan 5.4.0-42-generic #46~18.04.1-Ubuntu SMP Fri Jul 10 07:21:24 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux`
|
1.0
|
High cpu on unoptimized code in process package - **Describe the bug**
I am going to produce a process monitoring app to find which process is increasing the temperature, but I ended up producing high cpu (over 80%). Can this be optimized to run in realtime and not produce much cpu?
**To Reproduce**
```go
processes, err := process.Processes()
if err != nil {
return
}
for true {
time.Sleep(1000 * 5)
for _, _process := range processes {
var totalCpu, _ = _process.CPUPercent()
if totalCpu < 30 {
continue
}
fmt.Printf(
"%d,%s,%f\n",
_process.Pid,
getOne(_process.Name()),
getOne(_process.CPUPercent()),
)
}
}
```
**Expected behavior**
Smooth realtime recording/printing.
**Environment (please complete the following information):**
- [x] Linux:
NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.5 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
`Linux damjan 5.4.0-42-generic #46~18.04.1-Ubuntu SMP Fri Jul 10 07:21:24 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux`
|
process
|
high cpu on unoptimized code in process package describe the bug i am going to produce a process monitoring app to find which process is increasing the temperature but i ended up producing high cpu over can this be optimized to run in realtime and not produce much cpu to reproduce go processes err process processes if err nil return for true time sleep for process range processes var totalcpu process cpupercent if totalcpu continue fmt printf d s f n process pid getone process name getone process cpupercent expected behavior smooth realtime recording printing environment please complete the following information linux name ubuntu version lts bionic beaver id ubuntu id like debian pretty name ubuntu lts version id home url support url bug report url privacy policy url version codename bionic ubuntu codename bionic linux damjan generic ubuntu smp fri jul utc gnu linux
| 1
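Two things stand out in the Go snippet above: `time.Sleep(1000 * 5)` sleeps 5000 nanoseconds (a bare `time.Duration` counts nanoseconds), so the outer loop effectively spins, and the process list is captured once and never refreshed. For comparison, a sketch of the intended sampling pattern in psutil, the Python library gopsutil is modeled on:
```python
import time
import psutil

# Prime the per-process CPU counters: the first cpu_percent(None)
# call returns 0.0 and just records a baseline.
procs = list(psutil.process_iter())
for p in procs:
    try:
        p.cpu_percent(None)
    except psutil.NoSuchProcess:
        pass

while True:
    time.sleep(5)  # a real 5-second interval, not 5000 nanoseconds
    for p in procs:
        try:
            cpu = p.cpu_percent(None)  # usage since the previous call
            if cpu >= 30:
                print(p.pid, p.name(), cpu)
        except psutil.NoSuchProcess:
            continue
```
A production monitor would also re-enumerate processes each round so new PIDs are picked up.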
|
15,060
| 18,764,316,146
|
IssuesEvent
|
2021-11-05 20:47:54
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
opened
|
Heat Cascade Verbiage
|
Process Heating Calculator
|
The language for the additional results is a bit confusing.
I think "heat transferred ..." and "equivalent heat input..." could be worded better
|
1.0
|
Heat Cascade Verbiage - The language for the additional results is a bit confusing.
I think "heat transferred ..." and "equivalent heat input..." could be worded better
|
process
|
heat cascade verbiage the language for the additional results is a bit confusing i think heat transferred and equivalent heat input could be worded better
| 1
|
192
| 2,596,564,241
|
IssuesEvent
|
2015-02-20 21:35:44
|
GsDevKit/gsDevKitHome
|
https://api.github.com/repos/GsDevKit/gsDevKitHome
|
reopened
|
setting PATH and GS_HOME could be a sh file rather than copy/paste
|
in process
|
e.g. setEnvironmentVariables
#!/bin/bash
export GS_HOME=`pwd`
export PATH=$GS_HOME/bin:$PATH
|
1.0
|
setting PATH and GS_HOME could be a sh file rather than copy/paste - e.g. setEnvironmentVariables
#!/bin/bash
export GS_HOME=`pwd`
export PATH=$GS_HOME/bin:$PATH
|
process
|
setting path and gs home could be a sh file rather than copy paste e g setenvironmentvariables bin bash export gs home pwd export path gs home bin path
| 1
|
310,777
| 26,742,624,560
|
IssuesEvent
|
2023-01-30 14:02:01
|
lowRISC/opentitan
|
https://api.github.com/repos/lowRISC/opentitan
|
closed
|
[test-triage] Assert failing in file sha3pad.sv:851
|
Component:TestTriage
|
### Hierarchy of regression failure
Chip Level
### Failure Description
UVM_ERROR @ 4148.559425 us: (sha3pad.sv:851) [ASSERT FAILED] RunThenComplete_M
UVM_INFO @ 4148.559425 us: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
--- UVM Report catcher Summary ---
### Steps to Reproduce
- GitHub Revision: [91b09f2d4](https://github.com/lowrisc/opentitan/tree/91b09f2d4bfa63fbb344f250a7313d922953babc)
- dvsim invocation command to reproduce the failure, inclusive of build and run seeds:
./util/dvsim/dvsim.py hw/top_earlgrey/dv/chip_sim_cfg.hjson -i chip_sw_sram_ctrl_scrambled_access --build-seed 2585102727 --fixed-seed 1875324896 --waves fsdb
### Tests with similar or related failures
- [ ] chip_sw_sram_ctrl_scrambled_access
- [ ] chip_sw_sram_ctrl_scrambled_access_jitter_en
- [ ] chip_sw_sram_ctrl_scrambled_access_jitter_en_reduced_freq
- [ ] chip_csr_aliasing
- [ ] chip_same_csr_outstanding
- [ ] chip_sw_exit_test_unlocked_bootstrap
- [ ] chip_sw_flash_ctrl_lc_rw_en
- [ ] chip_prim_tl_access
|
1.0
|
[test-triage] Assert failing in file sha3pad.sv:851 - ### Hierarchy of regression failure
Chip Level
### Failure Description
UVM_ERROR @ 4148.559425 us: (sha3pad.sv:851) [ASSERT FAILED] RunThenComplete_M
UVM_INFO @ 4148.559425 us: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
--- UVM Report catcher Summary ---
### Steps to Reproduce
- GitHub Revision: [91b09f2d4](https://github.com/lowrisc/opentitan/tree/91b09f2d4bfa63fbb344f250a7313d922953babc)
- dvsim invocation command to reproduce the failure, inclusive of build and run seeds:
./util/dvsim/dvsim.py hw/top_earlgrey/dv/chip_sim_cfg.hjson -i chip_sw_sram_ctrl_scrambled_access --build-seed 2585102727 --fixed-seed 1875324896 --waves fsdb
### Tests with similar or related failures
- [ ] chip_sw_sram_ctrl_scrambled_access
- [ ] chip_sw_sram_ctrl_scrambled_access_jitter_en
- [ ] chip_sw_sram_ctrl_scrambled_access_jitter_en_reduced_freq
- [ ] chip_csr_aliasing
- [ ] chip_same_csr_outstanding
- [ ] chip_sw_exit_test_unlocked_bootstrap
- [ ] chip_sw_flash_ctrl_lc_rw_en
- [ ] chip_prim_tl_access
|
non_process
|
assert failing in file sv hierarchy of regression failure chip level failure description uvm error us sv runthencomplete m uvm info us uvm report catcher svh uvm report catcher summary steps to reproduce github revision dvsim invocation command to reproduce the failure inclusive of build and run seeds util dvsim dvsim py hw top earlgrey dv chip sim cfg hjson i chip sw sram ctrl scrambled access build seed fixed seed waves fsdb tests with similar or related failures chip sw sram ctrl scrambled access chip sw sram ctrl scrambled access jitter en chip sw sram ctrl scrambled access jitter en reduced freq chip csr aliasing chip same csr outstanding chip sw exit test unlocked bootstrap chip sw flash ctrl lc rw en chip prim tl access
| 0
|
2,110
| 4,951,791,287
|
IssuesEvent
|
2016-12-01 09:50:16
|
g8os/core0
|
https://api.github.com/repos/g8os/core0
|
closed
|
g8os.alpha: create/list/delete/snapshot/snapshotRestore subvols
|
process_duplicate
|
- starting from name of filesystem (/storage/$filesystemname)
|
1.0
|
g8os.alpha: create/list/delete/snapshot/snapshotRestore subvols -
- starting from name of filesystem (/storage/$filesystemname)
|
process
|
alpha create list delete snapshot snapshotrestore subvols starting from name of filesystem storage filesystemname
| 1
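The subvolume vocabulary in the record above (create/list/delete/snapshot/restore under `/storage/$filesystemname`) maps naturally onto Btrfs subvolume commands, which is what g8os used for its filesystems. A sketch under that assumption; the filesystem name is a placeholder:
```python
import subprocess

def btrfs(*args):
    subprocess.run(["btrfs", "subvolume", *args], check=True)

name = "myfs"  # placeholder filesystem name

btrfs("create", f"/storage/{name}")                           # create
btrfs("list", "/storage")                                     # list
btrfs("snapshot", f"/storage/{name}", f"/storage/{name}@s1")  # snapshot
btrfs("delete", f"/storage/{name}")                           # delete
btrfs("snapshot", f"/storage/{name}@s1", f"/storage/{name}")  # restore
```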
|
141,941
| 19,010,517,507
|
IssuesEvent
|
2021-11-23 08:47:02
|
samisalamiws/npm7-Workspaces
|
https://api.github.com/repos/samisalamiws/npm7-Workspaces
|
opened
|
CVE-2019-10086 (High) detected in commons-beanutils-1.8.0.jar
|
security vulnerability
|
## CVE-2019-10086 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-1.8.0.jar</b></p></summary>
<p>BeanUtils provides an easy-to-use but flexible wrapper around reflection and introspection.</p>
<p>Library home page: <a href="http://commons.apache.org/beanutils/">http://commons.apache.org/beanutils/</a></p>
<p>Path to vulnerable library: /pkgs/commons-beanutils-1.8.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-beanutils-1.8.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samisalamiws/npm7-Workspaces/commit/af8c863980266af8f5676abc24029318e7ce979a">af8c863980266af8f5676abc24029318e7ce979a</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Commons Beanutils 1.9.2, a special BeanIntrospector class was added which allows suppressing the ability for an attacker to access the classloader via the class property available on all Java objects. We, however were not using this by default characteristic of the PropertyUtilsBean.
<p>Publish Date: 2019-08-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10086>CVE-2019-10086</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/victims/victims-cve-db/commit/16a669c84d95bbbd4294f30e609049a36700847f">https://github.com/victims/victims-cve-db/commit/16a669c84d95bbbd4294f30e609049a36700847f</a></p>
<p>Release Date: 2019-08-20</p>
<p>Fix Resolution: commons-beanutils:commons-beanutils:1.9.4</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-beanutils","packageName":"commons-beanutils","packageVersion":"1.8.0","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"commons-beanutils:commons-beanutils:1.8.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-beanutils:commons-beanutils:1.9.4"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2019-10086","vulnerabilityDetails":"In Apache Commons Beanutils 1.9.2, a special BeanIntrospector class was added which allows suppressing the ability for an attacker to access the classloader via the class property available on all Java objects. We, however were not using this by default characteristic of the PropertyUtilsBean.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10086","cvss3Severity":"high","cvss3Score":"7.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2019-10086 (High) detected in commons-beanutils-1.8.0.jar - ## CVE-2019-10086 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-beanutils-1.8.0.jar</b></p></summary>
<p>BeanUtils provides an easy-to-use but flexible wrapper around reflection and introspection.</p>
<p>Library home page: <a href="http://commons.apache.org/beanutils/">http://commons.apache.org/beanutils/</a></p>
<p>Path to vulnerable library: /pkgs/commons-beanutils-1.8.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-beanutils-1.8.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samisalamiws/npm7-Workspaces/commit/af8c863980266af8f5676abc24029318e7ce979a">af8c863980266af8f5676abc24029318e7ce979a</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Commons Beanutils 1.9.2, a special BeanIntrospector class was added which allows suppressing the ability for an attacker to access the classloader via the class property available on all Java objects. We, however were not using this by default characteristic of the PropertyUtilsBean.
<p>Publish Date: 2019-08-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10086>CVE-2019-10086</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/victims/victims-cve-db/commit/16a669c84d95bbbd4294f30e609049a36700847f">https://github.com/victims/victims-cve-db/commit/16a669c84d95bbbd4294f30e609049a36700847f</a></p>
<p>Release Date: 2019-08-20</p>
<p>Fix Resolution: commons-beanutils:commons-beanutils:1.9.4</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-beanutils","packageName":"commons-beanutils","packageVersion":"1.8.0","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"commons-beanutils:commons-beanutils:1.8.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-beanutils:commons-beanutils:1.9.4"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2019-10086","vulnerabilityDetails":"In Apache Commons Beanutils 1.9.2, a special BeanIntrospector class was added which allows suppressing the ability for an attacker to access the classloader via the class property available on all Java objects. We, however were not using this by default characteristic of the PropertyUtilsBean.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10086","cvss3Severity":"high","cvss3Score":"7.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in commons beanutils jar cve high severity vulnerability vulnerable library commons beanutils jar beanutils provides an easy to use but flexible wrapper around reflection and introspection library home page a href path to vulnerable library pkgs commons beanutils jar dependency hierarchy x commons beanutils jar vulnerable library found in head commit a href found in base branch main vulnerability details in apache commons beanutils a special beanintrospector class was added which allows suppressing the ability for an attacker to access the classloader via the class property available on all java objects we however were not using this by default characteristic of the propertyutilsbean publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution commons beanutils commons beanutils check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree commons beanutils commons beanutils isminimumfixversionavailable true minimumfixversion commons beanutils commons beanutils basebranches vulnerabilityidentifier cve vulnerabilitydetails in apache commons beanutils a special beanintrospector class was added which allows suppressing the ability for an attacker to access the classloader via the class property available on all java objects we however were not using this by default characteristic of the propertyutilsbean vulnerabilityurl
| 0
|
18,851
| 24,766,105,561
|
IssuesEvent
|
2022-10-22 14:47:18
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
[Mirror]
|
P2 type: process team-OSS mirror request
|
### Please list the URLs of the archives you'd like to mirror:
Please mirror:
- https://sourceware.org/pub/bzip2/bzip2-1.0.8.tar.gz
Expected mirror URLs:
- https://mirror.bazel.build/sourceware.org/pub/bzip2/bzip2-1.0.8.tar.gz
Intended use:
- https://github.com/nelhage/rules_boost/blob/d35a5a8830dff6c7beb0dc48d1ef7b238e1ec619/boost/boost.bzl#L182
|
1.0
|
[Mirror] - ### Please list the URLs of the archives you'd like to mirror:
Please mirror:
- https://sourceware.org/pub/bzip2/bzip2-1.0.8.tar.gz
Expected mirror URLs:
- https://mirror.bazel.build/sourceware.org/pub/bzip2/bzip2-1.0.8.tar.gz
Intended use:
- https://github.com/nelhage/rules_boost/blob/d35a5a8830dff6c7beb0dc48d1ef7b238e1ec619/boost/boost.bzl#L182
|
process
|
please list the urls of the archives you d like to mirror please mirror expected mirror urls intended use
| 1
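The expected mirror URL in the record above follows mirror.bazel.build's convention: drop the scheme from the origin URL and prefix what remains with the mirror host. A small sketch of that mapping, with the record's bzip2 URL as the worked example:
```python
def mirror_url(url: str) -> str:
    """Map an origin URL to its mirror.bazel.build counterpart."""
    for scheme in ("https://", "http://"):
        if url.startswith(scheme):
            return "https://mirror.bazel.build/" + url[len(scheme):]
    raise ValueError(f"unsupported URL: {url}")

origin = "https://sourceware.org/pub/bzip2/bzip2-1.0.8.tar.gz"
assert mirror_url(origin) == (
    "https://mirror.bazel.build/sourceware.org/pub/bzip2/bzip2-1.0.8.tar.gz"
)
```
In a consuming WORKSPACE, the usual pattern is to list the mirror URL first and the origin second in an `http_archive`'s `urls`, pinned by the archive's sha256.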
|
240,292
| 18,298,541,431
|
IssuesEvent
|
2021-10-05 23:16:18
|
angelasofiaremolinagutierrez/netby
|
https://api.github.com/repos/angelasofiaremolinagutierrez/netby
|
closed
|
Block diagram
|
documentation
|
A diagram is needed that shows the stages of the software, its functionality, and the desired flow.
|
1.0
|
Block diagram - A diagram is needed that shows the stages of the software, its functionality, and the desired flow.
|
non_process
|
block diagram a diagram is needed that shows the stages of the software its functionality and the desired flow
| 0
|
9,134
| 12,202,914,583
|
IssuesEvent
|
2020-04-30 09:43:35
|
MHRA/products
|
https://api.github.com/repos/MHRA/products
|
closed
|
Update basic auth regex to match accepted standard
|
BUG :bug: EPIC - Auto Batch Process :oncoming_automobile:
|
**Describe the bug**
Current regex used to match on decoded username and password matches only `\w`, which equates to `[a-zA-Z0-9_]`, but according to standards, it should match any character (excluding control characters, colons (as this is used as the username:password separator) and (for the username) spaces).
**To Reproduce**
1. Go enabled username/password to include a valid but non-`\w` character, such as a hyphen (`-`).
2. Attempt send an authenticated request to end-point
4. Observe authentication failure (401)
**Expected behavior**
Any username/password that is configured and contains valid characters should authenticate successfully.
**Screenshots**
N/A
**Additional context**
N/A
|
1.0
|
Update basic auth regex to match accepted standard - **Describe the bug**
Current regex used to match on decoded username and password matches only `\w`, which equates to `[a-zA-Z0-9_]`, but according to standards, it should match any character (excluding control characters, colons (as this is used as the username:password separator) and (for the username) spaces).
**To Reproduce**
1. Set the enabled username/password to include a valid but non-`\w` character, such as a hyphen (`-`).
2. Attempt to send an authenticated request to the end-point
3. Observe authentication failure (401)
**Expected behavior**
Any username/password that is configured and contains valid characters should authenticate successfully.
**Screenshots**
N/A
**Additional context**
N/A
|
process
|
update basic auth regex to match accepted standard describe the bug current regex used to match on decoded username and password matches only w which equates to but according to standards should match any character excluding control characters colons as this is used as the username password separator and for the username spaces to reproduce go enabled username password to include a valid but non w character such as a hyphen attempt send an authenticated request to end point observe authentication failure expected behavior any username password that is configured and contains valid characters should authenticate successfully screenshots n a additional context n a
| 1
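The fix the record above calls for amounts to widening the character classes applied to the base64-decoded `username:password` pair. A sketch of one permissive pattern matching the rules stated in the record (no control characters, no colons, and no spaces in the username); the production service's actual regex isn't shown, so this is illustrative:
```python
import base64
import re

# username: no control characters, colons, or spaces
# password: no control characters or colons
CRED_RE = re.compile(r"^([^\x00-\x1f: ]+):([^\x00-\x1f:]+)$")

def parse_basic_auth(header: str):
    """Return (username, password) from a Basic auth header, or None."""
    scheme, _, payload = header.partition(" ")
    if scheme != "Basic" or not payload:
        return None
    try:
        decoded = base64.b64decode(payload).decode("utf-8")
    except ValueError:  # covers binascii.Error and UnicodeDecodeError
        return None
    match = CRED_RE.match(decoded)
    return match.groups() if match else None

# A hyphenated username now parses instead of failing with a 401:
token = base64.b64encode(b"my-user:s3cret!").decode()
assert parse_basic_auth("Basic " + token) == ("my-user", "s3cret!")
```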
|
20,250
| 26,868,901,780
|
IssuesEvent
|
2023-02-04 07:28:59
|
sebastianbergmann/phpunit
|
https://api.github.com/repos/sebastianbergmann/phpunit
|
closed
|
Allowing errors with convertWarningsToExceptions doesn't work with process isolation
|
type/bug feature/test-runner feature/process-isolation
|
| Q | A
| --------------------| ---------------
| PHPUnit version | PHPUnit 9.5.24 #StandWithUkraine
| PHP version | PHP 8.1.10 (cli) (built: Sep 4 2022 08:41:25) (NTS)
| Installation Method | Composer
#### Summary
The XML attributes `convertDeprecationsToExceptions`, `convertErrorsToExceptions`, `convertNoticesToExceptions`, `convertWarningsToExceptions` don't work when `processIsolation=true`.
#### Current behavior
I have a test that triggers a notice (MyTest.php):
```
<?php
use PHPUnit\Framework\TestCase;
class MyTest extends TestCase {
public function testNotice() : void {
$this->assertFalse(new stdClass() > 5); // triggers an E_NOTICE
}
}
```
I have a phpunit.xml that ignores notices:
```
<?xml version="1.0" encoding="UTF-8"?>
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="https://schema.phpunit.de/9.3/phpunit.xsd"
convertNoticesToExceptions="false"> <!-- Ignore notices -->
<testsuites>
<testsuite name="Tests">
<directory>.</directory>
</testsuite>
</testsuites>
</phpunit>
```
Running phpunit normally works:
```
$ vendor/bin/phpunit
PHPUnit 9.5.24 #StandWithUkraine
PHP Notice: Object of class stdClass could not be converted to int in /Users/sjoerdlangkemper/dev/test/phpunit/MyTest.php on line 6
. 1 / 1 (100%)
Notice: Object of class stdClass could not be converted to int in /Users/sjoerdlangkemper/dev/test/phpunit/MyTest.php on line 6
Time: 00:00.006, Memory: 6.00 MB
OK (1 test, 1 assertion)
```
Enabling process isolation, either in the XML file or in the command line, lets the notice break the test:
```
vendor/bin/phpunit --process-isolation
PHPUnit 9.5.24 #StandWithUkraine
E 1 / 1 (100%)
Time: 00:00.063, Memory: 6.00 MB
There was 1 error:
1) MyTest::testNotice
Object of class stdClass could not be converted to int
/Users/sjoerdlangkemper/dev/test/phpunit/MyTest.php:6
ERRORS!
Tests: 1, Assertions: 0, Errors: 1.
```
#### How to reproduce
See above.
#### Expected behavior
I would expect the test to behave the same with or without `--process-isolation`, i.e. allow the notice. I expect the tests to succeed since I have configured `convertNoticesToExceptions=false`.
#### More info
See also [Process isolation ignores convertErrorsToExceptions="false" · Issue #1710 · sebastianbergmann/phpunit](https://github.com/sebastianbergmann/phpunit/issues/1710)
```
$ composer info | sort
doctrine/instantiator 1.4.1 A small, lightweight utility to instantiate objects in PHP without invoking their constructors
myclabs/deep-copy 1.11.0 Create deep copies (clones) of your objects
nikic/php-parser v4.15.1 A PHP parser written in PHP
phar-io/manifest 2.0.3 Component for reading phar.io manifest information from a PHP Archive (PHAR)
phar-io/version 3.2.1 Library for handling version information and constraints
phpunit/php-code-coverage 9.2.17 Library that provides collection, processing, and rendering functionality for PHP code coverage information.
phpunit/php-file-iterator 3.0.6 FilterIterator implementation that filters files based on a list of suffixes.
phpunit/php-invoker 3.1.1 Invoke callables with a timeout
phpunit/php-text-template 2.0.4 Simple template engine.
phpunit/php-timer 5.0.3 Utility class for timing
phpunit/phpunit 9.5.24 The PHP Unit Testing framework.
sebastian/cli-parser 1.0.1 Library for parsing CLI options
sebastian/code-unit 1.0.8 Collection of value objects that represent the PHP code units
sebastian/code-unit-reverse-lookup 2.0.3 Looks up which function or method a line of code belongs to
sebastian/comparator 4.0.8 Provides the functionality to compare PHP values for equality
sebastian/complexity 2.0.2 Library for calculating the complexity of PHP code units
sebastian/diff 4.0.4 Diff implementation
sebastian/environment 5.1.4 Provides functionality to handle HHVM/PHP environments
sebastian/exporter 4.0.5 Provides the functionality to export PHP variables for visualization
sebastian/global-state 5.0.5 Snapshotting of global state
sebastian/lines-of-code 1.0.3 Library for counting the lines of code in PHP source code
sebastian/object-enumerator 4.0.4 Traverses array structures and object graphs to enumerate all referenced objects
sebastian/object-reflector 2.0.4 Allows reflection of object attributes, including inherited and non-public ones
sebastian/recursion-context 4.0.4 Provides functionality to recursively process PHP variables
sebastian/resource-operations 3.0.3 Provides a list of PHP built-in functions that operate on resources
sebastian/type 3.2.0 Collection of value objects that represent the types of the PHP type system
sebastian/version 3.0.2 Library that helps with managing the version number of Git-hosted PHP projects
theseer/tokenizer 1.2.1 A small library for converting tokenized PHP source code into XML and potentially other formats
```
|
1.0
|
Allowing errors with convertWarningsToExceptions doesn't work with process isolation - | Q | A
| --------------------| ---------------
| PHPUnit version | PHPUnit 9.5.24 #StandWithUkraine
| PHP version | PHP 8.1.10 (cli) (built: Sep 4 2022 08:41:25) (NTS)
| Installation Method | Composer
#### Summary
The XML attributes `convertDeprecationsToExceptions`, `convertErrorsToExceptions`, `convertNoticesToExceptions`, `convertWarningsToExceptions` don't work when `processIsolation=true`.
#### Current behavior
I have a test that triggers a notice (MyTest.php):
```
<?php
use PHPUnit\Framework\TestCase;
class MyTest extends TestCase {
public function testNotice() : void {
$this->assertFalse(new stdClass() > 5); // triggers an E_NOTICE
}
}
```
I have a phpunit.xml that ignores notices:
```
<?xml version="1.0" encoding="UTF-8"?>
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="https://schema.phpunit.de/9.3/phpunit.xsd"
convertNoticesToExceptions="false"> <!-- Ignore notices -->
<testsuites>
<testsuite name="Tests">
<directory>.</directory>
</testsuite>
</testsuites>
</phpunit>
```
Running phpunit normally works:
```
$ vendor/bin/phpunit
PHPUnit 9.5.24 #StandWithUkraine
PHP Notice: Object of class stdClass could not be converted to int in /Users/sjoerdlangkemper/dev/test/phpunit/MyTest.php on line 6
. 1 / 1 (100%)
Notice: Object of class stdClass could not be converted to int in /Users/sjoerdlangkemper/dev/test/phpunit/MyTest.php on line 6
Time: 00:00.006, Memory: 6.00 MB
OK (1 test, 1 assertion)
```
Enabling process isolation, either in the XML file or in the command line, lets the notice break the test:
```
vendor/bin/phpunit --process-isolation
PHPUnit 9.5.24 #StandWithUkraine
E 1 / 1 (100%)
Time: 00:00.063, Memory: 6.00 MB
There was 1 error:
1) MyTest::testNotice
Object of class stdClass could not be converted to int
/Users/sjoerdlangkemper/dev/test/phpunit/MyTest.php:6
ERRORS!
Tests: 1, Assertions: 0, Errors: 1.
```
#### How to reproduce
See above.
#### Expected behavior
I would expect the test to behave the same with or without `--process-isolation`, i.e. allow the notice. I expect the tests to succeed since I have configured `convertNoticesToExceptions=false`.
#### More info
See also [Process isolation ignores convertErrorsToExceptions="false" · Issue #1710 · sebastianbergmann/phpunit](https://github.com/sebastianbergmann/phpunit/issues/1710)
```
$ composer info | sort
doctrine/instantiator 1.4.1 A small, lightweight utility to instantiate objects in PHP without invoking their constructors
myclabs/deep-copy 1.11.0 Create deep copies (clones) of your objects
nikic/php-parser v4.15.1 A PHP parser written in PHP
phar-io/manifest 2.0.3 Component for reading phar.io manifest information from a PHP Archive (PHAR)
phar-io/version 3.2.1 Library for handling version information and constraints
phpunit/php-code-coverage 9.2.17 Library that provides collection, processing, and rendering functionality for PHP code coverage information.
phpunit/php-file-iterator 3.0.6 FilterIterator implementation that filters files based on a list of suffixes.
phpunit/php-invoker 3.1.1 Invoke callables with a timeout
phpunit/php-text-template 2.0.4 Simple template engine.
phpunit/php-timer 5.0.3 Utility class for timing
phpunit/phpunit 9.5.24 The PHP Unit Testing framework.
sebastian/cli-parser 1.0.1 Library for parsing CLI options
sebastian/code-unit 1.0.8 Collection of value objects that represent the PHP code units
sebastian/code-unit-reverse-lookup 2.0.3 Looks up which function or method a line of code belongs to
sebastian/comparator 4.0.8 Provides the functionality to compare PHP values for equality
sebastian/complexity 2.0.2 Library for calculating the complexity of PHP code units
sebastian/diff 4.0.4 Diff implementation
sebastian/environment 5.1.4 Provides functionality to handle HHVM/PHP environments
sebastian/exporter 4.0.5 Provides the functionality to export PHP variables for visualization
sebastian/global-state 5.0.5 Snapshotting of global state
sebastian/lines-of-code 1.0.3 Library for counting the lines of code in PHP source code
sebastian/object-enumerator 4.0.4 Traverses array structures and object graphs to enumerate all referenced objects
sebastian/object-reflector 2.0.4 Allows reflection of object attributes, including inherited and non-public ones
sebastian/recursion-context 4.0.4 Provides functionality to recursively process PHP variables
sebastian/resource-operations 3.0.3 Provides a list of PHP built-in functions that operate on resources
sebastian/type 3.2.0 Collection of value objects that represent the types of the PHP type system
sebastian/version 3.0.2 Library that helps with managing the version number of Git-hosted PHP projects
theseer/tokenizer 1.2.1 A small library for converting tokenized PHP source code into XML and potentially other formats
```
|
process
|
allowing errors with convertwarningstoexceptions doesn t work with process isolation q a phpunit version phpunit standwithukraine php version php cli built sep nts installation method composer summary the xml attributes convertdeprecationstoexceptions converterrorstoexceptions convertnoticestoexceptions convertwarningstoexceptions don t work when processisolation true current behavior i have a test that triggers a notice mytest php php use phpunit framework testcase class mytest extends testcase public function testnotice void this assertfalse new stdclass triggers a e notice i have a phpunit xml that ignores notices phpunit xmlns xsi xsi nonamespaceschemalocation convertnoticestoexceptions false running phpunit normally works vendor bin phpunit phpunit standwithukraine php notice object of class stdclass could not be converted to int in users sjoerdlangkemper dev test phpunit mytest php on line notice object of class stdclass could not be converted to int in users sjoerdlangkemper dev test phpunit mytest php on line time memory mb ok test assertion enabling process isolation either in the xml file or in the command line lets the notice break the test vendor bin phpunit process isolation phpunit standwithukraine e time memory mb there was error mytest testnotice object of class stdclass could not be converted to int users sjoerdlangkemper dev test phpunit mytest php errors tests assertions errors how to reproduce see above expected behavior i would expect the test to behave the same with or without process isolation i e allow the notice i expect the tests to succeed since i have configured convertnoticestoexceptions false more info see also composer info sort doctrine instantiator a small lightweight utility to instantiate objects in php without invoking their constructors myclabs deep copy create deep copies clones of your objects nikic php parser a php parser written in php phar io manifest component for reading phar io manifest information from a php archive phar phar io version library for handling version information and constraints phpunit php code coverage library that provides collection processing and rendering functionality for php code coverage information phpunit php file iterator filteriterator implementation that filters files based on a list of suffixes phpunit php invoker invoke callables with a timeout phpunit php text template simple template engine phpunit php timer utility class for timing phpunit phpunit the php unit testing framework sebastian cli parser library for parsing cli options sebastian code unit collection of value objects that represent the php code units sebastian code unit reverse lookup looks up which function or method a line of code belongs to sebastian comparator provides the functionality to compare php values for equality sebastian complexity library for calculating the complexity of php code units sebastian diff diff implementation sebastian environment provides functionality to handle hhvm php environments sebastian exporter provides the functionality to export php variables for visualization sebastian global state snapshotting of global state sebastian lines of code library for counting the lines of code in php source code sebastian object enumerator traverses array structures and object graphs to enumerate all referenced objects sebastian object reflector allows reflection of object attributes including inherited and non public ones sebastian recursion context provides functionality to recursively process php variables sebastian resource 
operations provides a list of php built in functions that operate on resources sebastian type collection of value objects that represent the types of the php type system sebastian version library that helps with managing the version number of git hosted php projects theseer tokenizer a small library for converting tokenized php source code into xml and potentially other formats
| 1
|
13,115
| 15,500,215,160
|
IssuesEvent
|
2021-03-11 09:02:04
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
protein quality control and elimination/ proteostasis
|
PomBase cellular processes low priority mini-project protein quality control
|
There seem to be a lot of ways to describe this pathway in different places?

|
1.0
|
protein quality control and elimination/ proteostasis - There seem to be a lot of ways to describe this pathway in different places?

|
process
|
protein quality control and elimination proteostasis there seem to be a lot of ways to describe this pathway in different places
| 1
|
333,483
| 10,127,073,961
|
IssuesEvent
|
2019-08-01 09:23:38
|
saesrpg/saesrpg-gang
|
https://api.github.com/repos/saesrpg/saesrpg-gang
|
closed
|
SAF requests
|
Priority: Low Status: Pending Type: Group / Gang Type: Mapping Type: Spawns
|
As founders of the group, Frisout and I have agreed on putting the group to rest for now. We would like to keep the ingame group and forum section on old forums, as we are inactive and not disbanded.
I'll put a checklist below for you to know what needs doing
- [x] remove filmworks spawn
- [ ] remove filmworks mapping (send files to Tut, por favor)
- [ ] remove filmworks vehicles
|
1.0
|
SAF requests - As founders of the group, Frisout and I have agreed on putting the group to rest for now. We would like to keep the ingame group and forum section on old forums, as we are inactive and not disbanded.
I'll put a checklist below for you to know what needs doing
- [x] remove filmworks spawn
- [ ] remove filmworks mapping (send files to Tut, por favor)
- [ ] remove filmworks vehicles
|
non_process
|
saf requests as founders of the group frisout and i have agreed on putting the group to rest for now we would like to keep the ingame group and forum section on old forums as we are inactive and not disbanded i ll put a checklist below for you to know what needs doing remove filmworks spawn remove filmworks mapping send files to tut por favor remove filmworks vehicles
| 0
|
165,034
| 20,574,078,031
|
IssuesEvent
|
2022-03-04 01:17:56
|
Guillerbr/escola-angu-app
|
https://api.github.com/repos/Guillerbr/escola-angu-app
|
opened
|
CVE-2022-0691 (High) detected in url-parse-1.4.7.tgz
|
security vulnerability
|
## CVE-2022-0691 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /escola-angu-app/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.800.6.tgz (Root Library)
- webpack-dev-server-3.3.1.tgz
- sockjs-client-1.3.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.9.
<p>Publish Date: 2022-02-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0691>CVE-2022-0691</a></p>
</p>
</details>
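The bug class is worth spelling out: an authorization decision keyed on a parsed URL component is only as strong as the parser behind it. As a generic sketch of the vulnerable pattern, written with PHP's built-in `parse_url` rather than the affected JS library:
```
<?php
// Generic host allow-list check (illustrative only; url-parse is a JS
// library, and the actual CVE concerns its own component handling).
// If the parser can be tricked about what the host is, a crafted URL
// slips past a check like this one.
function isAllowedHost(string $url, array $allowedHosts): bool
{
    $host = parse_url($url, PHP_URL_HOST); // string, null, or false

    return is_string($host) && in_array($host, $allowedHosts, true);
}

var_dump(isAllowedHost('https://trusted.example/cb', ['trusted.example'])); // bool(true)
var_dump(isAllowedHost('https://evil.example/cb', ['trusted.example']));    // bool(false)
```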
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691</a></p>
<p>Release Date: 2022-02-21</p>
<p>Fix Resolution (url-parse): 1.5.9</p>
<p>Direct dependency fix Resolution (@angular-devkit/build-angular): 0.801.0-beta.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-0691 (High) detected in url-parse-1.4.7.tgz - ## CVE-2022-0691 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /escola-angu-app/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.800.6.tgz (Root Library)
- webpack-dev-server-3.3.1.tgz
- sockjs-client-1.3.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.9.
<p>Publish Date: 2022-02-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0691>CVE-2022-0691</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691</a></p>
<p>Release Date: 2022-02-21</p>
<p>Fix Resolution (url-parse): 1.5.9</p>
<p>Direct dependency fix Resolution (@angular-devkit/build-angular): 0.801.0-beta.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in url parse tgz cve high severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file escola angu app package json path to vulnerable library node modules url parse package json dependency hierarchy build angular tgz root library webpack dev server tgz sockjs client tgz x url parse tgz vulnerable library vulnerability details authorization bypass through user controlled key in npm url parse prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse direct dependency fix resolution angular devkit build angular beta step up your open source security game with whitesource
| 0
|
140,490
| 32,007,401,920
|
IssuesEvent
|
2023-09-21 15:37:51
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
[5.0] PHP Warnings in Versions - Cannot declare class \joomla\cms\table\contenttype, because the name is already in use
|
No Code Attached Yet Information Required
|
### Steps to reproduce the issue
Open the Versions of an article or category
### Actual result
There are two PHP warnings:
Warning: Cannot declare class \joomla\cms\table\contenttype, because the name is already in use in /var/www/html/joomla-5/libraries/loader.php on line 576
Warning: Cannot declare class \joomla\cms\table\contenthistory, because the name is already in use in /var/www/html/joomla-5/libraries/loader.php on line 576
<img width="1698" alt="Screenshot 2023-09-20 at 16 44 06" src="https://github.com/joomla/joomla-cms/assets/4417047/6cb6a10f-002b-4af6-9c87-148cb9bc1e14">
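For context, the message itself is PHP's generic double-declaration error, raised when the same class definition is loaded twice. A minimal standalone sketch (hypothetical, not Joomla's loader code) reproduces the situation and shows the usual `class_exists()` guard:
```
<?php
// Hypothetical, self-contained illustration -- not Joomla's loader.php.
// Declaring a class whose name is already in use is what produces the
// "Cannot declare class ..., because the name is already in use" message,
// so class loaders usually guard the declaration:
if (!class_exists('ContentType', false)) { // false: don't trigger autoloading
    class ContentType {}
}

// A second guarded load attempt is skipped instead of erroring out.
if (!class_exists('ContentType', false)) {
    class ContentType {}
}

echo "ContentType declared exactly once\n";
```
Whether loader.php can adopt such a guard at the reported spot is for the maintainers to judge; the sketch only pins down what the warning means.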
### System information (as much as possible)
Joomla 5.0 Beta 2
PHP 8.1.23
|
1.0
|
[5.0] PHP Warnings in Versions - Cannot declare class \joomla\cms\table\contenttype, because the name is already in use - ### Steps to reproduce the issue
Open the Versions of an article or category
### Actual result
There are two PHP warnings:
Warning: Cannot declare class \joomla\cms\table\contenttype, because the name is already in use in /var/www/html/joomla-5/libraries/loader.php on line 576
Warning: Cannot declare class \joomla\cms\table\contenthistory, because the name is already in use in /var/www/html/joomla-5/libraries/loader.php on line 576
<img width="1698" alt="Screenshot 2023-09-20 at 16 44 06" src="https://github.com/joomla/joomla-cms/assets/4417047/6cb6a10f-002b-4af6-9c87-148cb9bc1e14">
### System information (as much as possible)
Joomla 5.0 Beta 2
PHP 8.1.23
|
non_process
|
php warnings in versions cannot declare class joomla cms table contenttype because the name is already in use steps to reproduce the issue open the versions of an article or category actual result there are two php warnings warning cannot declare class joomla cms table contenttype because the name is already in use in var www html joomla libraries loader php on line warning cannot declare class joomla cms table contenthistory because the name is already in use in var www html joomla libraries loader php on line img width alt screenshot at src system information as much as possible joomla beta php
| 0
|
249,881
| 26,995,965,470
|
IssuesEvent
|
2023-02-10 01:04:12
|
yknx4/MongoDBRest
|
https://api.github.com/repos/yknx4/MongoDBRest
|
opened
|
CVE-2023-25166 (Medium) detected in formula-3.0.0.tgz
|
security vulnerability
|
## CVE-2023-25166 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>formula-3.0.0.tgz</b></p></summary>
<p>Math and string formula parser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/@sideway/formula/-/formula-3.0.0.tgz">https://registry.npmjs.org/@sideway/formula/-/formula-3.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/@sideway/formula/package.json</p>
<p>
Dependency Hierarchy:
- hapi__hapi-20.0.9.tgz (Root Library)
- joi-17.4.2.tgz
- :x: **formula-3.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
formula is a math and string formula parser. In versions prior to 3.0.1 crafted user-provided strings to formula's parser might lead to polynomial execution time and a denial of service. Users should upgrade to 3.0.1+. There are no known workarounds for this vulnerability.
<p>Publish Date: 2023-02-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-25166>CVE-2023-25166</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-25166">https://www.cve.org/CVERecord?id=CVE-2023-25166</a></p>
<p>Release Date: 2023-02-08</p>
<p>Fix Resolution: @sideway/formula - 3.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2023-25166 (Medium) detected in formula-3.0.0.tgz - ## CVE-2023-25166 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>formula-3.0.0.tgz</b></p></summary>
<p>Math and string formula parser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/@sideway/formula/-/formula-3.0.0.tgz">https://registry.npmjs.org/@sideway/formula/-/formula-3.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/@sideway/formula/package.json</p>
<p>
Dependency Hierarchy:
- hapi__hapi-20.0.9.tgz (Root Library)
- joi-17.4.2.tgz
- :x: **formula-3.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
formula is a math and string formula parser. In versions prior to 3.0.1 crafted user-provided strings to formula's parser might lead to polynomial execution time and a denial of service. Users should upgrade to 3.0.1+. There are no known workarounds for this vulnerability.
<p>Publish Date: 2023-02-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-25166>CVE-2023-25166</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-25166">https://www.cve.org/CVERecord?id=CVE-2023-25166</a></p>
<p>Release Date: 2023-02-08</p>
<p>Fix Resolution: @sideway/formula - 3.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in formula tgz cve medium severity vulnerability vulnerable library formula tgz math and string formula parser library home page a href path to dependency file package json path to vulnerable library node modules sideway formula package json dependency hierarchy hapi hapi tgz root library joi tgz x formula tgz vulnerable library found in base branch main vulnerability details formula is a math and string formula parser in versions prior to crafted user provided strings to formula s parser might lead to polynomial execution time and a denial of service users should upgrade to there are no known workarounds for this vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution sideway formula step up your open source security game with mend
| 0
|
190,025
| 14,530,924,062
|
IssuesEvent
|
2020-12-14 19:59:53
|
PowerShell/PowerShell
|
https://api.github.com/repos/PowerShell/PowerShell
|
opened
|
Release testing not running for Windows Server 2012R2
|
Issue-Bug Release-Testing
|
Failure looks like:
```
VERBOSE: Loading module from path 'C:\AzDevOpsAgent\_work\1\s\src\Release-Automation\Release-Automation.psd1'.
VERBOSE: Loading module from path 'C:\AzDevOpsAgent\_work\1\s\src\Release-Automation\Release-Automation.psm1'.
VERBOSE: Exporting function 'Invoke-ReleaseTest'.
VERBOSE: Exporting function 'Get-ReleaseTestResult'.
VERBOSE: Exporting function 'New-TestRunInfo'.
VERBOSE: Importing function 'Get-ReleaseTestResult'.
VERBOSE: Importing function 'Invoke-ReleaseTest'.
Invoke-ReleaseTest : Cannot bind argument to parameter 'Path' because it is null.
At C:\AzDevOpsAgent\_work\1\s\test\templates\execute-tests.ps1:19 char:5
+ Invoke-ReleaseTest -Build ${env:POWERSHELL_PACKAGE_BUILD_BUILDID} …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Invoke-ReleaseTest
##[error]PowerShell exited with code '1'.
```
|
1.0
|
Release testing not running for Windows Server 2012R2 - Failure looks like:
```
VERBOSE: Loading module from path 'C:\AzDevOpsAgent\_work\1\s\src\Release-Automation\Release-Automation.psd1'.
VERBOSE: Loading module from path 'C:\AzDevOpsAgent\_work\1\s\src\Release-Automation\Release-Automation.psm1'.
VERBOSE: Exporting function 'Invoke-ReleaseTest'.
VERBOSE: Exporting function 'Get-ReleaseTestResult'.
VERBOSE: Exporting function 'New-TestRunInfo'.
VERBOSE: Importing function 'Get-ReleaseTestResult'.
VERBOSE: Importing function 'Invoke-ReleaseTest'.
Invoke-ReleaseTest : Cannot bind argument to parameter 'Path' because it is null.
At C:\AzDevOpsAgent\_work\1\s\test\templates\execute-tests.ps1:19 char:5
+ Invoke-ReleaseTest -Build ${env:POWERSHELL_PACKAGE_BUILD_BUILDID} …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Invoke-ReleaseTest
##[error]PowerShell exited with code '1'.
```
|
non_process
|
release testing not running for windows server failure looks like verbose loading module from path c azdevopsagent work s src release automation release automation verbose loading module from path c azdevopsagent work s src release automation release automation verbose exporting function invoke releasetest verbose exporting function get releasetestresult verbose exporting function new testruninfo verbose importing function get releasetestresult verbose importing function invoke releasetest invoke releasetest cannot bind argument to parameter path because it is null at c azdevopsagent work s test templates execute tests char invoke releasetest build env powershell package build buildid … categoryinfo notspecified writeerrorexception fullyqualifiederrorid microsoft powershell commands writeerrorexception invoke releasetest powershell exited with code
| 0
|
19,098
| 25,148,015,282
|
IssuesEvent
|
2022-11-10 07:42:03
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
SAGA Resampling Method
|
Processing Bug
|
SAGA raster algorithms **always** use B-Spline Interpolation as the Resampling Method in QGIS 3.x
https://github.com/qgis/QGIS/blob/master/python/plugins/processing/algs/saga/SagaAlgorithm.py#L434
This leads to "strange" output values, especially when the option chosen in the algorithm's interface is Nearest Neighbour.
QGIS documentation says that "When multiple raster layers are used as input for a SAGA algorithm, QGIS resamples them to a common grid system and then passes them to SAGA"
https://docs.qgis.org/3.4/en/docs/user_manual/processing/3rdParty.html#about-saga-grid-system-limitations
but this happens even with only one input raster.
For instance, when using the SAGA Raster Calculator, the job is done in two steps:
io_gdal 0 -TRANSFORM 1 **-RESAMPLING 3** -GRIDS "C:/Users/pedro.venancio/AppData/Local/Temp/processing_188d976842164f2bbb10e9f971f6ea00/0a59af8d77454142a23cd5b7ba417b82/probabilidadebruta.sgrd" -FILES "D:\Testes\duvida_e_erro\probabilidade_bruta.tif"
grid_calculus "Grid Calculator" -GRIDS "C:/Users/pedro.venancio/AppData/Local/Temp/processing_188d976842164f2bbb10e9f971f6ea00/0a59af8d77454142a23cd5b7ba417b82/probabilidadebruta.sgrd" -FORMULA "ifelse(eq(a,0),1,a)" **-RESAMPLING 0** -USE_NODATA false -TYPE 7 -RESULT "D:/Testes/duvida_e_erro/probabilidade_bruta_nn.sdat"
As you can see, the Resampling Method chosen in the Raster Calculator was **[0] Nearest Neighbour**. It only uses one input raster. But the io_gdal algorithm always uses the **[3] B-Spline Interpolation** Resampling Method.
This did not happen in QGIS 2.18, because it used the resampling method as a variable:
https://github.com/qgis/QGIS/blob/release-2_18/python/plugins/processing/algs/saga/SagaAlgorithm.py#L348
Ideally, the Resampling Method of io_gdal should match the Resampling Method selected by the user in the algorithm. It is not intuitive to be able to choose the Resampling Method in the algorithm window while B-Spline is silently used in the background, even for a single input raster.
Alternatively, it could be possible to choose the default SAGA Resampling Method in the SAGA Processing options.
Until there is a more robust solution, I think the Nearest Neighbor Resampling Method should be used by default, as in the most recent SAGA versions:
http://www.saga-gis.org/saga_tool_doc/7.2.0/io_gdal_0.html
**QGIS and OS versions**
This affects all QGIS 3 versions in any OS.
|
1.0
|
SAGA Resampling Method - SAGA raster algorithms **always** use B-Spline Interpolation as the Resampling Method in QGIS 3.x
https://github.com/qgis/QGIS/blob/master/python/plugins/processing/algs/saga/SagaAlgorithm.py#L434
This leads to "strange" output values, especially when the option chosen in the algorithm's interface is Nearest Neighbour.
QGIS documentation says that "When multiple raster layers are used as input for a SAGA algorithm, QGIS resamples them to a common grid system and then passes them to SAGA"
https://docs.qgis.org/3.4/en/docs/user_manual/processing/3rdParty.html#about-saga-grid-system-limitations
but this happens even with only one input raster.
For instance, when using the SAGA Raster Calculator, the job is done in two steps:
io_gdal 0 -TRANSFORM 1 **-RESAMPLING 3** -GRIDS "C:/Users/pedro.venancio/AppData/Local/Temp/processing_188d976842164f2bbb10e9f971f6ea00/0a59af8d77454142a23cd5b7ba417b82/probabilidadebruta.sgrd" -FILES "D:\Testes\duvida_e_erro\probabilidade_bruta.tif"
grid_calculus "Grid Calculator" -GRIDS "C:/Users/pedro.venancio/AppData/Local/Temp/processing_188d976842164f2bbb10e9f971f6ea00/0a59af8d77454142a23cd5b7ba417b82/probabilidadebruta.sgrd" -FORMULA "ifelse(eq(a,0),1,a)" **-RESAMPLING 0** -USE_NODATA false -TYPE 7 -RESULT "D:/Testes/duvida_e_erro/probabilidade_bruta_nn.sdat"
As you can see, the Resampling Method chosen in the Raster Calculator was **[0] Nearest Neighbour**. It only uses one input raster. But the io_gdal algorithm always uses the **[3] B-Spline Interpolation** Resampling Method.
This did not happen in QGIS 2.18, because it used the resampling method as a variable:
https://github.com/qgis/QGIS/blob/release-2_18/python/plugins/processing/algs/saga/SagaAlgorithm.py#L348
Ideally, the Resampling Method of io_gdal should match the Resampling Method selected by the user in the algorithm. It is not intuitive to be able to choose the Resampling Method in the algorithm window while B-Spline is silently used in the background, even for a single input raster.
Alternatively, it could be possible to choose the default SAGA Resampling Method in the SAGA Processing options.
Until there is a more robust solution, I think the Nearest Neighbor Resampling Method should be used by default, as in the most recent SAGA versions:
http://www.saga-gis.org/saga_tool_doc/7.2.0/io_gdal_0.html
**QGIS and OS versions**
This affects all QGIS 3 versions in any OS.
|
process
|
saga resampling method saga raster algorithms are always using b spline interpolation as resampling method in qgis x this leads to strange output values specially when the option choosed in the algorithms interface is the nearest neighbour qgis documentation says that when multiple raster layers are used as input for a saga algorithm qgis resamples them to a common grid system and then passes them to saga but this happen even with only one input raster for instance using saga raster calculator it do the job in two steps io gdal transform resampling grids c users pedro venancio appdata local temp processing probabilidadebruta sgrd files d testes duvida e erro probabilidade bruta tif grid calculus grid calculator grids c users pedro venancio appdata local temp processing probabilidadebruta sgrd formula ifelse eq a a resampling use nodata false type result d testes duvida e erro probabilidade bruta nn sdat as you can see the resampling method choosed in raster calculator was nearest neighbour it only uses one input raster but the io gdal algorithm always uses b spline interpolation resampling method this does not happened in qgis because it used the resampling method as a variable ideally the resampling method of io gdal should match the resampling method selected by the user in the algorithm it is not intuitive to have the possibility to choose the resampling method in the algorithm window and the b spline be used in background just for input one single raster or in alternative the possibility to choose the default saga resampling method in saga processing options until there is a more robust solution i think the nearest neighbor resampling method should be used by default as in the most recent saga versions qgis and os versions this affects all qgis versions in any os
| 1
|
301,743
| 22,772,388,527
|
IssuesEvent
|
2022-07-08 11:17:43
|
splunk/splunk-connect-for-syslog
|
https://api.github.com/repos/splunk/splunk-connect-for-syslog
|
closed
|
null queue example fails to start
|
documentation
|
The [example conf](https://splunk.github.io/splunk-connect-for-syslog/main/sources/#filtering-events-from-output) for filtering events from the output to the null queue fails to start.
**Example Conf**
```
block parser cisco_ios_debug-postfilter() {
channel {
#In this case the outcome is drop the event other logic such as adding indexed fields or editing the message is possible
rewrite {
rewrite(r_set_dest_splunk_null_queue);
};
};
};
application cisco_ios_debug-postfilter[sc4s-postfilter] {
filter {
"${fields.sc4s_vendor_product}" eq "cisco_ios"
#Note regex reads as
# start from first position
# Any atleast 1 char that is not a `-`
# constant '-7-'
and message('^%[^\-]+-7-');
};
parser { cisco_ios_debug-postfilter(); };
};
```
**Startup Output**
```
Apr 10 11:53:44 splunk-sc4s-01 podman[15289]: syslog-ng checking config
Apr 10 11:53:44 splunk-sc4s-01 podman[15289]: sc4s version=2.26.2
Apr 10 11:53:45 splunk-sc4s-01 podman[15289]: starting goss
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: Error parsing parser expression, syntax error, unexpected KW_REWRITE, expecting '}' in block parser cisco_ios_debug-postfilter() at /etc/syslog-ng/conf.d/local/config/app_parsers/syslog/null_example.conf:1:7:12-7:19:
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 2 #Start Block block parser cisco_ios_debug-postfilter() at /etc/syslog-ng/conf.d/local/config/app_parsers/syslog/null_example.conf:1
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 3
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 4 channel {
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 5 #In this case the outcome is drop the event other logic such as adding indexed fields or editing the message is possible
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 6 rewrite {
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 7-----> rewrite(r_set_dest_splunk_null_queue);
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 7-----> ^^^^^^^
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 8 };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 9 };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 10
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 11 #End Block block parser cisco_ios_debug-postfilter() at /etc/syslog-ng/conf.d/local/config/app_parsers/syslog/null_example.conf:1
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 12
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: Included from parser generator app-parser:300:15-300:43:
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 295 # start from first position
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 296 # Any atleast 1 char that is not a
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 297 # constant '-7-'
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 298 and message('^%[^\-]+-7-');
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 299 };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 300---> parser { cisco_ios_debug-postfilter(); };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 300---> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 301 rewrite {
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 302 set-tag('.app.cisco_ios_debug-postfilter');
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 303 set('cisco_ios_debug-postfilter' value('.app.name'));
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 304 };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 305 flags(final);
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: Included from /etc/syslog-ng/conf.d/plugin/app_parser_topics.conf:32:5-32:39:
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 27 parser app-plugin-syslog-fix-program{
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 28 app-parser(topic(fix-invalid-program));
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 29 };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 30
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 31 parser app-plugin-source-postprocess{
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 32----> app-parser(topic(sc4s-postfilter));
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 32----> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 33 };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: Included from /etc/syslog-ng/syslog-ng.conf:41:1-41:1:
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 36
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 37 @include "conf.d/enrich/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 38 @include "conf.d/enrich/*/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 39
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 40 @include "conf.d/plugin/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 41---->
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 41----> ^
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 42 @include "conf.d/sources/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 43 @include "conf.d/sources/*/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 44 @include "conf.d/local/config/sources/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 45 @include "conf.d/local/config/sources/*/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 46
```
|
1.0
|
null queue example fails to start - The [example conf](https://splunk.github.io/splunk-connect-for-syslog/main/sources/#filtering-events-from-output) for filtering events from the output to the null queue fails to start.
**Example Conf**
```
block parser cisco_ios_debug-postfilter() {
channel {
#In this case the outcome is drop the event other logic such as adding indexed fields or editing the message is possible
rewrite {
rewrite(r_set_dest_splunk_null_queue);
};
};
};
application cisco_ios_debug-postfilter[sc4s-postfilter] {
filter {
"${fields.sc4s_vendor_product}" eq "cisco_ios"
#Note regex reads as
# start from first position
# Any atleast 1 char that is not a `-`
# constant '-7-'
and message('^%[^\-]+-7-');
};
parser { cisco_ios_debug-postfilter(); };
};
```
**Startup Output**
```
Apr 10 11:53:44 splunk-sc4s-01 podman[15289]: syslog-ng checking config
Apr 10 11:53:44 splunk-sc4s-01 podman[15289]: sc4s version=2.26.2
Apr 10 11:53:45 splunk-sc4s-01 podman[15289]: starting goss
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: Error parsing parser expression, syntax error, unexpected KW_REWRITE, expecting '}' in block parser cisco_ios_debug-postfilter() at /etc/syslog-ng/conf.d/local/config/app_parsers/syslog/null_example.conf:1:7:12-7:19:
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 2 #Start Block block parser cisco_ios_debug-postfilter() at /etc/syslog-ng/conf.d/local/config/app_parsers/syslog/null_example.conf:1
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 3
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 4 channel {
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 5 #In this case the outcome is drop the event other logic such as adding indexed fields or editing the message is possible
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 6 rewrite {
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 7-----> rewrite(r_set_dest_splunk_null_queue);
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 7-----> ^^^^^^^
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 8 };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 9 };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 10
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 11 #End Block block parser cisco_ios_debug-postfilter() at /etc/syslog-ng/conf.d/local/config/app_parsers/syslog/null_example.conf:1
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 12
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: Included from parser generator app-parser:300:15-300:43:
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 295 # start from first position
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 296 # Any atleast 1 char that is not a
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 297 # constant '-7-'
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 298 and message('^%[^\-]+-7-');
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 299 };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 300---> parser { cisco_ios_debug-postfilter(); };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 300---> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 301 rewrite {
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 302 set-tag('.app.cisco_ios_debug-postfilter');
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 303 set('cisco_ios_debug-postfilter' value('.app.name'));
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 304 };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 305 flags(final);
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: Included from /etc/syslog-ng/conf.d/plugin/app_parser_topics.conf:32:5-32:39:
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 27 parser app-plugin-syslog-fix-program{
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 28 app-parser(topic(fix-invalid-program));
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 29 };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 30
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 31 parser app-plugin-source-postprocess{
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 32----> app-parser(topic(sc4s-postfilter));
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 32----> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 33 };
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: Included from /etc/syslog-ng/syslog-ng.conf:41:1-41:1:
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 36
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 37 @include "conf.d/enrich/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 38 @include "conf.d/enrich/*/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 39
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 40 @include "conf.d/plugin/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 41---->
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 41----> ^
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 42 @include "conf.d/sources/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 43 @include "conf.d/sources/*/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 44 @include "conf.d/local/config/sources/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 45 @include "conf.d/local/config/sources/*/*.conf"
Apr 10 11:53:46 splunk-sc4s-01 podman[15289]: 46
```
|
non_process
|
null queue example fails to start the for filtering events from output to null queue fails to start example conf block parser cisco ios debug postfilter channel in this case the outcome is drop the event other logic such as adding indexed fields or editing the message is possible rewrite rewrite r set dest splunk null queue application cisco ios debug postfilter filter fields vendor product eq cisco ios note regex reads as start from first position any atleast char that is not a constant and message parser cisco ios debug postfilter startup output apr splunk podman syslog ng checking config apr splunk podman version apr splunk podman starting goss apr splunk podman error parsing parser expression syntax error unexpected kw rewrite expecting in block parser cisco ios debug postfilter at etc syslog ng conf d local config app parsers syslog null example conf apr splunk podman start block block parser cisco ios debug postfilter at etc syslog ng conf d local config app parsers syslog null example conf apr splunk podman apr splunk podman channel apr splunk podman in this case the outcome is drop the event other logic such as adding indexed fields or editing the message is possible apr splunk podman rewrite apr splunk podman rewrite r set dest splunk null queue apr splunk podman apr splunk podman apr splunk podman apr splunk podman apr splunk podman end block block parser cisco ios debug postfilter at etc syslog ng conf d local config app parsers syslog null example conf apr splunk podman apr splunk podman included from parser generator app parser apr splunk podman start from first position apr splunk podman any atleast char that is not a apr splunk podman constant apr splunk podman and message apr splunk podman apr splunk podman parser cisco ios debug postfilter apr splunk podman apr splunk podman rewrite apr splunk podman set tag app cisco ios debug postfilter apr splunk podman set cisco ios debug postfilter value app name apr splunk podman apr splunk podman flags final apr splunk podman included from etc syslog ng conf d plugin app parser topics conf apr splunk podman parser app plugin syslog fix program apr splunk podman app parser topic fix invalid program apr splunk podman apr splunk podman apr splunk podman parser app plugin source postprocess apr splunk podman app parser topic postfilter apr splunk podman apr splunk podman apr splunk podman included from etc syslog ng syslog ng conf apr splunk podman apr splunk podman include conf d enrich conf apr splunk podman include conf d enrich conf apr splunk podman apr splunk podman include conf d plugin conf apr splunk podman apr splunk podman apr splunk podman include conf d sources conf apr splunk podman include conf d sources conf apr splunk podman include conf d local config sources conf apr splunk podman include conf d local config sources conf apr splunk podman
| 0
|
61,570
| 7,477,056,743
|
IssuesEvent
|
2018-04-04 06:53:35
|
SpareBank1/designsystem
|
https://api.github.com/repos/SpareBank1/designsystem
|
closed
|
Contributing: Help designers contribute to the design system
|
:dizzy: design system :gem: enhancement :nail_care: design
|
The contributing guide is very developer-centric. How do we help designers get started as contributors? What are their most frequently asked questions? What does the workflow look like?
We have done some internal research, but we need to get it documented here on GitHub :writing_hand:
|
2.0
|
Contributing: Help designers contribute to the design system - The contributing guide is very developer-centric. How do we help designers get started as contributors? What are their most frequently asked questions? What does the workflow look like?
We have done some internal research, but we need to get it documented here on GitHub :writing_hand:
|
non_process
|
contributing help designers contribute to the design system the contributing guide is very developer centric how do we help designers get started as contributors what are their most frequently asked questions what does the workflow look like we have done some internal research but we need to get it documented here on github writing hand
| 0
|
43,491
| 23,263,089,691
|
IssuesEvent
|
2022-08-04 14:58:43
|
TheSuperHackers/GeneralsGamePatch
|
https://api.github.com/repos/TheSuperHackers/GeneralsGamePatch
|
closed
|
Particles may not render at all if their Alpha min max values are zero
|
Bug Major Performance
|
Particles may not render at all if their Alpha min max values are zero.
Reported by Enlima29:
> Particle System with zero values is a bit broken. Zero values for initial alpha or color won't scale/change properly if they are supposed to change on a later iteration. Values must be at least 0.01. This causes some particles to not render correctly while still counting towards the total particle count. Stalker from Operation Firestorm Mod (NLS Discord Server) may have more information on this.
Example
Bad:
```
Alpha1 = 0.00 0.00 1
Alpha2 = 1.00 1.00 5
```
Good:
```
Alpha1 = 0.01 0.01 1
Alpha2 = 1.00 1.00 5
```
Note:
Alpha1 = [min value] [max value] [frame time]
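On the tooling side, a rough sketch of a sanitizer that applies the 0.01 floor described above (hypothetical, and in PHP purely for illustration; the engine itself is not PHP):
```
<?php
// Hypothetical keyframe sanitizer based on the reported rule: alpha
// min/max values below 0.01 may stop the particle from rendering.
function clampAlphaKeyframe(float $min, float $max, int $frameTime): array
{
    $floor = 0.01;

    return [max($min, $floor), max($max, $floor), $frameTime];
}

// Alpha1 = 0.00 0.00 1  becomes  Alpha1 = 0.01 0.01 1
print_r(clampAlphaKeyframe(0.00, 0.00, 1));
```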
|
True
|
Particles may not render at all if their Alpha min max values are zero - Particles may not render at all if their Alpha min max values are zero.
Reported by Enlima29:
> Particle System with zero values is a bit broken. Zero values for initial alpha or color won't scale/change properly if they are supposed to change on a later iteration. Values must be at least 0.01. This causes some particles to not render correctly while still counting towards the total particle count. Stalker from Operation Firestorm Mod (NLS Discord Server) may have more information on this.
Example
Bad:
```
Alpha1 = 0.00 0.00 1
Alpha2 = 1.00 1.00 5
```
Good:
```
Alpha1 = 0.01 0.01 1
Alpha2 = 1.00 1.00 5
```
Note:
Alpha1 = [min value] [max value] [frame time]
|
non_process
|
particles may not render at all if their alpha min max values are zero particles may not render at all if their alpha min max values are zero reported by particle system with zero values is a bit broken zero values for initial alpha or color wont scale change properly if it is supposed to change on later iteration values must be at least this causes some particles to not render correctly while still counting towards the total particle count stalker from operation firestorm mod nls discord server may have more information on this example bad good note
| 0
|
6,168
| 9,082,062,579
|
IssuesEvent
|
2019-02-17 08:41:21
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
opened
|
Editing text bug in folder description
|
2.0.7 Process bug Settings
|
go to Settings -> Folders
create new folder
enter a description and mark it
choose one of the editing options
the text changes, but after refreshing the changes are not saved.
before refreshing:

after refreshing:

|
1.0
|
Editing text bug in folder description - go to Settings -> Folders
create new folder
enter a description and mark it
choose one of the editing options
the text changes, but after refreshing the changes are not saved.
before refreshing:

after refreshing:

|
process
|
editing text bug in folder description go to settings folders create new folder enter a description and mark it choose one of the editing options the text change but after refreshing the changes were not saved before refreshing after refreshing
| 1
|
21,416
| 29,359,590,591
|
IssuesEvent
|
2023-05-28 00:36:39
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[Remote] Software Architect .Net at Coodesh
|
SALVADOR PJ BANCO DE DADOS MONGODB FULL-STACK SQL GIT DOCKER KUBERNETES NOSQL AWS REQUISITOS REMOTO PROCESSOS INOVAÇÃO GITHUB CI CD E-COMMERCE UMA R MICROSERVICES SAAS TERRAFORM AUTOMAÇÃO DE PROCESSOS AUTOMAÇÃO DE TESTES Stale
|
## Job description:
This is an opening from a partner of the Coodesh platform; by applying, you will get access to the full information about the company and its benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/arquiteto-net-173438509?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up.
<p><strong>Tecnologia Única</strong> is looking for a <strong><ins>Software Architect .Net</ins></strong> to join its team!</p>
<p>Come be part of a company that believes in and bets on new ideas, with a strong team and a clear purpose, already numbering more than 200 people. We are a software development company founded in 2004. Our main solutions serve the insurance market and loyalty programs for several segments, from retail to agribusiness. We also maintain a startup ecosystem, our innovation pillar. A partner of major players in the technology market, among them Microsoft and AWS, we have the mission of impacting people's lives by providing disruptive solutions, investing heavily in our team, and always pursuing and encouraging creative processes.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Accelerate the migration of our marketplace platform to Microservices;</li>
<li>Work on the architecture design and the construction of the microservices, automate tests, and help migrate the legacy system to communicate with the new services.</li>
</ul>
## Tecnologia Única:
<p>We were founded in 2004 with the purpose of bringing to market analytical solutions focused on customer service for the insurance market. We provide a range of services in the loyalty area, from retail to agribusiness, including cross-industry e-commerce platforms, as well as technology services such as integration, process automation, and the construction of specialized systems. Innovation is one of our main pillars, and we maintain an environment for incubating startups. With more than 150 employees and revenue above R$20M, most of our solutions are sold as SaaS, and our main technology partners are Microsoft and AWS.</p>
## Skills:
- .NET
- AWS
- Kubernetes
- Docker
- GIT
- CI/CD
- Terraform
- Non-relational databases (NoSQL)
- Relational databases (SQL)
## Location:
100% Remote
## Requirements:
- Advanced knowledge of .NET and .NET Core
- Knowledge of Microservices;
- Knowledge of AWS;
- Knowledge of Localstack;
- Knowledge of Docker;
- Knowledge of Kubernetes;
- Knowledge of GIT;
- Knowledge of CI/CD;
- Infra as Code (Cloudformation / Terraform);
- Non-relational databases: DynamoDB/MongoDB;
- Relational database: SQL Server.
## Benefits:
- Meal voucher;
- Health allowance;
- Gympass;
- Psychological support;
- Paid day off on your birthday;
- Paid rest period after 1 year under contract.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Software Architect .Net at Tecnologia Única](https://coodesh.com/vagas/arquiteto-net-173438509?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you can follow and receive every interaction of the process there. Use the **Pedir Feedback** (request feedback) option between one stage and the next of the job you applied for. This notifies the **Recruiter** responsible for the hiring process at the company.
## Labels
#### Allocation
Remote
#### Regime
PJ
#### Category
Full-Stack
|
2.0
|
[Remote] Software Architect .Net at Coodesh - ## Job description:
This is an opening from a partner of the Coodesh platform; by applying, you will get access to the full information about the company and its benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/arquiteto-net-173438509?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up.
<p><strong>Tecnologia Única</strong> is looking for a <strong><ins>Software Architect .Net</ins></strong> to join its team!</p>
<p>Come be part of a company that believes in and bets on new ideas, with a strong team and a clear purpose, already numbering more than 200 people. We are a software development company founded in 2004. Our main solutions serve the insurance market and loyalty programs for several segments, from retail to agribusiness. We also maintain a startup ecosystem, our innovation pillar. A partner of major players in the technology market, among them Microsoft and AWS, we have the mission of impacting people's lives by providing disruptive solutions, investing heavily in our team, and always pursuing and encouraging creative processes.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Accelerate the migration of our marketplace platform to Microservices;</li>
<li>Work on the architecture design and the construction of the microservices, automate tests, and help migrate the legacy system to communicate with the new services.</li>
</ul>
## Tecnologia Única:
<p>We were founded in 2004 with the purpose of bringing to market analytical solutions focused on customer service for the insurance market. We provide a range of services in the loyalty area, from retail to agribusiness, including cross-industry e-commerce platforms, as well as technology services such as integration, process automation, and the construction of specialized systems. Innovation is one of our main pillars, and we maintain an environment for incubating startups. With more than 150 employees and revenue above R$20M, most of our solutions are sold as SaaS, and our main technology partners are Microsoft and AWS.</p>
## Skills:
- .NET
- AWS
- Kubernetes
- Docker
- GIT
- CI/CD
- Terraform
- Non-relational databases (NoSQL)
- Relational databases (SQL)
## Location:
100% Remote
## Requirements:
- Advanced knowledge of .NET and .NET Core
- Knowledge of Microservices;
- Knowledge of AWS;
- Knowledge of Localstack;
- Knowledge of Docker;
- Knowledge of Kubernetes;
- Knowledge of GIT;
- Knowledge of CI/CD;
- Infra as Code (Cloudformation / Terraform);
- Non-relational databases: DynamoDB/MongoDB;
- Relational database: SQL Server.
## Benefits:
- Meal voucher;
- Health allowance;
- Gympass;
- Psychological support;
- Paid day off on your birthday;
- Paid rest period after 1 year under contract.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Software Architect .Net at Tecnologia Única](https://coodesh.com/vagas/arquiteto-net-173438509?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you can follow and receive every interaction of the process there. Use the **Pedir Feedback** (request feedback) option between one stage and the next of the job you applied for. This notifies the **Recruiter** responsible for the hiring process at the company.
## Labels
#### Allocation
Remote
#### Regime
PJ
#### Category
Full-Stack
|
process
|
software architect net at coodesh job description this is a position with a partner of the coodesh platform by applying you will get access to the full information about the company and its benefits watch for the redirect that will take you to a url with the personalized application pop up 🚀 tecnologia unica is looking for a software architect net to join its team come be part of a company that believes in and bets on new ideas with a strong team and a clear purpose already numbering more than people we are a software development company founded in our main solutions serve the insurance and loyalty markets across several segments from retail to agribusiness we also maintain a startup ecosystem our innovation pillar a partner of major technology players among them microsoft and aws our mission is to impact people s lives by providing disruptive solutions investing massively in our team always pursuing and encouraging creative processes responsibilities accelerate the migration of our marketplace platform to microservices work on the architecture design build the microservices automate tests and help migrate the legacy system to communicate with the new services tecnologia unica we were born in with the purpose of bringing to market analytical solutions focused on customer service for the insurance industry we provide a range of loyalty services from retail to agribusiness including cross industry e commerce platforms as well as technology services such as integration process automation and the construction of specialized systems innovation is one of our main pillars and we maintain an environment for incubating startups with more than employees and revenue above r most of our solutions are sold as saas and our main technology partners are microsoft and aws skills net aws kubernetes docker git ci cd terraform non relational databases nosql relational databases sql location remote requirements advanced knowledge of net and net core knowledge of microservices knowledge of aws knowledge of localstack knowledge of docker knowledge of kubernetes knowledge of git knowledge of ci cd infra as code cloudformation terraform non relational database dynamodb mongodb relational database sql server benefits meal allowance health assistance gympass psychological support paid day off on your birthday paid leave after year under contract how to apply apply exclusively through the coodesh platform via the following link after applying via the coodesh platform and validating your login you will be able to follow and receive every interaction of the process there use the pedir feedback option between one stage and the next of the position you applied for this notifies the recruiter responsible for the process at the company labels allocation remote contract type pj category full stack
| 1
|
3,881
| 6,817,754,026
|
IssuesEvent
|
2017-11-07 01:06:19
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
If a trace has more than 5 mb of data returned, I do not parse it under the assumption that it was a DDOS attack.
|
monitors-all status-inprocess type-bug
|
Happens starting with block 2287740 (or near there). And probably continues to the end of the DDOS attack in Oct 2016.
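For reference, the guard described in the title can be restated as a minimal sketch (quickBlocks itself is C++; this Python restatement and the names in it are illustrative assumptions, not the project's code):
```python
MAX_TRACE_BYTES = 5 * 1024 * 1024  # the 5 MB cut-off described in the title

def should_parse_trace(payload: bytes) -> bool:
    """Skip oversized traces presumed to come from the 2016 DDOS blocks."""
    return len(payload) <= MAX_TRACE_BYTES
```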
|
1.0
|
If a trace has more than 5 mb of data returned, I do not parse it under the assumption that it was a DDOS attack. - Happens starting with block 2287740 (or near there). And probably continues to the end of the DDOS attack in Oct 2016.
|
process
|
if a trace has more than mb of data returned i do not parse it under the assumption that it was a ddos attack happens starting with block or near there and probably continues to the end of the ddos attack in oct
| 1
|
79,021
| 15,586,094,021
|
IssuesEvent
|
2021-03-18 01:09:33
|
Farsene1/Object-Oriented-Programming-Project
|
https://api.github.com/repos/Farsene1/Object-Oriented-Programming-Project
|
opened
|
CVE-2020-35728 (High) detected in jackson-databind-2.9.6.jar
|
security vulnerability
|
## CVE-2020-35728 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /Object-Oriented-Programming-Project/_2_client/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.0.5.RELEASE.jar
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.oracle.wls.shaded.org.apache.xalan.lib.sql.JNDIConnectionPool (aka embedded Xalan in org.glassfish.web/javax.servlet.jsp.jstl).
<p>Publish Date: 2020-12-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35728>CVE-2020-35728</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35728">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35728</a></p>
<p>Release Date: 2020-12-27</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-35728 (High) detected in jackson-databind-2.9.6.jar - ## CVE-2020-35728 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /Object-Oriented-Programming-Project/_2_client/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.0.5.RELEASE.jar
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.oracle.wls.shaded.org.apache.xalan.lib.sql.JNDIConnectionPool (aka embedded Xalan in org.glassfish.web/javax.servlet.jsp.jstl).
<p>Publish Date: 2020-12-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35728>CVE-2020-35728</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35728">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35728</a></p>
<p>Release Date: 2020-12-27</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file object oriented programming project client pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com oracle wls shaded org apache xalan lib sql jndiconnectionpool aka embedded xalan in org glassfish web javax servlet jsp jstl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
| 0
|
12,818
| 15,192,176,933
|
IssuesEvent
|
2021-02-15 21:27:19
|
2i2c-org/team-compass
|
https://api.github.com/repos/2i2c-org/team-compass
|
opened
|
Tech Team Update: 2021-02-15
|
team-process
|
Hey @2i2c-org/tech-team - time to fill in some updates about what you've been up to the last couple of weeks!
Can folks fill out the [HackMD](https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw) with their own updates? ✨
- **Updates HackMD**: https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw
- **Team Sync history**: https://2i2c.org/team-compass/team/tech/sync/
# ToDo
- [ ] Clean up the [HackMD](https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw) for this update
- [ ] Ping the team members in [`#tech-updates`](https://2i2c.slack.com/archives/C01GLCC1VCN)
- [ ] Wait 2-3 days
- [ ] Copy/paste into the `team-compass` repository
- [ ] Clean up the HackMD
- [ ] Link to new updates in `team-compass/` in [`#tech-updates`](https://2i2c.slack.com/archives/C01GLCC1VCN)
|
1.0
|
Tech Team Update: 2021-02-15 - Hey @2i2c-org/tech-team - time to fill in some updates about what you've been up to the last couple of weeks!
Can folks fill out the [HackMD](https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw) with their own updates? ✨
- **Updates HackMD**: https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw
- **Team Sync history**: https://2i2c.org/team-compass/team/tech/sync/
# ToDo
- [ ] Clean up the [HackMD](https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw) for this update
- [ ] Ping the team members in [`#tech-updates`](https://2i2c.slack.com/archives/C01GLCC1VCN)
- [ ] Wait 2-3 days
- [ ] Copy/paste into the `team-compass` repository
- [ ] Clean up the HackMD
- [ ] Link to new updates in `team-compass/` in [`#tech-updates`](https://2i2c.slack.com/archives/C01GLCC1VCN)
|
process
|
tech team update hey org tech team time to fill in some updates about what you ve been up to the last couple of weeks can folks fill out the with their own updates ✨ updates hackmd team sync history todo clean up the for this update ping the team members in wait days copy paste into the team compass repository clean up the hackmd link to new updates in team compass in
| 1
|
19,247
| 25,408,896,000
|
IssuesEvent
|
2022-11-22 17:15:30
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
opened
|
Load TIN Interpolator from processing history cause Python error
|
Processing Bug
|
### What is the bug or the crash?
```
TypeError: findText(self, str, flags: Union[Qt.MatchFlags, Qt.MatchFlag] = Qt.MatchExactly|Qt.MatchCaseSensitive): argument 1 has unexpected type 'int'
Traceback (most recent call last):
File "C:\OSGeo4W/apps/qgis/./python/plugins\processing\gui\wrappers.py", line 204, in setWidgetValue
self.setValue(value)
File "C:\OSGeo4W/apps/qgis/./python/plugins\processing\algs\qgis\ui\InterpolationWidgets.py", line 223, in setValue
self.widget.setValue(value)
File "C:\OSGeo4W/apps/qgis/./python/plugins\processing\algs\qgis\ui\InterpolationWidgets.py", line 172, in setValue
comboBox.setCurrentIndex(comboBox.findText((int(v[3]))))
TypeError: findText(self, str, flags: Union[Qt.MatchFlags, Qt.MatchFlag] = Qt.MatchExactly|Qt.MatchCaseSensitive): argument 1 has unexpected type 'int'
Version de Python : 3.9.5 (tags/v3.9.5:0a7dcbd, May 3 2021, 17:27:52) [MSC v.1928 64 bit (AMD64)]
Version de QGIS : 3.28.1-Firenze Firenze, fde3b8fbb8c
```
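For what it's worth, a sketch of the kind of change the traceback points at (the helper below is hypothetical, not the actual QGIS patch): `QComboBox.findText()` takes a string, so the attribute value restored from history has to be passed as `str` rather than `int`.
```python
from PyQt5.QtWidgets import QComboBox

def restore_combo_selection(combo: QComboBox, stored_value) -> None:
    """Re-select a combo entry from a value saved in processing history."""
    index = combo.findText(str(stored_value))  # str, not int, avoids the TypeError
    if index >= 0:                             # findText returns -1 when not found
        combo.setCurrentIndex(index)
```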
### Steps to reproduce the issue
1. Run TIN Interpolator
2. Open the processing history and double-click the TIN Interpolator history entry
### Versions
3.28.1
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
Load TIN Interpolator from processing history cause Python error - ### What is the bug or the crash?
```
TypeError: findText(self, str, flags: Union[Qt.MatchFlags, Qt.MatchFlag] = Qt.MatchExactly|Qt.MatchCaseSensitive): argument 1 has unexpected type 'int'
Traceback (most recent call last):
File "C:\OSGeo4W/apps/qgis/./python/plugins\processing\gui\wrappers.py", line 204, in setWidgetValue
self.setValue(value)
File "C:\OSGeo4W/apps/qgis/./python/plugins\processing\algs\qgis\ui\InterpolationWidgets.py", line 223, in setValue
self.widget.setValue(value)
File "C:\OSGeo4W/apps/qgis/./python/plugins\processing\algs\qgis\ui\InterpolationWidgets.py", line 172, in setValue
comboBox.setCurrentIndex(comboBox.findText((int(v[3]))))
TypeError: findText(self, str, flags: Union[Qt.MatchFlags, Qt.MatchFlag] = Qt.MatchExactly|Qt.MatchCaseSensitive): argument 1 has unexpected type 'int'
Version de Python : 3.9.5 (tags/v3.9.5:0a7dcbd, May 3 2021, 17:27:52) [MSC v.1928 64 bit (AMD64)]
Version de QGIS : 3.28.1-Firenze Firenze, fde3b8fbb8c
```
### Steps to reproduce the issue
1. Run TIN Interpolator
2. Open the processing history and double-click the TIN Interpolator history entry
### Versions
3.28.1
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
load tin interpolator from processing history cause python error what is the bug or the crash typeerror findtext self str flags union qt matchexactly qt matchcasesensitive argument has unexpected type int traceback most recent call last file c apps qgis python plugins processing gui wrappers py line in setwidgetvalue self setvalue value file c apps qgis python plugins processing algs qgis ui interpolationwidgets py line in setvalue self widget setvalue value file c apps qgis python plugins processing algs qgis ui interpolationwidgets py line in setvalue combobox setcurrentindex combobox findtext int v typeerror findtext self str flags union qt matchexactly qt matchcasesensitive argument has unexpected type int version de python tags may version de qgis firenze firenze steps to reproduce the issue run tin interpolator open processing history and double clic on tin interpolator history entry versions supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
61,483
| 17,023,704,694
|
IssuesEvent
|
2021-07-03 03:23:57
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
area name rendered at zoomlevels 14 and 16, but not at 15
|
Component: mapnik Priority: minor Resolution: invalid Type: defect
|
**[Submitted to the original trac issue database at 3.51am, Monday, 25th April 2011]**
area names (like park or commercial) seems to be suboptimal for zoomlevel 15. for example, compare zoomlevels 14-16 :
http://www.openstreetmap.org/?mlat=30.4088&mlon=-97.7289&zoom=14&layers=M
http://www.openstreetmap.org/?mlat=30.4088&mlon=-97.7289&zoom=15&layers=M
http://www.openstreetmap.org/?mlat=30.4088&mlon=-97.7289&zoom=16&layers=M
notice how both park to the left & commercial landuse to the right display their names at zoomlevels 14 & 16, but do not have them at zoomlevel 15.
it would be preferred to have their names rendered also at zoomlevel 15.
|
1.0
|
area name rendered at zoomlevels 14 and 16, but not at 15 - **[Submitted to the original trac issue database at 3.51am, Monday, 25th April 2011]**
area names (like park or commercial) seem to be suboptimal for zoomlevel 15. for example, compare zoomlevels 14-16:
http://www.openstreetmap.org/?mlat=30.4088&mlon=-97.7289&zoom=14&layers=M
http://www.openstreetmap.org/?mlat=30.4088&mlon=-97.7289&zoom=15&layers=M
http://www.openstreetmap.org/?mlat=30.4088&mlon=-97.7289&zoom=16&layers=M
notice how both park to the left & commercial landuse to the right display their names at zoomlevels 14 & 16, but do not have them at zoomlevel 15.
it would be preferred to have their names rendered also at zoomlevel 15.
|
non_process
|
area name rendered at zoomlevels and but not at area names like park or commercial seems to be suboptimal for zoomlevel for example compare zoomlevels notice how both park to the left commercial landuse to the right display their names at zoomlevels but do not have them at zoomlevel it would be preferred to have their names rendered also at zoomlevel
| 0
|
15,423
| 19,609,397,349
|
IssuesEvent
|
2022-01-06 13:42:41
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
Invalid PCode generation for PMULUDQ
|
Feature: Processor/x86 Feature: Sleigh
|
**Describe the bug**
Ghidra disassembly of the PMULUDQ (XmmReg, m128) instruction produces pcode that reads the same 4 bytes of memory twice instead of two distinct 4-byte words of the 16-byte memory operand.
```
66 0f f4 95 20 dd ff ff PMULUDQ XMM2,xmmword ptr [RBP + local_22e8]
$U3200:8 = INT_ADD RBP, 0xffffffffffffdd20:8
$Ua4d00:8 = INT_ZEXT XMM2_Da
$U5300:4 = LOAD ram($U3200:8)
$Ua4e00:8 = INT_ZEXT $U5300:4
XMM2_Qa = INT_MULT $Ua4d00:8, $Ua4e00:8
$Ua4f80:8 = INT_ZEXT XMM2_Dc
$U5308:4 = LOAD ram($U3200:8)
$Ua5080:8 = INT_ZEXT $U5308:4
XMM2_Qb = INT_MULT $Ua4f80:8, $Ua5080:8
```
This results in invalid emulation, at the very least.
**To Reproduce**
Steps to reproduce the behavior:
Disassemble `66 0f f4 95 20 dd `
**Expected behavior**
Resulting pcode should properly access memory at offset.
**Environment (please complete the following information):**
- OS: Windows 7
- Java Version: java 17 2021-09-14 LTS
- Ghidra Version: 10.0.4
- Ghidra Origin: official distro
**Additional context**
Of note is that the ia.sinc file is correct:
```
:PMULUDQ XmmReg, m128 is vexMode=0 & $(PRE_66) & byte=0x0F; byte=0xF4; XmmReg ... & m128
{
local a:8 = zext(XmmReg[0,32]);
local b:8 = zext(m128[0,32]);
XmmReg[0,64] = a * b;
local c:8 = zext(XmmReg[64,32]);
local d:8 = zext(m128[64,32]);
XmmReg[64,64] = c * d;
}
```
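As a cross-check, here is a reference model of the intended semantics (a Python sketch assuming little-endian operands; it is not Ghidra code). Note the second multiply must read `mem[8:12]`, exactly the load the emitted pcode gets wrong:
```python
import struct

def pmuludq(xmm: bytes, mem: bytes) -> bytes:
    """PMULUDQ xmm, m128: multiply the low 32 bits of each 64-bit lane."""
    a_lo = struct.unpack_from("<I", xmm, 0)[0]  # XmmReg[0,32]
    a_hi = struct.unpack_from("<I", xmm, 8)[0]  # XmmReg[64,32]
    b_lo = struct.unpack_from("<I", mem, 0)[0]  # m128[0,32]
    b_hi = struct.unpack_from("<I", mem, 8)[0]  # m128[64,32], the offset the faulty pcode misses
    return struct.pack("<QQ", a_lo * b_lo, a_hi * b_hi)  # each product fits in 64 bits
```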
|
1.0
|
Invalid PCode generation for PMULUDQ - **Describe the bug**
Ghidra disassembly of the PMULUDQ (XmmReg, m128) instruction produces pcode that reads the same 4 bytes of memory twice instead of two distinct 4-byte words of the 16-byte memory operand.
```
66 0f f4 95 20 dd ff ff PMULUDQ XMM2,xmmword ptr [RBP + local_22e8]
$U3200:8 = INT_ADD RBP, 0xffffffffffffdd20:8
$Ua4d00:8 = INT_ZEXT XMM2_Da
$U5300:4 = LOAD ram($U3200:8)
$Ua4e00:8 = INT_ZEXT $U5300:4
XMM2_Qa = INT_MULT $Ua4d00:8, $Ua4e00:8
$Ua4f80:8 = INT_ZEXT XMM2_Dc
$U5308:4 = LOAD ram($U3200:8)
$Ua5080:8 = INT_ZEXT $U5308:4
XMM2_Qb = INT_MULT $Ua4f80:8, $Ua5080:8
```
This results in invalid emulation, at the very least.
**To Reproduce**
Steps to reproduce the behavior:
Disassemble `66 0f f4 95 20 dd `
**Expected behavior**
Resulting pcode should properly access memory at offset.
**Environment (please complete the following information):**
- OS: Windows 7
- Java Version: java 17 2021-09-14 LTS
- Ghidra Version: 10.0.4
- Ghidra Origin: official distro
**Additional context**
Of note is that the ia.sinc file is correct:
```
:PMULUDQ XmmReg, m128 is vexMode=0 & $(PRE_66) & byte=0x0F; byte=0xF4; XmmReg ... & m128
{
local a:8 = zext(XmmReg[0,32]);
local b:8 = zext(m128[0,32]);
XmmReg[0,64] = a * b;
local c:8 = zext(XmmReg[64,32]);
local d:8 = zext(m128[64,32]);
XmmReg[64,64] = c * d;
}
```
|
process
|
invalid pcode generation for pmuludq describe the bug ghidra disassembly for command pmuludq xmmreg produces pcode that reads the same bytes of memory twice instead of distinct bytes in byte memory dd ff ff pmuludq xmmword ptr int add rbp int zext da load ram int zext qa int mult int zext dc load ram int zext qb int mult this results in invalid emulation at the very least to reproduce steps to reproduce the behavior disassemble dd expected behavior resulting pcode should properly access memory at offset environment please complete the following information os windows java version java lts ghidra version ghidra origin official distro additional context of note is that the ia sinc file is correct pmuludq xmmreg is vexmode pre byte byte xmmreg local a zext xmmreg local b zext xmmreg a b local c zext xmmreg local d zext xmmreg c d
| 1
|
398,690
| 27,208,775,013
|
IssuesEvent
|
2023-02-20 15:00:17
|
python-babel/babel
|
https://api.github.com/repos/python-babel/babel
|
closed
|
ASSIGNMENT: Contribution, new features, documentation (Uni-class project)
|
documentation feature
|
Good afternoon Babel,
I am a student of the Athens University of Economics and Business, Department of Management Science & Technology. In the context of the Software Engineering in Practice course I am attending this semester, I must choose a project to contribute in it's source code.
I would like to contribute to Babel, work on developing new features, contribute to the documentation or anything else you may have to suggest!
Thank you in advance.
Your sincerely,
Alexandros Vasileiou
|
1.0
|
ASSIGNMENT: Contribution, new features, documentation (Uni-class project) - Good afternoon Babel,
I am a student of the Athens University of Economics and Business, Department of Management Science & Technology. In the context of the Software Engineering in Practice course I am attending this semester, I must choose a project and contribute to its source code.
I would like to contribute to Babel, work on developing new features, contribute to the documentation or anything else you may have to suggest!
Thank you in advance.
Yours sincerely,
Alexandros Vasileiou
|
non_process
|
assignment contribution new features documentation uni class project good afternoon babel i am a student of the athens university of economics and business department of management science technology in the context of the software engineering in practice course i am attending this semester i must choose a project to contribute in it s source code i would like to contribute to babel work on developing new features contribute to the documentation or anything else you may have to suggest thank you in advance your sincerely alexandros vasileiou
| 0
|
18,719
| 24,610,330,602
|
IssuesEvent
|
2022-10-14 20:39:01
|
python/cpython
|
https://api.github.com/repos/python/cpython
|
closed
|
Intermittent semaphore cleanup issues when using multiprocessing forkserver option
|
type-bug expert-multiprocessing
|
# Bug report
I recently changed a program that makes use of multiprocessing and semaphores to use the forkserver start option.
I've noticed that on program exit, I will intermittently get a long list of ignored exceptions and complaints from the multiprocessing resource_tracker that semaphores appear to be leaked. I'll see many of these:
```Exception ignored in: <Finalize object, dead>
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 87, in _cleanup
sem_unlink(name)
FileNotFoundError: [Errno 2] No such file or directory
```
and then
```
/usr/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 109 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
/usr/lib/python3.8/multiprocessing/resource_tracker.py:229: UserWarning: resource_tracker: '/mp-li69sgfk': [Errno 2] No such file or directory
warnings.warn('resource_tracker: %r: %s' % (name, e))
```
Unfortunately at this point I can't reproduce reliably with my large program, much less a toy reproduction. But I've spent a few hours examining the code and wanted to see if others had ideas.
The fact that `sem_unlink` is failing means that the semaphores have already been unlinked somehow. The only code I see that calls `sem_unlink` is the `SemLock._cleanup` method in `multiprocessing/synchronize.py`. Any ideas what else could cause the unlinking to have already happened?
The complaint from the `resource_tracker` is just because the semaphores are not unregistered in the last line of `_cleanup` because the `sem_unlink(name)` line throws.
The program and its processes are exiting cleanly when this happens. My processes derive from multiprocessing.Process and use a number of multiprocessing constructs, including `RLock`.
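For anyone trying to narrow this down, here is a hypothetical minimal-repro attempt (illustrative only; as noted above I have not managed a reliable toy reproduction, and the names are mine, not from the real program):
```python
import multiprocessing as mp

def worker(lock):
    with lock:
        pass

if __name__ == "__main__":
    mp.set_start_method("forkserver")         # the start method in question
    locks = [mp.RLock() for _ in range(100)]  # many named semaphores
    procs = [mp.Process(target=worker, args=(lock,)) for lock in locks]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # On a clean exit, watch stderr for resource_tracker warnings about
    # leaked or already-unlinked semaphores.
```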
# Your environment
Ubuntu 20.04.4 LTS
Python 3.8.10
|
1.0
|
Intermittent semaphore cleanup issues when using multiprocessing forkserver option - # Bug report
I recently changed a program that makes use of multiprocessing and semaphores to use the forkserver start option.
I've noticed that on program exit, I will intermittently get a long list of ignored exceptions and complaints from the multiprocessing resource_tracker that semaphores appear to be leaked. I'll see many of these:
```Exception ignored in: <Finalize object, dead>
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 87, in _cleanup
sem_unlink(name)
FileNotFoundError: [Errno 2] No such file or directory
```
and then
```
/usr/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 109 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
/usr/lib/python3.8/multiprocessing/resource_tracker.py:229: UserWarning: resource_tracker: '/mp-li69sgfk': [Errno 2] No such file or directory
warnings.warn('resource_tracker: %r: %s' % (name, e))
```
Unfortunately at this point I can't reproduce reliably with my large program, much less a toy reproduction. But I've spent a few hours examining the code and wanted to see if others had ideas.
The fact that `sem_unlink` is failing means that the semaphores have already been unlinked somehow. The only code I see that calls `sem_unlink` is the `SemLock._cleanup` method in `multiprocessing/synchronize.py`. Any ideas what else could cause the unlinking to have already happened?
The complaint from the `resource_tracker` is just because the semaphores are not unregistered in the last line of `_cleanup` because the `sem_unlink(name)` line throws.
The program and its processes are exiting cleanly when this happens. My processes derive from multiprocessing.Process and use a number of multiprocessing constructs, including `RLock`.
# Your environment
Ubuntu 20.04.4 LTS
Python 3.8.10
|
process
|
intermittent semaphore cleanup issues when using multiprocessing forkserver option bug report i recently changed a program that makes use of multiprocessing and semaphores to use the forkserver start option i ve noticed that on program exit i will intermittently get a long list of ignored exceptions and complaints from the multiprocessing resource tracker that semaphores appear to be leaked i ll see many of these exception ignored in traceback most recent call last file usr lib multiprocessing util py line in call res self callback self args self kwargs file usr lib multiprocessing synchronize py line in cleanup sem unlink name filenotfounderror no such file or directory and then usr lib multiprocessing resource tracker py userwarning resource tracker there appear to be leaked semaphore objects to clean up at shutdown warnings warn resource tracker there appear to be d usr lib multiprocessing resource tracker py userwarning resource tracker mp no such file or directory warnings warn resource tracker r s name e unfortunately at this point i can t reproduce reliably with my large program much less a toy reproduction but i ve spent a few hours examining the code and wanted to see if others had ideas the fact that sem unlink is failing means that the semaphores have already been unlinked somehow the only code i see that calls sem unlink is the semlock cleanup method in multiprocessing synchronize py any ideas what else could cause the unlinking to have already happened the complaint from the resource tracker is just because the semaphores are not unregistered in the last line of cleanup because the sem unlink name line throws the program and its processes are exiting cleanly when this happens my processes derive from multiprocessing process and use a number of multiprocessing constructs including rlock your environment ubuntu lts python
| 1
|
6,759
| 9,884,082,752
|
IssuesEvent
|
2019-06-24 21:05:22
|
aiidateam/aiida_core
|
https://api.github.com/repos/aiidateam/aiida_core
|
closed
|
Make the return value of `Process.exposed_inputs` a mutable `AttributeDict`
|
aiida-core 1.x priority/nice-to-have topic/processes type/accepted feature
|
Currently, parts of the returned dictionary can still be an `AttributesFrozenDict` because they are taken from the process inputs, which should indeed remain immutable. However, for the purposes of `exposed_inputs`, one often wants to still be able to modify them before submitting the relevant process. Therefore it makes more sense to have the method return an `AttributeDict`.
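A minimal sketch of the proposed behaviour (the recursive helper is illustrative and assumes `AttributeDict` from `aiida.common.extendeddicts`; it is not the actual patch):
```python
from collections.abc import Mapping

from aiida.common.extendeddicts import AttributeDict

def to_mutable(value):
    """Recursively copy mapping-like values into mutable AttributeDicts."""
    if isinstance(value, Mapping):  # covers plain dicts and frozen variants alike
        return AttributeDict({key: to_mutable(val) for key, val in value.items()})
    return value
```
Applying something like this to the result of `exposed_inputs` would let callers tweak nested values before re-submitting.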
|
1.0
|
Make the return value of `Process.exposed_inputs` a mutable `AttributeDict` - Currently, parts of the returned dictionary can still be an `AttributesFrozenDict` because they are taken from the process inputs, which should indeed remain immutable. However, for the purposes of `exposed_inputs`, one often wants to still be able to modify them before submitting the relevant process. Therefore it makes more sense to have the method return an `AttributeDict`.
|
process
|
make the return value of process exposed inputs a mutable attributedict currently parts of the returned dictionary can still be an attributesfrozendict because they are taken from the process inputs which should remain indeed immutable however for the purpose of the exposed inputs one often wants to still be able to modify them before submitting the relevant process therefore it makes more sense to have the method return an attributedict
| 1
|
723,168
| 24,887,446,883
|
IssuesEvent
|
2022-10-28 09:00:12
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.youtube.com - design is broken
|
browser-firefox priority-critical os-linux engine-gecko
|
<!-- @browser: Firefox 108.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:108.0) Gecko/20100101 Firefox/108.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/112996 -->
**URL**: https://www.youtube.com/
**Browser / Version**: Firefox 108.0
**Operating System**: Linux
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
The side scrollbar scrolls with the main bar and is no longer its own scrollbar. I have tried in Private mode with no extensions loaded and it is still broken.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/10/11a84358-6afb-4527-9104-3327617c9f92.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20221027095047</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/10/78858ef6-ab5e-4840-a97d-d6307033044e)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.youtube.com - design is broken - <!-- @browser: Firefox 108.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:108.0) Gecko/20100101 Firefox/108.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/112996 -->
**URL**: https://www.youtube.com/
**Browser / Version**: Firefox 108.0
**Operating System**: Linux
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
The side scrollbar scrolls with the main bar and is no longer its own scrollbar. I have tried in Private mode with no extensions loaded and it is still broken.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/10/11a84358-6afb-4527-9104-3327617c9f92.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20221027095047</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/10/78858ef6-ab5e-4840-a97d-d6307033044e)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
design is broken url browser version firefox operating system linux tested another browser yes chrome problem type design is broken description items not fully visible steps to reproduce the side scrollbar scrolls with the main bar and is no longer it s own scrollbar i have tried in private mode with no extensions loaded and it is still broken view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
10,681
| 13,463,575,429
|
IssuesEvent
|
2020-09-09 17:49:10
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
environmentalFeature
|
Class - Event Process - dismissed Process - ready for public comment Term - add
|
Was https://code.google.com/p/darwincore/issues/detail?id=190
==New Term Recommendation==
Submitter: John Wieczorek on behalf of the May 2013 GBIF hackathon-workshop on Darwin Core and sample data
Justification: see "Meeting Report: GBIF hackathon-workshop on Darwin Core and sample data (22-24 May 2013)" at http://www.gbif.org/orc/?doc_id=5424
Term Name: environmental feature
Identifier: http://purl.obolibrary.org/obo/ENVO_00002297
Namespace: http://purl.obolibrary.org/obo/
Label: Environmental Feature
Definition: A prominent or distinctive aspect, quality, or characteristic of a biome.
Comment: Examples: "meadow", "http://purl.obolibrary.org/obo/ENVO_00000108". For discussion see https://code.google.com/p/darwincore/wiki/Event (there will be no further documentation here until the term is ratified)
Type of Term: http://www.w3.org/2000/01/rdf-schema#Class
Refines:
Status: proposed
Date Issued: 2013-09-25
Date Modified: 2013-09-25
Has Domain:
Has Range:
Refines:
Version: http://purl.obolibrary.org/obo/ENVO_00002297
Replaces:
IsReplaceBy:
Class: http://rs.tdwg.org/dwc/terms/Event
ABCD 2.0.6: not in ABCD (someone please confirm or deny this)
Note that the current official definition for this term is "A feature that is." I sent a request to obo-envo@lists.sourceforge.net to change the definition to be the same as what I put in the proposal above.
Sep 26, 2013 comment #1 gtuco.btuco
Based on initial discussions on tdwg-content, modified the proposal to make a new DwC property term that recommends the ENVO class as the range, as follows:
Term Name: environmentalFeature
Identifier: http://rs.tdwg.org/dwc/terms/environmentalFeature
Namespace: http://rs.tdwg.org/dwc/terms/
Label: Environmental Feature
Definition: A prominent or distinctive aspect, quality, or characteristic of a biome. Recommended best practice is to use a controlled vocabulary such as defined by the environmental feature class of the Environment Ontology (ENVO).
Comment: Examples: "meadow", "http://purl.obolibrary.org/obo/ENVO_00000108". For discussion see https://code.google.com/p/darwincore/wiki/Event (there will be no further documentation here until the term is ratified)
Type of Term: http://www.w3.org/1999/02/22-rdf-syntax-ns#Property
Refines:
Status: proposed
Date Issued: 2013-09-26
Date Modified: 2013-09-26
Has Domain:
Has Range:
Refines:
Version: environmentalFeature-2013-09-26
Replaces:
IsReplaceBy:
Class: http://rs.tdwg.org/dwc/terms/Event
ABCD 2.0.6: not in ABCD (someone please confirm or deny this)
|
2.0
|
environmentalFeature - Was https://code.google.com/p/darwincore/issues/detail?id=190
==New Term Recommendation==
Submitter: John Wieczorek on behalf of the May 2013 GBIF hackathon-workshop on Darwin Core and sample data
Justification: see "Meeting Report: GBIF hackathon-workshop on Darwin Core and sample data (22-24 May 2013)" at http://www.gbif.org/orc/?doc_id=5424
Term Name: environmental feature
Identifier: http://purl.obolibrary.org/obo/ENVO_00002297
Namespace: http://purl.obolibrary.org/obo/
Label: Environmental Feature
Definition: A prominent or distinctive aspect, quality, or characteristic of a biome.
Comment: Examples: "meadow", "http://purl.obolibrary.org/obo/ENVO_00000108". For discussion see https://code.google.com/p/darwincore/wiki/Event (there will be no further documentation here until the term is ratified)
Type of Term: http://www.w3.org/2000/01/rdf-schema#Class
Refines:
Status: proposed
Date Issued: 2013-09-25
Date Modified: 2013-09-25
Has Domain:
Has Range:
Refines:
Version: http://purl.obolibrary.org/obo/ENVO_00002297
Replaces:
IsReplaceBy:
Class: http://rs.tdwg.org/dwc/terms/Event
ABCD 2.0.6: not in ABCD (someone please confirm or deny this)
Note that the current official definition for this term is "A feature that is." I sent a request to obo-envo@lists.sourceforge.net to change the definition to be the same as what I put in the proposal above.
Sep 26, 2013 comment #1 gtuco.btuco
Based on initial discussions on tdwg-content, modified the proposal to make a new DwC property term that recommends the ENVO class as the range, as follows:
Term Name: environmentalFeature
Identifier: http://rs.tdwg.org/dwc/terms/environmentalFeature
Namespace: http://rs.tdwg.org/dwc/terms/
Label: Environmental Feature
Definition: A prominent or distinctive aspect, quality, or characteristic of a biome. Recommended best practice is to use a controlled vocabulary such as defined by the environmental feature class of the Environment Ontology (ENVO).
Comment: Examples: "meadow", "http://purl.obolibrary.org/obo/ENVO_00000108". For discussion see https://code.google.com/p/darwincore/wiki/Event (there will be no further documentation here until the term is ratified)
Type of Term: http://www.w3.org/1999/02/22-rdf-syntax-ns#Property
Refines:
Status: proposed
Date Issued: 2013-09-26
Date Modified: 2013-09-26
Has Domain:
Has Range:
Refines:
Version: environmentalFeature-2013-09-26
Replaces:
IsReplaceBy:
Class: http://rs.tdwg.org/dwc/terms/Event
ABCD 2.0.6: not in ABCD (someone please confirm or deny this)
|
process
|
environmentalfeature was new term recommendation submitter john wieczorek on behalf of the may gbif hackathon workshop on darwin core and sample data justification see meeting report gbif hackathon workshop on darwin core and sample data may at term name environmental feature identifier namespace label environmental feature definition a prominent or distinctive aspect quality or characteristic of a biome comment examples meadow for discussion see there will be no further documentation here until the term is ratified type of term refines status proposed date issued date modified has domain has range refines version replaces isreplaceby class abcd not in abcd someone please confirm or deny this note that the current official definition for this term is a feature that is i sent a request to obo envo lists sourceforge net to change the definition to be the same as what i put in the proposal above sep comment gtuco btuco based on initial discussions on tdwg content modified the proposal to make a new dwc property term that recommends the envo class as the range as follows term name environmentalfeature identifier namespace label environmental feature definition a prominent or distinctive aspect quality or characteristic of a biome recommended best practice is to use a controlled vocabulary such as defined by the environmental feature class of the environment ontology envo comment examples meadow for discussion see there will be no further documentation here until the term is ratified type of term refines status proposed date issued date modified has domain has range refines version environmentalfeature replaces isreplaceby class abcd not in abcd someone please confirm or deny this
| 1
|
22,482
| 31,394,060,180
|
IssuesEvent
|
2023-08-26 18:00:01
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] Column not found errors when joining cards
|
.Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
We have two failing E2E tests on the [joins FE branch](https://github.com/metabase/metabase/pull/32912) when a BE now returns an error when running a query with joined cards:
```
Column "Question 5 - Products โ Created At: Month.CREATED_AT" not found;
SQL statement: ...
```
**Failing tests**
⚠️ Please use the [joins FE branch](https://github.com/metabase/metabase/pull/32912) to reproduce the error
- [should join two saved questions with the same implicit/explicit grouped field (metabase#18512)](https://github.com/metabase/metabase/blob/master/e2e/test/scenarios/joins/reproductions/18512-cannot-join-two-saved-questions-with-same-implicit-explicit-grouped-field.cy.spec.js)
- [should join saved questions that themselves contain joins (metabase#12928)](https://github.com/metabase/metabase/blob/master/e2e/test/scenarios/joins/joins.cy.spec.js)
- [shouldn't drop joins using MLv2 format (metabase#31769)](https://github.com/metabase/metabase/blob/master/e2e/test/scenarios/joins/reproductions/31769-mlv2-join-dropped.cy.spec.js)
|
1.0
|
[MLv2] Column not found errors when joining cards - We have two failing E2E tests on the [joins FE branch](https://github.com/metabase/metabase/pull/32912) when a BE now returns an error when running a query with joined cards:
```
Column "Question 5 - Products โ Created At: Month.CREATED_AT" not found;
SQL statement: ...
```
**Failing tests**
⚠️ Please use the [joins FE branch](https://github.com/metabase/metabase/pull/32912) to reproduce the error
- [should join two saved questions with the same implicit/explicit grouped field (metabase#18512)](https://github.com/metabase/metabase/blob/master/e2e/test/scenarios/joins/reproductions/18512-cannot-join-two-saved-questions-with-same-implicit-explicit-grouped-field.cy.spec.js)
- [should join saved questions that themselves contain joins (metabase#12928)](https://github.com/metabase/metabase/blob/master/e2e/test/scenarios/joins/joins.cy.spec.js)
- [shouldn't drop joins using MLv2 format (metabase#31769)](https://github.com/metabase/metabase/blob/master/e2e/test/scenarios/joins/reproductions/31769-mlv2-join-dropped.cy.spec.js)
|
process
|
column not found errors when joining cards we have two failing tests on the when a be now returns an error when running a query with joined cards column question products → created at month created at not found sql statement failing tests ⚠️ please use the to reproduce the error
| 1
|
411,371
| 27,820,151,327
|
IssuesEvent
|
2023-03-19 05:40:50
|
atomic-works/github-repository-template
|
https://api.github.com/repos/atomic-works/github-repository-template
|
closed
|
Create Issue Templates.
|
documentation enhancement
|
Create templates for new issues, and require templates to be used. The new templates are:
- Enhancement Request
- Fix Request
- Refactor
These will have different fields. Namely, `Fix Request` and `Refactor` will require links to their parent issues.
Blank issues will be disabled in `.github/ISSUE_TEMPLATE/config.yml` by setting `blank_issues_enabled: false`.
|
1.0
|
Create Issue Templates. - Create templates for new issues, and require templates to be used. The new templates are:
- Enhancement Request
- Fix Request
- Refactor
These will have different fields. Namely, `Fix Request` and `Refactor` will require links to their parent issues.
Blank issues will be disabled in `.github/ISSUE_TEMPLATE/config.yml` by setting `blank_issues_enabled: false`.
|
non_process
|
create issue templates create templates for new issues and require templates to be used the new templates are enhancement request fix request refactor these will have different fields namely fix request and refactor will require links to their parent issues to disable blank issues from being created blank issues will be disabled in github issue template config yml where the setting blank issues enabled false
| 0
|
2,705
| 5,564,849,128
|
IssuesEvent
|
2017-03-26 08:13:56
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
reopened
|
Put checkbox column at last
|
inprocess
|
I would like to move the checkbox column so that it is the last column in the table, just like this:

Anyone has suggestions on these?
- how to give a title for the checkbox column?
- how to use circular checkbox with background?
- how to move the checkbox to last column?
|
1.0
|
Put checkbox column at last - I would like to move the checkbox column so that it is the last column in the table, just like this:

Anyone has suggestions on these?
- how to give a title for the checkbox column?
- how to use circular checkbox with background?
- how to move the checkbox to last column?
|
process
|
put checkbox column at last i would like to customise the checkbox column to the last column in the table just like this anyone has suggestions on these how to give a title for the checkbox column how to use circular checkbox with background how to move the checkbox to last column
| 1
|
333,094
| 10,115,482,712
|
IssuesEvent
|
2019-07-30 21:55:31
|
opendatakit/collect
|
https://api.github.com/repos/opendatakit/collect
|
closed
|
Provide proper attribution for map tiles
|
high priority in progress
|
Collect makes available beautiful basemaps from a number of free web services. I just did some licensing due diligence and realized that we violate several:
- Carto: the [terms of service](https://drive.google.com/file/d/0B3OBExqwT6KJNHp3U3VUamx6U1U/view) state: "When using the Services, you must provide attribution to both OpenStreetMap and CARTO, as described at https://carto.com/attributions." It looks like it should be "© OpenStreetMap contributors, © CARTO"
- Stamen: from [the maps service landing page](http://maps.stamen.com/#toner), the attribution should be "Map tiles by <a href="http://stamen.com">Stamen Design</a>, under <a href="http://creativecommons.org/licenses/by/3.0">CC BY 3.0</a>. Data by <a href="http://openstreetmap.org">OpenStreetMap</a>, under <a href="http://www.openstreetmap.org/copyright">ODbL</a>."
- USGS: the [terms of service](https://www.usgs.gov/faqs/what-are-terms-uselicensing-map-services-and-data-national-map?qt-news_science_products=0#qt-news_science_products) state: "request that the following acknowledgment statement of the originating agency be included in products and data derived from our map services when citing, copying, or reprinting: "Map services and data available from U.S. Geological Survey, National Geospatial Program.""
|
1.0
|
Provide proper attribution for map tiles - Collect makes available beautiful basemaps from a number of free web services. I just did some licensing due diligence and realized that we violate several:
- Carto: the [terms of service](https://drive.google.com/file/d/0B3OBExqwT6KJNHp3U3VUamx6U1U/view) state: "When using the Services, you must provide attribution to both OpenStreetMap and CARTO, as described at https://carto.com/attributions." It looks like it should be "© OpenStreetMap contributors, © CARTO"
- Stamen: from [the maps service landing page](http://maps.stamen.com/#toner), the attribution should be "Map tiles by <a href="http://stamen.com">Stamen Design</a>, under <a href="http://creativecommons.org/licenses/by/3.0">CC BY 3.0</a>. Data by <a href="http://openstreetmap.org">OpenStreetMap</a>, under <a href="http://www.openstreetmap.org/copyright">ODbL</a>."
- USGS: the [terms of service](https://www.usgs.gov/faqs/what-are-terms-uselicensing-map-services-and-data-national-map?qt-news_science_products=0#qt-news_science_products) state: "request that the following acknowledgment statement of the originating agency be included in products and data derived from our map services when citing, copying, or reprinting: "Map services and data available from U.S. Geological Survey, National Geospatial Program.""
|
non_process
|
provide proper attribution for map tiles collect makes available beautiful basemaps from a number of free web services i just did some licensing due diligence and realized that we violate several carto the state when using the services you must provide attribution to both openstreetmap and carto as described at it looks like it should be © openstreetmap contributors © carto stamen from the attribution should be map tiles by under data by a href under a href usgs the state request that the following acknowledgment statement of the originating agency be included in products and data derived from our map services when citing copying or reprinting map services and data available from u s geological survey national geospatial program
| 0
|
12,652
| 15,024,348,143
|
IssuesEvent
|
2021-02-01 19:30:24
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Typo in article "Automation account authentication overview"
|
Pri2 automation/svc process-automation/subsvc
|
Azure Run As account: Allows you to **manages** Azure resources based on the Azure Resource Manager deployment and management service for Azure.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8721e209-24ce-2170-6caa-ed12a7060080
* Version Independent ID: ac13f91d-460c-cbe9-4778-50d20765b252
* Content: [Azure Automation account authentication overview](https://docs.microsoft.com/en-us/azure/automation/automation-security-overview)
* Content Source: [articles/automation/automation-security-overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-security-overview.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
Typo in article "Automation account authentication overview" - Azure Run As account: Allows you to **manages** Azure resources based on the Azure Resource Manager deployment and management service for Azure.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8721e209-24ce-2170-6caa-ed12a7060080
* Version Independent ID: ac13f91d-460c-cbe9-4778-50d20765b252
* Content: [Azure Automation account authentication overview](https://docs.microsoft.com/en-us/azure/automation/automation-security-overview)
* Content Source: [articles/automation/automation-security-overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/automation-security-overview.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
typo in article automation account authentication overview azure run as account allows you to manages azure resources based on the azure resource manager deployment and management service for azure document details do not edit this section it is required for docs microsoft com github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
165,270
| 13,996,456,866
|
IssuesEvent
|
2020-10-28 05:57:27
|
gardener/machine-controller-manager
|
https://api.github.com/repos/gardener/machine-controller-manager
|
closed
|
Add documentation for new-major features
|
area/documentation component/mcm lifecycle/stale priority/critical
|
We should add documentation for the following topics:
1. Working of Safety-controller
2. Working of Machine-deployment with regards to priorities of Machine.
- the feature we implemented for autoscaler integration.
3. Notable features in README at root location.
- Rolling update, rollback, pause, autoscaler integration, health-check.
4. Working of base controllers
- Behavior of machine/set/deployment controllers in different scenarios.
- Maybe the way autoscaler Q&A looks like:
https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md
5. Working of bootstrap-token mechanism with Gardener at the time of machine-creation.
- This may fit-in gardener repository rather.
This should help larger audience to understand the internals/working of mcm, also help us re-directing many of the queries here.
|
1.0
|
Add documentation for new-major features - We should add documentation for the following topics:
1. Working of Safety-controller
2. Working of Machine-deployment with regards to priorities of Machine.
- the feature we implemented for autoscaler integration.
3. Notable features in README at root location.
- Rolling update, rollback, pause, autoscaler integration, health-check.
4. Working of base controllers
- Behavior of machine/set/deployment controllers in different scenarios.
- Maybe the way autoscaler Q&A looks like:
https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md
5. Working of bootstrap-token mechanism with Gardener at the time of machine-creation.
- This may fit-in gardener repository rather.
This should help larger audience to understand the internals/working of mcm, also help us re-directing many of the queries here.
|
non_process
|
add documentation for new major features we should add documentation for the following topics working of safety controller working of machine deployment with regards to priorities of machine the feature we implemented for autoscaler integration notable features in readme at root location rolling update rollback pause autoscaler integration health check working of base controllers behavior of machine set deployment controllers in different scenarios maybe the way autoscaler q a looks like working of bootstrap token mechanism with gardener at the time of machine creation this may fit in gardener repository rather this should help larger audience to understand the internals working of mcm also help us re directing many of the queries here
| 0
|
312,667
| 9,551,292,154
|
IssuesEvent
|
2019-05-02 14:09:31
|
linkerd/linkerd2
|
https://api.github.com/repos/linkerd/linkerd2
|
closed
|
GKE Private Clusters Cannot Proxy
|
area/cli bug good first issue priority/P1
|
## Bug Report
GKE Private Clusters cannot proxy requests via `kubectl proxy` from the looks of it. This breaks `linkerd dashboard` command and the `linkerd check` command. I don't think this is a linkerd issue after looking into it further. I cannot proxy any services on GKE Private Clusters.
### What is the issue?
`linkerd dashboard`
`linkerd check`
both of these fail, because the proxy times out.
### How can it be reproduced?
Create a GKE private cluster and install linkerd.
### Logs, error output, etc
(If the output is long, please create a [gist](https://gist.github.com/) and
paste the link here.)
#### `linkerd check` output
```text
NYJKurzMBP:linkerd2 joshua.kurz$ linkerd check --verbose
kubernetes-api: can initialize the client..................................[ok]
kubernetes-api: can query the Kubernetes API...............................[ok]
kubernetes-api: is running the minimum Kubernetes API version..............[ok]
linkerd-api: control plane namespace exists................................[ok]
linkerd-api: control plane pods are ready..................................[ok]
DEBU[0000] Expecting API to be served over [https://35.237.232.208/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/]
linkerd-api: can initialize the client.....................................[ok]
DEBU[0000] Making gRPC-over-HTTP call to [https://35.237.232.208/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/SelfCheck] []
DEBU[0005] Error invoking [https://35.237.232.208/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/SelfCheck]: Post https://35.237.232.208/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/SelfCheck: context deadline exceeded
linkerd-api: can query the control plane API...............................[FAIL] -- Post https://35.237.232.208/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/SelfCheck: context deadline exceeded
```
### Environment
- Kubernetes Version:
```
kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.7-gke.11", GitCommit:"fa90543563c9cfafca69128ce8cd9ecd5941940f", GitTreeState:"clean", BuildDate:"2018-11-08T20:22:21Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
```
- Cluster Environment: GKE
- Host OS:
- Linkerd version:
```
NYJKurzMBP:linkerd2 joshua.kurz$ linkerd version
Client version: stable-2.1.0
```
### Possible solution
Thinking we need to dig into what is going on in GKE. It really seems like more of a GKE issue than linkerd, but opening here for visibility and curious what you all think a good solution would be or if you know how to raise awareness for this a little more.
### Additional context
@sudermanjr mentioned this issue here as well https://github.com/linkerd/linkerd2/issues/1696
|
1.0
|
GKE Private Clusters Cannot Proxy - ## Bug Report
GKE Private Clusters cannot proxy requests via `kubectl proxy` from the looks of it. This breaks `linkerd dashboard` command and the `linkerd check` command. I don't think this is a linkerd issue after looking into it further. I cannot proxy any services on GKE Private Clusters.
### What is the issue?
`linkerd dashboard`
`linkerd check`
both of these fail, because the proxy times out.
### How can it be reproduced?
Create a GKE private cluster and install linkerd.
### Logs, error output, etc
(If the output is long, please create a [gist](https://gist.github.com/) and
paste the link here.)
#### `linkerd check` output
```text
NYJKurzMBP:linkerd2 joshua.kurz$ linkerd check --verbose
kubernetes-api: can initialize the client..................................[ok]
kubernetes-api: can query the Kubernetes API...............................[ok]
kubernetes-api: is running the minimum Kubernetes API version..............[ok]
linkerd-api: control plane namespace exists................................[ok]
linkerd-api: control plane pods are ready..................................[ok]
DEBU[0000] Expecting API to be served over [https://35.237.232.208/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/]
linkerd-api: can initialize the client.....................................[ok]
DEBU[0000] Making gRPC-over-HTTP call to [https://35.237.232.208/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/SelfCheck] []
DEBU[0005] Error invoking [https://35.237.232.208/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/SelfCheck]: Post https://35.237.232.208/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/SelfCheck: context deadline exceeded
linkerd-api: can query the control plane API...............................[FAIL] -- Post https://35.237.232.208/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/SelfCheck: context deadline exceeded
```
### Environment
- Kubernetes Version:
```
kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.7-gke.11", GitCommit:"fa90543563c9cfafca69128ce8cd9ecd5941940f", GitTreeState:"clean", BuildDate:"2018-11-08T20:22:21Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
```
- Cluster Environment: GKE
- Host OS:
- Linkerd version:
```
NYJKurzMBP:linkerd2 joshua.kurz$ linkerd version
Client version: stable-2.1.0
```
### Possible solution
Thinking we need to dig into what is going on in GKE. It really seems like more of a GKE issue than linkerd, but opening here for visibility and curious what you all think a good solution would be or if you know how to raise awareness for this a little more.
### Additional context
@sudermanjr mentioned this issue here as well https://github.com/linkerd/linkerd2/issues/1696
|
non_process
|
gke private clusters cannot proxy bug report gke private clusters cannot proxy requests via kubectl proxy from the looks of it this breaks linkerd dashboard command and the linkerd check command i don t think this is a linkerd issue after looking into it further i cannot proxy any services on gke private clusters what is the issue linkerd dashboard linkerd check both of these fail because the proxy times out how can it be reproduced create a gke private cluster and install linkerd logs error output etc if the output is long please create a and paste the link here linkerd check output text nyjkurzmbp joshua kurz linkerd check verbose kubernetes api can initialize the client kubernetes api can query the kubernetes api kubernetes api is running the minimum kubernetes api version linkerd api control plane namespace exists linkerd api control plane pods are ready debu expecting api to be served over linkerd api can initialize the client debu making grpc over http call to debu error invoking post context deadline exceeded linkerd api can query the control plane api post context deadline exceeded environment kubernetes version kubectl version client version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform darwin server version version info major minor gitversion gke gitcommit gittreestate clean builddate goversion compiler gc platform linux cluster environment gke host os linkerd version nyjkurzmbp joshua kurz linkerd version client version stable possible solution thinking we need to dig into what is going on in gke it really seems like more of a gke issue than linkerd but opening here for visibility and curious what you all think a good solution would be or if you know how to raise awareness for this a little more additional context sudermanjr mentioned this issue here as well
| 0
|
174,497
| 21,300,153,476
|
IssuesEvent
|
2022-04-15 01:14:37
|
mgh3326/nuber-eats-frontend
|
https://api.github.com/repos/mgh3326/nuber-eats-frontend
|
opened
|
CVE-2021-43138 (High) detected in async-3.2.0.tgz, async-2.6.3.tgz
|
security vulnerability
|
## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>async-3.2.0.tgz</b>, <b>async-2.6.3.tgz</b></p></summary>
<p>
<details><summary><b>async-3.2.0.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-3.2.0.tgz">https://registry.npmjs.org/async/-/async-3.2.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/getos/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- cypress-6.4.0.tgz (Root Library)
- getos-3.2.1.tgz
- :x: **async-3.2.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-2.6.3.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.3.tgz">https://registry.npmjs.org/async/-/async-2.6.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.2.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- portfinder-1.0.28.tgz
- :x: **async-2.6.3.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2) , which could let a malicious user obtain privileges via the mapValues() method.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (cypress): 6.5.0</p><p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (react-scripts): 5.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-43138 (High) detected in async-3.2.0.tgz, async-2.6.3.tgz - ## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>async-3.2.0.tgz</b>, <b>async-2.6.3.tgz</b></p></summary>
<p>
<details><summary><b>async-3.2.0.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-3.2.0.tgz">https://registry.npmjs.org/async/-/async-3.2.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/getos/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- cypress-6.4.0.tgz (Root Library)
- getos-3.2.1.tgz
- :x: **async-3.2.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-2.6.3.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.3.tgz">https://registry.npmjs.org/async/-/async-2.6.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.2.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- portfinder-1.0.28.tgz
- :x: **async-2.6.3.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2) , which could let a malicious user obtain privileges via the mapValues() method.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (cypress): 6.5.0</p><p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (react-scripts): 5.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in async tgz async tgz cve high severity vulnerability vulnerable libraries async tgz async tgz async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file package json path to vulnerable library node modules getos node modules async package json dependency hierarchy cypress tgz root library getos tgz x async tgz vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file package json path to vulnerable library node modules async package json dependency hierarchy react scripts tgz root library webpack dev server tgz portfinder tgz x async tgz vulnerable library found in base branch master vulnerability details a vulnerability exists in async through fixed in which could let a malicious user obtain privileges via the mapvalues method publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution async direct dependency fix resolution cypress fix resolution async direct dependency fix resolution react scripts step up your open source security game with whitesource
| 0
|
1,450
| 4,020,481,509
|
IssuesEvent
|
2016-05-16 18:34:03
|
cfpb/hmda-platform-ui
|
https://api.github.com/repos/cfpb/hmda-platform-ui
|
closed
|
Sessions and queries
|
Event Processing Persistence question
|
Is the data flow one directional or bi-directional? Before submitting anything (but after authenticating) does a user see information about what they have submitted previously? Or does a user just submit a file again and hope everything works out?
Should we be focused on making multiple submissions as easy as possible? Opt-in early submission both increases data quality and decreases time to publication... should we design the system with this in mind?
|
1.0
|
Sessions and queries - Is the data flow one directional or bi-directional? Before submitting anything (but after authenticating) does a user see information about what they have submitted previously? Or does a user just submit a file again and hope everything works out?
Should we be focused on making multiple submissions as easy as possible? Opt-in early submission both increases data quality and decreases time to publication... should we design the system with this in mind?
|
process
|
sessions and queries is the data flow one directional or bi directional before submitting anything but after authenticating does a user see information about what they have submitted previously or does a user just submit a file again and hope everything works out should we be focused on making multiple submissions as easy as possible opt in early submission both increases data quality and decreases time to publication should we design the system with this in mind
| 1
|
1,844
| 4,647,114,224
|
IssuesEvent
|
2016-10-01 09:09:48
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
caretRender needs more than just direction
|
inprocess
|
It would be really nice to add row and column parameters besides direction to caretRender, similar to the signature that sortFunc has.
I need to initially set the caret direction based on the last sort that user did to their table.
So for example if the list was sorted by name field with desc direction last time user sorted the list, I need to preserve that when they come back.
Cheers
|
1.0
|
caretRender needs more than just direction - It would be really nice to add row and column parameters besides direction to caretRender, similar to the signature that sortFunc has.
I need to initially set the caret direction based on the last sort that user did to their table.
So for example if the list was sorted by name field with desc direction last time user sorted the list, I need to preserve that when they come back.
Cheers
|
process
|
caretrender needs more than just direction it would be really nice to add row and column parameters besides direction to caretrender similar to the signature that sortfunc has i need to initially set the caret direction based on the last sort that user did to their table so for example if the list was sorted by name field with desc direction last time user sorted the list i need to preserve that when they come back cheers
| 1
|
10,748
| 13,541,552,950
|
IssuesEvent
|
2020-09-16 16:01:56
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Unable to connect when connection string contains $
|
bug/2-confirmed kind/regression process/candidate team/typescript
|
**Present only in Prisma 2.7.0**
🚨 Affects all users, not only the ones using nested env variables.
```env
DATABASE_URL="postgres://user:password@server.host:5432/database?ssl=1&schema=schema$1234"
```
<!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
Edge case missing from tests:
If the connection string specifies a schema containing the symbol `$`, only the first component is actually parsed.
Probably not the best idea to use $ in a schema name, Prisma 1 naming is to blame !
## How to reproduce
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
Use the following `.env`
```env
DOTENV_PRISMA_EXPAND_DATABASE_URL="postgres://user:password@server.host:5432/database"
DOTENV_PRISMA_EXPAND_DATABASE_URL_WITH_SCHEMA="${DOTENV_PRISMA_EXPAND_DATABASE_URL}?ssl=1&schema=schema$1234"
```
Error displayed when performing ANY query (`schema` detected instead of `schema$1234`):
> Error: The table `schema.TableNameHere` does not exist in the current database.
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Prisma should connect to the specified server, db and schema.
## Prisma information
N/A
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]-->
- Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]-->
- Node.js version: <!--[Run `node -v` to see your Node.js version]-->
- Prisma version: 2.7.0
<!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]-->
## Notes
The culprit is, most likely, `dotenv-expand`'s parsing (introduced in https://github.com/prisma/prisma/pull/3330).
|
1.0
|
Unable to connect when connection string contains $ - **Present only in Prisma 2.7.0**
🚨 Affects all users, not only the ones using nested env variables.
```env
DATABASE_URL="postgres://user:password@server.host:5432/database?ssl=1&schema=schema$1234"
```
<!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
Edge case missing from tests:
If the connection string specifies a schema containing the symbol `$`, only the first component is actually parsed.
Probably not the best idea to use $ in a schema name, Prisma 1 naming is to blame !
## How to reproduce
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
Use the following `.env`
```env
DOTENV_PRISMA_EXPAND_DATABASE_URL="postgres://user:password@server.host:5432/database"
DOTENV_PRISMA_EXPAND_DATABASE_URL_WITH_SCHEMA="${DOTENV_PRISMA_EXPAND_DATABASE_URL}?ssl=1&schema=schema$1234"
```
Error displayed when performing ANY query (`schema` detected instead of `schema$1234`):
> Error: The table `schema.TableNameHere` does not exist in the current database.
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Prisma should connect to the specified server, db and schema.
## Prisma information
N/A
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]-->
- Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]-->
- Node.js version: <!--[Run `node -v` to see your Node.js version]-->
- Prisma version: 2.7.0
<!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]-->
## Notes
The culprit is, most likely, `dotenv-expand`'s parsing (introduced in https://github.com/prisma/prisma/pull/3330).
|
process
|
unable to connect when connection string contains present only in prisma 🚨 affects all users not only the ones using nested env variables env database url postgres user password server host database ssl schema schema thanks for helping us improve prisma 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description edge case missing from tests if the connection string specifies a schema containing the symbol only the first component is actually parsed probably not the best idea to use in a schema name prisma naming is to blame how to reproduce steps to reproduce the behavior go to change run see error use the following env env dotenv prisma expand database url postgres user password server host database dotenv prisma expand database url with schema dotenv prisma expand database url ssl schema schema error displayed when performing any query schema detected instead of schema error the table schema tablenamehere does not exist in the current database expected behavior prisma should connect to the specified server db and schema prisma information n a environment setup os database node js version prisma version notes the culprit is most likely dotenv expand s parsing introduced in
| 1
|
138,702
| 18,794,547,336
|
IssuesEvent
|
2021-11-08 20:36:29
|
Dima2022/concord
|
https://api.github.com/repos/Dima2022/concord
|
opened
|
CVE-2021-23424 (High) detected in ansi-html-0.0.7.tgz
|
security vulnerability
|
## CVE-2021-23424 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-html-0.0.7.tgz</b></p></summary>
<p>An elegant lib that converts the chalked (ANSI) text to HTML.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz">https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz</a></p>
<p>Path to dependency file: concord/console2/package.json</p>
<p>Path to vulnerable library: concord/console2/node_modules/ansi-html/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
- webpack-dev-server-3.11.1.tgz
- :x: **ansi-html-0.0.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/concord/commit/e91cad7608046892d2a9cfeb280e9cd5350019a2">e91cad7608046892d2a9cfeb280e9cd5350019a2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.
<p>Publish Date: 2021-08-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424>CVE-2021-23424</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ansi-html","packageVersion":"0.0.7","packageFilePaths":["/console2/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:4.0.3;webpack-dev-server:3.11.1;ansi-html:0.0.7","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23424","vulnerabilityDetails":"This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23424 (High) detected in ansi-html-0.0.7.tgz - ## CVE-2021-23424 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-html-0.0.7.tgz</b></p></summary>
<p>An elegant lib that converts the chalked (ANSI) text to HTML.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz">https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz</a></p>
<p>Path to dependency file: concord/console2/package.json</p>
<p>Path to vulnerable library: concord/console2/node_modules/ansi-html/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
- webpack-dev-server-3.11.1.tgz
- :x: **ansi-html-0.0.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/concord/commit/e91cad7608046892d2a9cfeb280e9cd5350019a2">e91cad7608046892d2a9cfeb280e9cd5350019a2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.
<p>Publish Date: 2021-08-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424>CVE-2021-23424</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ansi-html","packageVersion":"0.0.7","packageFilePaths":["/console2/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:4.0.3;webpack-dev-server:3.11.1;ansi-html:0.0.7","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23424","vulnerabilityDetails":"This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in ansi html tgz cve high severity vulnerability vulnerable library ansi html tgz an elegant lib that converts the chalked ansi text to html library home page a href path to dependency file concord package json path to vulnerable library concord node modules ansi html package json dependency hierarchy react scripts tgz root library webpack dev server tgz x ansi html tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects all versions of package ansi html if an attacker provides a malicious string it will get stuck processing the input for an extremely long time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree react scripts webpack dev server ansi html isminimumfixversionavailable false basebranches vulnerabilityidentifier cve vulnerabilitydetails this affects all versions of package ansi html if an attacker provides a malicious string it will get stuck processing the input for an extremely long time vulnerabilityurl
| 0
|
14,341
| 17,368,742,501
|
IssuesEvent
|
2021-07-30 11:00:58
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Color-picker in negadoctor crash Dt (Segmentation fault (core dumped))
|
bug: pending scope: UI scope: image processing
|
In negadoctor, the use of color_picker crashes Dt
1. Load a scan
2. Click on 'color_picker icon'
3. See error
[darktable_bt_BN5Z20.txt](https://github.com/darktable-org/darktable/files/6419759/darktable_bt_BN5Z20.txt)
[terminal04052021.txt](https://github.com/darktable-org/darktable/files/6419760/terminal04052021.txt)
darktable version : 3.5.0~git2052.5a55c65115-1 (Suze)
OS : Linux - kernel 5.3.0-64 lowlatency
Linux - Distro : Ubuntu studio 19.10
Memory : 32Gb
Graphics card : 2x Nvidia Geforce
Graphics driver : 440.100
OpenCL installed : yes
OpenCL activated : yes
Xorg : 1.20.5
Desktop : xfce
GTK+ : 2.24.32-Ubuntu1 + 3.24.12-1Ubuntu1
gcc : 8.4.0-1Ubuntu1-19.10 + 9.2.1-9Ubuntu2
cflags : -g -02
CMAKE_BUILD_TYPE : none
Can you reproduce with another darktable version(s)? no (Dt 3.4.1)
Can you reproduce with a RAW or Jpeg or both? Dng, tif, jpg
Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes
Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? yes
|
1.0
|
Color-picker in negadoctor crash Dt (Segmentation fault (core dumped)) -
In negadoctor, the use of color_picker crashes Dt
1. Load a scan
2. Click on 'color_picker icon'
3. See error
[darktable_bt_BN5Z20.txt](https://github.com/darktable-org/darktable/files/6419759/darktable_bt_BN5Z20.txt)
[terminal04052021.txt](https://github.com/darktable-org/darktable/files/6419760/terminal04052021.txt)
darktable version : 3.5.0~git2052.5a55c65115-1 (Suze)
OS : Linux - kernel 5.3.0-64 lowlatency
Linux - Distro : Ubuntu studio 19.10
Memory : 32Gb
Graphics card : 2x Nvidia Geforce
Graphics driver : 440.100
OpenCL installed : yes
OpenCL activated : yes
Xorg : 1.20.5
Desktop : xfce
GTK+ : 2.24.32-Ubuntu1 + 3.24.12-1Ubuntu1
gcc : 8.4.0-1Ubuntu1-19.10 + 9.2.1-9Ubuntu2
cflags : -g -02
CMAKE_BUILD_TYPE : none
Can you reproduce with another darktable version(s)? no (Dt 3.4.1)
Can you reproduce with a RAW or Jpeg or both? Dng, tif, jpg
Are the steps above reproducible with a fresh edit (i.e. after discarding history)? yes
Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? yes
|
process
|
color picker in negadoctor crash dt segmentation fault core dumped in negadoctor the use of color picker crashes dt load a scan click on color picker icon see error darktable version suze os linux kernel lowlatency linux distro ubuntu studio memory graphics card nvidia geforce graphics driver opencl installed yes opencl activated yes xorg desktop xfce gtk gcc cflags g cmake build type none can you reproduce with another darktable version s no dt can you reproduce with a raw or jpeg or both dng tif jpg are the steps above reproducible with a fresh edit i e after discarding history yes is the issue still present using an empty new config dir e g start darktable with configdir tmp yes
| 1
|
692,528
| 23,738,849,472
|
IssuesEvent
|
2022-08-31 10:32:02
|
renovatebot/renovate
|
https://api.github.com/repos/renovatebot/renovate
|
opened
|
Config Migration branch should be based from the default branch
|
type:bug priority-3-medium status:in-progress
|
### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
latest
### If you're self-hosting Renovate, select which platform you are using.
github.com
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
When committing changes to Migrate the repository config, the new branch is based on the last processed base branch.
In the following example,
default branch is `main`.
`baseBranch` is set to `dev`.
-> therefore the Config Migration is from a branch checkout of `dev` (which is ahead of main) into main which introduces additional commits being part of the PR.
- https://github.com/ladzaretti/migration-base-branch-issue/pull/3
Minimal reproduction:
- https://github.com/ladzaretti/migration-base-branch-issue
### Relevant debug logs
<details><summary>Logs</summary>
```
Copy/paste the relevant log(s) here, between the starting and ending backticks
```
</details>
### Have you created a minimal reproduction repository?
I have linked to a minimal reproduction repository in the bug description
|
1.0
|
Config Migration branch should be based from the default branch - ### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
latest
### If you're self-hosting Renovate, select which platform you are using.
github.com
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
When committing changes to Migrate the repository config, the new branch is based on the last processed base branch.
In the following example,
default branch is `main`.
`baseBranch` is set to `dev`.
-> therefore the Config Migration is from a branch checkout of `dev` (which is ahead of main) into main which introduces additional commits being part of the PR.
- https://github.com/ladzaretti/migration-base-branch-issue/pull/3
Minimal reproduction:
- https://github.com/ladzaretti/migration-base-branch-issue
### Relevant debug logs
<details><summary>Logs</summary>
```
Copy/paste the relevant log(s) here, between the starting and ending backticks
```
</details>
### Have you created a minimal reproduction repository?
I have linked to a minimal reproduction repository in the bug description
|
non_process
|
config migration branch should be based from the default branch how are you running renovate self hosted if you re self hosting renovate tell us what version of renovate you run latest if you re self hosting renovate select which platform you are using github com if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped i never saw this working describe the bug when committing changes to migrate the repository config the new branch is based on the last processed base branch in the following example default branch is main basebranch is set to dev therefore the config migration is from a branch checkout of dev which is ahead of main into main which introduces additional commits being part of the pr minimal reproduction relevant debug logs logs copy paste the relevant log s here between the starting and ending backticks have you created a minimal reproduction repository i have linked to a minimal reproduction repository in the bug description
| 0
|
18,467
| 24,550,130,272
|
IssuesEvent
|
2022-10-12 11:58:05
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] Admin details screen > UI issue
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
Admin details screen > UI issue > Reduce the gap between the search bar and selected apps

|
3.0
|
[PM] Admin details screen > UI issue - Admin details screen > UI issue > Reduce the gap between the search bar and selected apps

|
process
|
admin details screen ui issue admin details screen ui issue reduce the gap between the search bar and selected apps
| 1
|
12,937
| 15,302,124,524
|
IssuesEvent
|
2021-02-24 14:24:34
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Release 4.0 - December 2020
|
P1 release team-XProduct type: process
|
# Status of Bazel 4.0
- Target baseline: 2020-11-12
- Expected release date: 2021-01-18
- [List of release blockers](https://github.com/bazelbuild/bazel/labels/Release%20blocker)
To report a release-blocking bug, please file a bug using the `Release blocker` label, and cc me.
Task list:
- [x] Update GitHub issues for incompatible changes
- [x] Pick release baseline: 37a429ad12b4c9e6a62dbae4881a1ff03b81ab40
- [x] Create release candidate: https://releases.bazel.build/4.0.0/rc10/
- [x] Check downstream projects:
- [x] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit)
- [x] Send for review the release announcement PR:
- [x] Push the release, notify package maintainers:
- [x] Update the documentation
- [x] Push the blog post
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
1.0
|
Release 4.0 - December 2020 - # Status of Bazel 4.0
- Target baseline: 2020-11-12
- Expected release date: 2021-01-18
- [List of release blockers](https://github.com/bazelbuild/bazel/labels/Release%20blocker)
To report a release-blocking bug, please file a bug using the `Release blocker` label, and cc me.
Task list:
- [x] Update GitHub issues for incompatible changes
- [x] Pick release baseline: 37a429ad12b4c9e6a62dbae4881a1ff03b81ab40
- [x] Create release candidate: https://releases.bazel.build/4.0.0/rc10/
- [x] Check downstream projects:
- [x] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit)
- [x] Send for review the release announcement PR:
- [x] Push the release, notify package maintainers:
- [x] Update the documentation
- [x] Push the blog post
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
process
|
release december status of bazel target baseline expected release date to report a release blocking bug please file a bug using the release blocker label and cc me task list update github issues for incompatible changes pick release baseline create release candidate check downstream projects send for review the release announcement pr push the release notify package maintainers update the documentation push the blog post update the
| 1
|
19,195
| 10,334,442,290
|
IssuesEvent
|
2019-09-03 08:22:17
|
mchernolevskyi/photoTrivia
|
https://api.github.com/repos/mchernolevskyi/photoTrivia
|
opened
|
CVE-2019-12384 (Medium) detected in jackson-databind-2.9.4.jar
|
security vulnerability
|
## CVE-2019-12384 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /photoTrivia/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.1.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.0.0.BUILD-SNAPSHOT.jar
- :x: **jackson-databind-2.9.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mchernolevskyi/photoTrivia/commit/8c6b52f5fb962a6e6707ab8f1a397860ab036422">8c6b52f5fb962a6e6707ab8f1a397860ab036422</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.9.1 might allow attackers to have a variety of impacts by leveraging failure to block the logback-core class from polymorphic deserialization. Depending on the classpath content, remote code execution may be possible.
<p>Publish Date: 2019-06-24
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12384>CVE-2019-12384</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12384">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12384</a></p>
<p>Release Date: 2019-08-12</p>
<p>Fix Resolution: 2.9.9.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-12384 (Medium) detected in jackson-databind-2.9.4.jar - ## CVE-2019-12384 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /photoTrivia/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.1.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.0.0.BUILD-SNAPSHOT.jar
- :x: **jackson-databind-2.9.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mchernolevskyi/photoTrivia/commit/8c6b52f5fb962a6e6707ab8f1a397860ab036422">8c6b52f5fb962a6e6707ab8f1a397860ab036422</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.9.1 might allow attackers to have a variety of impacts by leveraging failure to block the logback-core class from polymorphic deserialization. Depending on the classpath content, remote code execution may be possible.
<p>Publish Date: 2019-06-24
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12384>CVE-2019-12384</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12384">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12384</a></p>
<p>Release Date: 2019-08-12</p>
<p>Fix Resolution: 2.9.9.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file phototrivia pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json build snapshot jar x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before might allow attackers to have a variety of impacts by leveraging failure to block the logback core class from polymorphic deserialization depending on the classpath content remote code execution may be possible publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.