Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
15,001
| 18,682,582,620
|
IssuesEvent
|
2021-11-01 08:15:12
|
Ghost-chu/QuickShop-Reremake
|
https://api.github.com/repos/Ghost-chu/QuickShop-Reremake
|
closed
|
[BUG] SPANISH LANG BROKEN
|
Bug Priority:Major In Process
|
### Description
I update to 5.0.0.0. i restart my server and now all signs change languaje to english. (before use spanish like default)
Also i put spanish manually but traslations are broken(? some traslasted words now are in english
EXAMPLE:

Title now say "adminshop" instead "tienda admin" and price is "[price] each" instead "[price] la unidad"
### Steps to reproduce
Update
Use default settings
BUG!
### Expected Behaviour
1. Use spanish like default (like before)
2. Use before traslated words
### Screenshots
COMPARATION (BEFORE-AFTER)


### `/qs paste` URL
https://paste.helpch.at/reyutuhuvi
### Additional Context
_No response_
|
1.0
|
[BUG] SPANISH LANG BROKEN - ### Description
I update to 5.0.0.0. i restart my server and now all signs change languaje to english. (before use spanish like default)
Also i put spanish manually but traslations are broken(? some traslasted words now are in english
EXAMPLE:

Title now say "adminshop" instead "tienda admin" and price is "[price] each" instead "[price] la unidad"
### Steps to reproduce
Update
Use default settings
BUG!
### Expected Behaviour
1. Use spanish like default (like before)
2. Use before traslated words
### Screenshots
COMPARATION (BEFORE-AFTER)


### `/qs paste` URL
https://paste.helpch.at/reyutuhuvi
### Additional Context
_No response_
|
process
|
spanish lang broken description i update to i restart my server and now all signs change languaje to english before use spanish like default also i put spanish manually but traslations are broken some traslasted words now are in english example title now say adminshop instead tienda admin and price is each instead la unidad steps to reproduce update use default settings bug expected behaviour use spanish like default like before use before traslated words screenshots comparation before after qs paste url additional context no response
| 1
|
392,332
| 26,935,075,577
|
IssuesEvent
|
2023-02-07 19:56:01
|
jenkins-infra/helpdesk
|
https://api.github.com/repos/jenkins-infra/helpdesk
|
opened
|
Redirect Chinese pages to English pages and shutdown the Chinese site
|
documentation triage
|
### Describe your use-case which is not covered by existing documentation.
The [Chinese language web site](https://www.jenkins.io/zh/) for Jenkins is outdated, has no maintainers, and is receiving bug reports when users follow the instructions on that site.
The Jenkins governance board meeting Feb 6, 2023 approved that the Chinese site be replaced with redirects to the English language pages. If Chinese translation contributors become involved again, we can consider reinstating the Chinese site.
### Reference any relevant documentation, other materials or issues/pull requests that can be used for inspiration.
This will need more detailed planning before it is implemented. It is not enough to disable the Chinese pages. We need to assure that Chinese users are taken to the English language pages.
|
1.0
|
Redirect Chinese pages to English pages and shutdown the Chinese site - ### Describe your use-case which is not covered by existing documentation.
The [Chinese language web site](https://www.jenkins.io/zh/) for Jenkins is outdated, has no maintainers, and is receiving bug reports when users follow the instructions on that site.
The Jenkins governance board meeting Feb 6, 2023 approved that the Chinese site be replaced with redirects to the English language pages. If Chinese translation contributors become involved again, we can consider reinstating the Chinese site.
### Reference any relevant documentation, other materials or issues/pull requests that can be used for inspiration.
This will need more detailed planning before it is implemented. It is not enough to disable the Chinese pages. We need to assure that Chinese users are taken to the English language pages.
|
non_process
|
redirect chinese pages to english pages and shutdown the chinese site describe your use case which is not covered by existing documentation the for jenkins is outdated has no maintainers and is receiving bug reports when users follow the instructions on that site the jenkins governance board meeting feb approved that the chinese site be replaced with redirects to the english language pages if chinese translation contributors become involved again we can consider reinstating the chinese site reference any relevant documentation other materials or issues pull requests that can be used for inspiration this will need more detailed planning before it is implemented it is not enough to disable the chinese pages we need to assure that chinese users are taken to the english language pages
| 0
|
26,389
| 5,247,012,745
|
IssuesEvent
|
2017-02-01 11:31:27
|
UTNkar/moore
|
https://api.github.com/repos/UTNkar/moore
|
closed
|
Add testing instructions
|
documentation enhancement
|
### Description
Currently nobody knows how to test the system. This is BAD programming practice. Please provide more documentatino
### Steps to Reproduce
1. Look at `README.md`
2. See `#testing`
**Expected behavior:** Find instructions how to run tests
**Actual behavior:** Nothing is shown
|
1.0
|
Add testing instructions - ### Description
Currently nobody knows how to test the system. This is BAD programming practice. Please provide more documentatino
### Steps to Reproduce
1. Look at `README.md`
2. See `#testing`
**Expected behavior:** Find instructions how to run tests
**Actual behavior:** Nothing is shown
|
non_process
|
add testing instructions description currently nobody knows how to test the system this is bad programming practice please provide more documentatino steps to reproduce look at readme md see testing expected behavior find instructions how to run tests actual behavior nothing is shown
| 0
|
9,062
| 12,136,755,519
|
IssuesEvent
|
2020-04-23 14:48:42
|
MHRA/products
|
https://api.github.com/repos/MHRA/products
|
closed
|
Unauthorised request returns wrong error code
|
BUG :bug: EPIC - Auto Batch Process :oncoming_automobile:
|
**Describe the bug**
An unauthorized request to the document-index-updater returns a 500 (internal server error) but should return a 401 (unauthorized). This makes it more difficult for clients to diagnose possible issues with requests, as it surfaces like an issue on the document-index-updater side.
**To Reproduce**
Steps to reproduce the behavior:
1. Perform `curl -L -X GET 'https://doc-index-updater.test.mhra.gov.uk/jobs/f9ec57a6-3d54-43d6-a1ee-924117f2b72b' \
-H 'Content-Type: application/json' \
-H 'Authorization: Basic dXNrcm5hbWU6cGFzc3dvcmQ='`
2. Observe 500 error response
**Expected behavior**
401 response
**Screenshots**
N/A
|
1.0
|
Unauthorised request returns wrong error code - **Describe the bug**
An unauthorized request to the document-index-updater returns a 500 (internal server error) but should return a 401 (unauthorized). This makes it more difficult for clients to diagnose possible issues with requests, as it surfaces like an issue on the document-index-updater side.
**To Reproduce**
Steps to reproduce the behavior:
1. Perform `curl -L -X GET 'https://doc-index-updater.test.mhra.gov.uk/jobs/f9ec57a6-3d54-43d6-a1ee-924117f2b72b' \
-H 'Content-Type: application/json' \
-H 'Authorization: Basic dXNrcm5hbWU6cGFzc3dvcmQ='`
2. Observe 500 error response
**Expected behavior**
401 response
**Screenshots**
N/A
|
process
|
unauthorised request returns wrong error code describe the bug an unauthorized request to the document index updater returns a internal server error but should return a unauthorized this makes it more difficult for clients to diagnose possible issues with requests as it surfaces like an issue on the document index updater side to reproduce steps to reproduce the behavior perform curl l x get h content type application json h authorization basic observe error response expected behavior response screenshots n a
| 1
|
2,202
| 5,040,769,837
|
IssuesEvent
|
2016-12-19 07:34:51
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
opened
|
[subtitles] [eng] « La cause du terrorisme est dans la guerre et l'argent » - Jean-Luc Mélenchon au Parlement européen
|
Language: English Process: [1] Writing in progress
|
# Video title
« La cause du terrorisme est dans la guerre et l'argent » - Jean-Luc Mélenchon au Parlement européen
# URL
https://www.youtube.com/watch?v=A3KM2oOi-Jc&t=5s
# Youtube subtitle language
Anglais
# Duration
3:26
# URL subtitles
https://www.youtube.com/timedtext_editor?tab=captions&ref=player&lang=en&v=A3KM2oOi-Jc&ui=hd&action_mde_edit_form=1&bl=vmp
|
1.0
|
[subtitles] [eng] « La cause du terrorisme est dans la guerre et l'argent » - Jean-Luc Mélenchon au Parlement européen - # Video title
« La cause du terrorisme est dans la guerre et l'argent » - Jean-Luc Mélenchon au Parlement européen
# URL
https://www.youtube.com/watch?v=A3KM2oOi-Jc&t=5s
# Youtube subtitle language
Anglais
# Duration
3:26
# URL subtitles
https://www.youtube.com/timedtext_editor?tab=captions&ref=player&lang=en&v=A3KM2oOi-Jc&ui=hd&action_mde_edit_form=1&bl=vmp
|
process
|
« la cause du terrorisme est dans la guerre et l argent » jean luc mélenchon au parlement européen video title « la cause du terrorisme est dans la guerre et l argent » jean luc mélenchon au parlement européen url youtube subtitle language anglais duration url subtitles
| 1
|
139,591
| 12,875,823,377
|
IssuesEvent
|
2020-07-11 00:55:40
|
mimiframework/Mimi.jl
|
https://api.github.com/repos/mimiframework/Mimi.jl
|
closed
|
finish v1.0.0 docs
|
documentation
|
Documentation required for release of v0.10.0 (and v1.0.0) marked with **TODO**
**1. Home** - a single file
**2. Tutorials** - header
- Tutorials Intro
- Tutorial 1: Install Mimi
- Tutorial 2: Run an Existing Model
- Tutorial 3: Modify an Existing Model
- Tutorial 4: Create a Model
- Tutorial 5: Sensitivity Analysis
**3. How-to Guides** - header
- How-to Guides Intro
- How-to Guide 1: Construct and Run a Model
- How-to Guide 2: View and Explore Model Results
- How-to Guide 3: Conduct Sensitivity Analysis
- How-to Guide 4: Work with Timesteps, Parameters, and Variables
- How-to Guide 5: Port to Mimi v0.5.0
- How-to Guide 6: Port to Mimi v1.0.0 - **TODO** complete this
**4. Advanced How-to Guides** - header
- Advanced How-to Guides Intro
- Build and Init Functions
- Using Datum References
**5. Reference Guides** - header
- Reference Guides Intro
- Mimi API
- Structures 1. Overview - **TODO** review and edit
- Structures 2. Definitions - **TODO** review and edit
- Structures 3. Instances - **TODO** review and edit
**6. FAQ** - a single file
|
1.0
|
finish v1.0.0 docs - Documentation required for release of v0.10.0 (and v1.0.0) marked with **TODO**
**1. Home** - a single file
**2. Tutorials** - header
- Tutorials Intro
- Tutorial 1: Install Mimi
- Tutorial 2: Run an Existing Model
- Tutorial 3: Modify an Existing Model
- Tutorial 4: Create a Model
- Tutorial 5: Sensitivity Analysis
**3. How-to Guides** - header
- How-to Guides Intro
- How-to Guide 1: Construct and Run a Model
- How-to Guide 2: View and Explore Model Results
- How-to Guide 3: Conduct Sensitivity Analysis
- How-to Guide 4: Work with Timesteps, Parameters, and Variables
- How-to Guide 5: Port to Mimi v0.5.0
- How-to Guide 6: Port to Mimi v1.0.0 - **TODO** complete this
**4. Advanced How-to Guides** - header
- Advanced How-to Guides Intro
- Build and Init Functions
- Using Datum References
**5. Reference Guides** - header
- Reference Guides Intro
- Mimi API
- Structures 1. Overview - **TODO** review and edit
- Structures 2. Definitions - **TODO** review and edit
- Structures 3. Instances - **TODO** review and edit
**6. FAQ** - a single file
|
non_process
|
finish docs documentation required for release of and marked with todo home a single file tutorials header tutorials intro tutorial install mimi tutorial run an existing model tutorial modify an existing model tutorial create a model tutorial sensitivity analysis how to guides header how to guides intro how to guide construct and run a model how to guide view and explore model results how to guide conduct sensitivity analysis how to guide work with timesteps parameters and variables how to guide port to mimi how to guide port to mimi todo complete this advanced how to guides header advanced how to guides intro build and init functions using datum references reference guides header reference guides intro mimi api structures overview todo review and edit structures definitions todo review and edit structures instances todo review and edit faq a single file
| 0
|
144,135
| 19,274,740,386
|
IssuesEvent
|
2021-12-10 10:28:37
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
[Security][RFC] Manual reset of login throttle
|
Security RFC
|
### Description
I can see a use case where login throttling is set to longer periods (e.g., 10 minutes), and you want to give support staff the ability to manually reset a fat-fingered user's login throttle instead of telling the person to make a sandwich and wait.
Perhaps something like a ``LoginThrottlingResetListener``?
### Example
_No response_
|
True
|
[Security][RFC] Manual reset of login throttle - ### Description
I can see a use case where login throttling is set to longer periods (e.g., 10 minutes), and you want to give support staff the ability to manually reset a fat-fingered user's login throttle instead of telling the person to make a sandwich and wait.
Perhaps something like a ``LoginThrottlingResetListener``?
### Example
_No response_
|
non_process
|
manual reset of login throttle description i can see a use case where login throttling is set to longer periods e g minutes and you want to give support staff the ability to manually reset a fat fingered user s login throttle instead of telling the person to make a sandwich and wait perhaps something like a loginthrottlingresetlistener example no response
| 0
|
54,963
| 3,071,728,040
|
IssuesEvent
|
2015-08-19 13:45:49
|
pavel-pimenov/flylinkdc-r5xx
|
https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx
|
opened
|
Ошибка в модуле отображения ссылок в чате
|
bug Component-UI imported Priority-Medium Usability
|
_From [dimitrij...@gmail.com](https://code.google.com/u/117085084104156933070/) on November 20, 2010 02:32:35_
При вставке магнета на файл
Аватар_-_Avatar_(Extended_Collector's_Cut)_2009_1080p_BDRip_x264@L4.1_DTS5.1_rus_eng.mkv (см. screenshot-258.png)
в чате только названия файла до ' отображается как ссылка, символ @ заменяется на %40 (см. screenshot-257.png)
FlylinkDC++ ( r500 )-beta45 x64 build 5291
Windows Server 2003 x64
**Attachment:** [screenshot-257.png screenshot-258.png](http://code.google.com/p/flylinkdc/issues/detail?id=224)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=224_
|
1.0
|
Ошибка в модуле отображения ссылок в чате - _From [dimitrij...@gmail.com](https://code.google.com/u/117085084104156933070/) on November 20, 2010 02:32:35_
При вставке магнета на файл
Аватар_-_Avatar_(Extended_Collector's_Cut)_2009_1080p_BDRip_x264@L4.1_DTS5.1_rus_eng.mkv (см. screenshot-258.png)
в чате только названия файла до ' отображается как ссылка, символ @ заменяется на %40 (см. screenshot-257.png)
FlylinkDC++ ( r500 )-beta45 x64 build 5291
Windows Server 2003 x64
**Attachment:** [screenshot-257.png screenshot-258.png](http://code.google.com/p/flylinkdc/issues/detail?id=224)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=224_
|
non_process
|
ошибка в модуле отображения ссылок в чате from on november при вставке магнета на файл аватар avatar extended collector s cut bdrip rus eng mkv см screenshot png в чате только названия файла до отображается как ссылка символ заменяется на см screenshot png flylinkdc build windows server attachment original issue
| 0
|
10,243
| 13,099,405,920
|
IssuesEvent
|
2020-08-03 21:31:59
|
googleapis/java-pubsublite
|
https://api.github.com/repos/googleapis/java-pubsublite
|
closed
|
Enable `clirr` to unblock Maven releases
|
api: pubsublite type: process
|
See https://github.com/googleapis/java-pubsublite/issues/131#issuecomment-665941171. We need to re-enable `clirr` to publish new releases of the library on Maven. Release is stuck at 0.1.7.
https://mvnrepository.com/artifact/com.google.cloud/google-cloud-pubsublite
Here's an example `clirr-ignored-differences.xml` that ignores protos changes.
https://github.com/googleapis/java-asset/blob/master/proto-google-cloud-asset-v1/clirr-ignored-differences.xml
|
1.0
|
Enable `clirr` to unblock Maven releases - See https://github.com/googleapis/java-pubsublite/issues/131#issuecomment-665941171. We need to re-enable `clirr` to publish new releases of the library on Maven. Release is stuck at 0.1.7.
https://mvnrepository.com/artifact/com.google.cloud/google-cloud-pubsublite
Here's an example `clirr-ignored-differences.xml` that ignores protos changes.
https://github.com/googleapis/java-asset/blob/master/proto-google-cloud-asset-v1/clirr-ignored-differences.xml
|
process
|
enable clirr to unblock maven releases see we need to re enable clirr to publish new releases of the library on maven release is stuck at here s an example clirr ignored differences xml that ignores protos changes
| 1
|
1,223
| 3,755,495,222
|
IssuesEvent
|
2016-03-12 18:05:36
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
doc: Event: 'unhandledRejection' code example not borked
|
doc good first contribution process
|
https://nodejs.org/api/process.html#process_event_unhandledrejection
```js
somePromise.then((res) => {
return reportToUser(JSON.parse(res)); // note the typo
}); // no `.catch` or `.then`
```
Someone [made off](https://github.com/jasnell/node/commit/f950904650a33ad9cdb0ecf6eb6cf1df335c14ca) with the typo. When this is put back together I'd suggest updating to something like:
```js
// note the typo (`pasre`)
```
|
1.0
|
doc: Event: 'unhandledRejection' code example not borked - https://nodejs.org/api/process.html#process_event_unhandledrejection
```js
somePromise.then((res) => {
return reportToUser(JSON.parse(res)); // note the typo
}); // no `.catch` or `.then`
```
Someone [made off](https://github.com/jasnell/node/commit/f950904650a33ad9cdb0ecf6eb6cf1df335c14ca) with the typo. When this is put back together I'd suggest updating to something like:
```js
// note the typo (`pasre`)
```
|
process
|
doc event unhandledrejection code example not borked js somepromise then res return reporttouser json parse res note the typo no catch or then someone with the typo when this is put back together i d suggest updating to something like js note the typo pasre
| 1
|
7,286
| 10,435,660,321
|
IssuesEvent
|
2019-09-17 17:47:39
|
usgs/libcomcat
|
https://api.github.com/repos/usgs/libcomcat
|
closed
|
Add libcomcat-code-version to UserAgent for all url requests
|
process
|
This will allow web guys to track how many search requests are coming from libcomcat as opposed to manual or other searches.
|
1.0
|
Add libcomcat-code-version to UserAgent for all url requests - This will allow web guys to track how many search requests are coming from libcomcat as opposed to manual or other searches.
|
process
|
add libcomcat code version to useragent for all url requests this will allow web guys to track how many search requests are coming from libcomcat as opposed to manual or other searches
| 1
|
580,945
| 17,270,453,873
|
IssuesEvent
|
2021-07-22 19:04:49
|
GaloisInc/saw-script
|
https://api.github.com/repos/GaloisInc/saw-script
|
closed
|
NULL-checking a pointer argument yields "Proof failed."
|
error-messages maybe-fixed priority
|
Assume a function that checks whether its pointer argument is NULL:
``` c
uint8_t test(uint8_t* a) {
return a ? 0 : 1;
}
```
Now take a SAW script that proves it always returns 0 with a non-null argument:
```
llvm_verify m "test" [] do {
llvm_ptr "a" (llvm_array 1 (llvm_int 8));
a <- llvm_var "*a" (llvm_array 1 (llvm_int 8));
llvm_return {{ 0:[8] }};
llvm_verify_tactic abc;
};
```
This however returns "Proof failed.":
```
Loading module Cryptol
Loading file "test.saw"
When verifying @test:
Proof of return value failed.
Counterexample:
lss__alloc0: 0
return value
Encountered: 1
Expected: 0
saw: user error (Proof failed.)
```
Adding `llvm_assert_eq "*a" {{ [0:[8]] }};` to fix the input argument to a specific value doesn't help here either. The only way to make this work seems currently to remove the NULL-check.
|
1.0
|
NULL-checking a pointer argument yields "Proof failed." - Assume a function that checks whether its pointer argument is NULL:
``` c
uint8_t test(uint8_t* a) {
return a ? 0 : 1;
}
```
Now take a SAW script that proves it always returns 0 with a non-null argument:
```
llvm_verify m "test" [] do {
llvm_ptr "a" (llvm_array 1 (llvm_int 8));
a <- llvm_var "*a" (llvm_array 1 (llvm_int 8));
llvm_return {{ 0:[8] }};
llvm_verify_tactic abc;
};
```
This however returns "Proof failed.":
```
Loading module Cryptol
Loading file "test.saw"
When verifying @test:
Proof of return value failed.
Counterexample:
lss__alloc0: 0
return value
Encountered: 1
Expected: 0
saw: user error (Proof failed.)
```
Adding `llvm_assert_eq "*a" {{ [0:[8]] }};` to fix the input argument to a specific value doesn't help here either. The only way to make this work seems currently to remove the NULL-check.
|
non_process
|
null checking a pointer argument yields proof failed assume a function that checks whether its pointer argument is null c t test t a return a now take a saw script that proves it always returns with a non null argument llvm verify m test do llvm ptr a llvm array llvm int a llvm var a llvm array llvm int llvm return llvm verify tactic abc this however returns proof failed loading module cryptol loading file test saw when verifying test proof of return value failed counterexample lss return value encountered expected saw user error proof failed adding llvm assert eq a to fix the input argument to a specific value doesn t help here either the only way to make this work seems currently to remove the null check
| 0
|
8,347
| 11,737,405,079
|
IssuesEvent
|
2020-03-11 14:35:06
|
OpenDRR/opendrr-api
|
https://api.github.com/repos/OpenDRR/opendrr-api
|
opened
|
Look into ways to sync PostGIS with Elasticsearch
|
Requirement
|
We need to synchronize PostGIS with Elasticsearch. This need to be done when data in updated in PostGIS. Maybe we look at LogStash?
|
1.0
|
Look into ways to sync PostGIS with Elasticsearch - We need to synchronize PostGIS with Elasticsearch. This need to be done when data in updated in PostGIS. Maybe we look at LogStash?
|
non_process
|
look into ways to sync postgis with elasticsearch we need to synchronize postgis with elasticsearch this need to be done when data in updated in postgis maybe we look at logstash
| 0
|
17,948
| 23,940,604,326
|
IssuesEvent
|
2022-09-11 21:06:13
|
GregTechCEu/gt-ideas
|
https://api.github.com/repos/GregTechCEu/gt-ideas
|
opened
|
Air Distillation Overhaul
|
processing chain
|
## Details
Currently, air distillation in gregtech simply involves cooling some air and distilling it to get some stuff. However I think it can be made more painful. This will outline a more realistic air separation process.
## Products
Main Product: Nitrogen, Oxygen, Argon
Side Product(s): Carbon Dioxide, Water
## Steps
Air is collected with air collector
Air is passed through a molecular sieve to remove water vapor, carbon dioxide, and dust
10 b Air + Molecular Sieve (NC) -> 10 b Filtered Air + 10 mB carbon dioxide + 100 mB water + Small pile of ash (25% chance)
The Air is then cooled
1 b Air -> Vacuum Freezer -> 1 b Liquid Air
The Air is distilled
100 b Liquid Air -> Distillation Tower -> 78 b Liquid Nitrogen + 21 b Liquid Oxygen + 1 b Liquid Argon
The product gases will have to be heated from the liquid phase to a gas phase to be used
## Yield
Per 100b of Air:
78 b Liquid Nitrogen
21 b Liquid Oxygen
1 b Liquid Argon
1 b Water
100 mB carbon dioxide
Some ash
## Sources
https://en.wikipedia.org/wiki/Air_separation#Cryogenic_liquification_process
|
1.0
|
Air Distillation Overhaul - ## Details
Currently, air distillation in gregtech simply involves cooling some air and distilling it to get some stuff. However I think it can be made more painful. This will outline a more realistic air separation process.
## Products
Main Product: Nitrogen, Oxygen, Argon
Side Product(s): Carbon Dioxide, Water
## Steps
Air is collected with air collector
Air is passed through a molecular sieve to remove water vapor, carbon dioxide, and dust
10 b Air + Molecular Sieve (NC) -> 10 b Filtered Air + 10 mB carbon dioxide + 100 mB water + Small pile of ash (25% chance)
The Air is then cooled
1 b Air -> Vacuum Freezer -> 1 b Liquid Air
The Air is distilled
100 b Liquid Air -> Distillation Tower -> 78 b Liquid Nitrogen + 21 b Liquid Oxygen + 1 b Liquid Argon
The product gases will have to be heated from the liquid phase to a gas phase to be used
## Yield
Per 100b of Air:
78 b Liquid Nitrogen
21 b Liquid Oxygen
1 b Liquid Argon
1 b Water
100 mB carbon dioxide
Some ash
## Sources
https://en.wikipedia.org/wiki/Air_separation#Cryogenic_liquification_process
|
process
|
air distillation overhaul details currently air distillation in gregtech simply involves cooling some air and distilling it to get some stuff however i think it can be made more painful this will outline a more realistic air separation process products main product nitrogen oxygen argon side product s carbon dioxide water steps air is collected with air collector air is passed through a molecular sieve to remove water vapor carbon dioxide and dust b air molecular sieve nc b filtered air mb carbon dioxide mb water small pile of ash chance the air is then cooled b air vacuum freezer b liquid air the air is distilled b liquid air distillation tower b liquid nitrogen b liquid oxygen b liquid argon the product gases will have to be heated from the liquid phase to a gas phase to be used yield per of air b liquid nitrogen b liquid oxygen b liquid argon b water mb carbon dioxide some ash sources
| 1
|
172,145
| 14,350,742,017
|
IssuesEvent
|
2020-11-29 22:13:39
|
RoyMagnussen/CottonBox-Data-Centric
|
https://api.github.com/repos/RoyMagnussen/CottonBox-Data-Centric
|
closed
|
[CHANGE REQUEST] User Stories
|
documentation
|
#### Describe the change wanted
Add User Stories to the project to provide people with information about what users of E-Commerce sites would like to see implemented.
#### Proposed solutions
- Add a `User Stories` section within the `UX` section of the `README` file and provide the user stories there.
|
1.0
|
[CHANGE REQUEST] User Stories - #### Describe the change wanted
Add User Stories to the project to provide people with information about what users of E-Commerce sites would like to see implemented.
#### Proposed solutions
- Add a `User Stories` section within the `UX` section of the `README` file and provide the user stories there.
|
non_process
|
user stories describe the change wanted add user stories to the project to provide people with information about what users of e commerce sites would like to see implemented proposed solutions add a user stories section within the ux section of the readme file and provide the user stories there
| 0
|
12,801
| 15,181,037,028
|
IssuesEvent
|
2021-02-15 02:10:11
|
Geonovum/disgeo-arch
|
https://api.github.com/repos/Geonovum/disgeo-arch
|
closed
|
4.2 Functionele lagen in de inrichting: capabilities, functies en componenten?
|
Cosmetisch In Behandeling In behandeling - voorstel processen e.d. Processen Functies Componenten
|
In de alinea boven figuur 7 wordt gesproken over capabilities, clusters van functies en componenten. Onduidelijk is hoe deze termen zich tot elkaar verhouden of dat het synoniemen zijn. Kies één van deze drie, en hanteer die consequent in de rest van het document.
|
2.0
|
4.2 Functionele lagen in de inrichting: capabilities, functies en componenten? - In de alinea boven figuur 7 wordt gesproken over capabilities, clusters van functies en componenten. Onduidelijk is hoe deze termen zich tot elkaar verhouden of dat het synoniemen zijn. Kies één van deze drie, en hanteer die consequent in de rest van het document.
|
process
|
functionele lagen in de inrichting capabilities functies en componenten in de alinea boven figuur wordt gesproken over capabilities clusters van functies en componenten onduidelijk is hoe deze termen zich tot elkaar verhouden of dat het synoniemen zijn kies één van deze drie en hanteer die consequent in de rest van het document
| 1
|
84,674
| 16,534,420,531
|
IssuesEvent
|
2021-05-27 10:04:33
|
CiviWiki/OpenCiviWiki
|
https://api.github.com/repos/CiviWiki/OpenCiviWiki
|
closed
|
Remove features related to legislation
|
code quality dependencies enhancement good first issue help wanted mentoring
|
The legislation feature is currently U.S.-centric, making this project less relevant in other countries. We rely on third-party data sources for information on legislation, which adds a maintenance burden that this project cannot bear. Furthermore, we cannot support adding legislation for other political forms.
Remove code related to legislation as part of simplifying our data model, removing third-party integrations, and making the project useful in an international context.
## Related code
Remove related code from the following sections.
:warning: **Note** Remember to run the [`makemigrations` command](https://docs.djangoproject.com/en/3.2/ref/django-admin/#django-admin-makemigrations) after changing the project models.
### Model(s)
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/models/bill.py
### Accounts model
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/models/account.py#L47
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/models/account.py#L285-L322
### CiviWiki models
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/models/rationale.py#L9
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/models/rationale.py#L3
### Templates
Remove the following files.
https://github.com/CiviWiki/OpenCiviWiki/blob/ff74546f8fdc277c9bc5705f3df0c807c1130c62/project/webapp/templates/partials/account/tabs/my_bills.html
### Data integration code
Remove the following
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/tests/propublica_responses.py
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/propublica.py
### CiviWiki API/design files
https://github.com/CiviWiki/OpenCiviWiki/blob/ff74546f8fdc277c9bc5705f3df0c807c1130c62/docs/design/bill_api.json
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/views/bill.py
### Votes
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/models/vote.py
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/tasks.py#L74-L80
### Management command
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/management/commands/gather_votes_data.py
Delete the `api/management/command/` directory, since it only contains one file.
|
1.0
|
Remove features related to legislation - The legislation feature is currently U.S.-centric, making this project less relevant in other countries. We rely on third-party data sources for information on legislation, which adds a maintenance burden that this project cannot bear. Furthermore, we cannot support adding legislation for other political forms.
Remove code related to legislation as part of simplifying our data model, removing third-party integrations, and making the project useful in an international context.
## Related code
Remove related code from the following sections.
:warning: **Note** Remember to run the [`makemigrations` command](https://docs.djangoproject.com/en/3.2/ref/django-admin/#django-admin-makemigrations) after changing the project models.
### Model(s)
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/models/bill.py
### Accounts model
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/models/account.py#L47
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/models/account.py#L285-L322
### CiviWiki models
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/models/rationale.py#L9
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/models/rationale.py#L3
### Templates
Remove the following files.
https://github.com/CiviWiki/OpenCiviWiki/blob/ff74546f8fdc277c9bc5705f3df0c807c1130c62/project/webapp/templates/partials/account/tabs/my_bills.html
### Data integration code
Remove the following
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/tests/propublica_responses.py
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/propublica.py
### CiviWiki API/design files
https://github.com/CiviWiki/OpenCiviWiki/blob/ff74546f8fdc277c9bc5705f3df0c807c1130c62/docs/design/bill_api.json
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/views/bill.py
### Votes
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/models/vote.py
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/tasks.py#L74-L80
### Management command
https://github.com/CiviWiki/OpenCiviWiki/blob/f152ad5986ebfd25ff60ad5ca7c34563818f1e17/project/api/management/commands/gather_votes_data.py
Delete the `api/management/command/` directory, since it only contains one file.
|
non_process
|
remove features related to legislation the legislation feature is currently u s centric making this project less relevant in other countries we rely on third party data sources for information on legislation which adds a maintenance burden that this project cannot bear furthermore we cannot support adding legislation for other political forms remove code related to legislation as part of simplifying our data model removing third party integrations and making the project useful in an international context related code remove related code from the following sections warning note remember to run the after changing the project models model s accounts model civiwiki models templates remove the following files data integration code remove the following civiwiki api design files votes management command delete the api management command directory since it only contains one file
| 0
|
9,971
| 13,017,474,818
|
IssuesEvent
|
2020-07-26 12:40:47
|
chanmakotoo/memories
|
https://api.github.com/repos/chanmakotoo/memories
|
opened
|
Processingの型
|
Processing
|
| 型 | 説明 |
|--|--|
| char | 文字やユニコードシンボル''で囲む |
| int | 整数 |
| float | 浮動小数点 |
| boolean | 真偽 |
| byte | -127から128を格納できる |
| String | 文字列 |
|
1.0
|
Processingの型 - | 型 | 説明 |
|--|--|
| char | 文字やユニコードシンボル''で囲む |
| int | 整数 |
| float | 浮動小数点 |
| boolean | 真偽 |
| byte | -127から128を格納できる |
| String | 文字列 |
|
process
|
processingの型 型 説明 char 文字やユニコードシンボル で囲む int 整数 float 浮動小数点 boolean 真偽 byte string 文字列
| 1
|
12,592
| 14,992,153,556
|
IssuesEvent
|
2021-01-29 09:27:56
|
AcademySoftwareFoundation/OpenCue
|
https://api.github.com/repos/AcademySoftwareFoundation/OpenCue
|
closed
|
docker files need to pin max pip version for python2
|
process
|
**Describe the process**
The latest version of `pip` (21.0) and `setuptools` (45) has just dropped python 2 support.
As a result building the sandbox environment fails:
```
docker-compose --project-directory . -f sandbox/docker-compose.yml build
```
```
Step 8/25 : RUN python -m pip install --upgrade setuptools
---> Running in 8eb1abc898a2
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/lib/python2.7/site-packages/pip/__main__.py", line 21, in <module>
from pip._internal.cli.main import main as _main
File "/usr/lib/python2.7/site-packages/pip/_internal/cli/main.py", line 60
sys.stderr.write(f"ERROR: {exc}")
^
SyntaxError: invalid syntax
ERROR: Service 'metrics' failed to build : The command '/bin/sh -c python -m pip install --upgrade setuptools' returned a non-zero code: 1
```
We need should pin `pip` < 21 and `setuptools` < 45 during the calls to upgrade these packages in the various Dockerfiles
```
RUN python -m pip install --upgrade 'pip<21'
RUN python -m pip install --upgrade 'setuptools<45'
```
|
1.0
|
docker files need to pin max pip version for python2 - **Describe the process**
The latest version of `pip` (21.0) and `setuptools` (45) has just dropped python 2 support.
As a result building the sandbox environment fails:
```
docker-compose --project-directory . -f sandbox/docker-compose.yml build
```
```
Step 8/25 : RUN python -m pip install --upgrade setuptools
---> Running in 8eb1abc898a2
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/lib/python2.7/site-packages/pip/__main__.py", line 21, in <module>
from pip._internal.cli.main import main as _main
File "/usr/lib/python2.7/site-packages/pip/_internal/cli/main.py", line 60
sys.stderr.write(f"ERROR: {exc}")
^
SyntaxError: invalid syntax
ERROR: Service 'metrics' failed to build : The command '/bin/sh -c python -m pip install --upgrade setuptools' returned a non-zero code: 1
```
We need should pin `pip` < 21 and `setuptools` < 45 during the calls to upgrade these packages in the various Dockerfiles
```
RUN python -m pip install --upgrade 'pip<21'
RUN python -m pip install --upgrade 'setuptools<45'
```
|
process
|
docker files need to pin max pip version for describe the process the latest version of pip and setuptools has just dropped python support as a result building the sandbox environment fails docker compose project directory f sandbox docker compose yml build step run python m pip install upgrade setuptools running in traceback most recent call last file usr runpy py line in run module as main main fname loader pkg name file usr runpy py line in run code exec code in run globals file usr lib site packages pip main py line in from pip internal cli main import main as main file usr lib site packages pip internal cli main py line sys stderr write f error exc syntaxerror invalid syntax error service metrics failed to build the command bin sh c python m pip install upgrade setuptools returned a non zero code we need should pin pip and setuptools during the calls to upgrade these packages in the various dockerfiles run python m pip install upgrade pip run python m pip install upgrade setuptools
| 1
|
58,177
| 16,413,709,299
|
IssuesEvent
|
2021-05-19 01:47:22
|
DependencyTrack/dependency-track
|
https://api.github.com/repos/DependencyTrack/dependency-track
|
closed
|
[Project] Unable to add PURL attribute containing "@" to project
|
defect p2
|
### Current Behavior:
DT Version : 4.2.2
Upload PURL attribute to a project using the web UI.
Attribute used: `pkg:npm/%40angular/animation@12.3.1`
Server will respond with a status code of 400 and response body: `[{"message":"The Package URL (purl) must be a valid URI and conform to https://github.com/package-url/purl-spec","messageTemplate":"The Package URL (purl) must be a valid URI and conform to https://github.com/package-url/purl-spec","path":"purl","invalidValue":"pkg:npm/%40angular/animation@12.3.1"}]`
### Steps to Reproduce:
See above
### Expected Behavior:
I think the problematic character would be the @ character.
This would be perfectly valid input due to how npm does the scoping of packages. This example is also used in the https://github.com/package-url/purl-spec documentation.
### Environment:
- Dependency-Track Version: 4.2.2
- Distribution: [ Docker ]
- BOM Format & Version: NIL
- Database Server: [ H2 ]
- Browser: Chrome
### Additional Details:
|
1.0
|
[Project] Unable to add PURL attribute containing "@" to project - ### Current Behavior:
DT Version : 4.2.2
Upload PURL attribute to a project using the web UI.
Attribute used: `pkg:npm/%40angular/animation@12.3.1`
Server will respond with a status code of 400 and response body: `[{"message":"The Package URL (purl) must be a valid URI and conform to https://github.com/package-url/purl-spec","messageTemplate":"The Package URL (purl) must be a valid URI and conform to https://github.com/package-url/purl-spec","path":"purl","invalidValue":"pkg:npm/%40angular/animation@12.3.1"}]`
### Steps to Reproduce:
See above
### Expected Behavior:
I think the problematic character would be the @ character.
This would be perfectly valid input due to how npm does the scoping of packages. This example is also used in the https://github.com/package-url/purl-spec documentation.
### Environment:
- Dependency-Track Version: 4.2.2
- Distribution: [ Docker ]
- BOM Format & Version: NIL
- Database Server: [ H2 ]
- Browser: Chrome
### Additional Details:
|
non_process
|
unable to add purl attribute containing to project current behavior dt version upload purl attribute to a project using the web ui attribute used pkg npm animation server will respond with a status code of and response body steps to reproduce see above expected behavior i think the problematic character would be the character this would be perfectly valid input due to how npm does the scoping of packages this example is also used in the documentation environment dependency track version distribution bom format version nil database server browser chrome additional details
| 0
|
164,130
| 12,780,222,793
|
IssuesEvent
|
2020-07-01 00:02:50
|
etcd-io/etcd
|
https://api.github.com/repos/etcd-io/etcd
|
closed
|
Improve etcd upgrade/downgrade policy and tests
|
area/doc area/feature area/testing stale
|
We don't have enough coverage on upgrades (none for downgrades). Only test case is upgrade from latest release to master branch https://github.com/coreos/etcd/blob/master/e2e/etcd_release_upgrade_test.go where we stop/restart with new versions of etcd (master branch) in CI.
- Clearly document compatibilities between different versions
- Early terminate (or warning) on unsafe upgrades/downgrades
- Add more test cases (or document)
- What if newer versions of etcd join older versioned cluster, and vice versa?
- What if newer versions of etcd reboots from snapshot fetched from older-versioned etcd cluster?
- https://github.com/coreos/etcd/issues/6457
ref. https://github.com/coreos/etcd/issues/7308
/cc @jpbetz @saranbalaji90
|
1.0
|
Improve etcd upgrade/downgrade policy and tests - We don't have enough coverage on upgrades (none for downgrades). Only test case is upgrade from latest release to master branch https://github.com/coreos/etcd/blob/master/e2e/etcd_release_upgrade_test.go where we stop/restart with new versions of etcd (master branch) in CI.
- Clearly document compatibilities between different versions
- Early terminate (or warning) on unsafe upgrades/downgrades
- Add more test cases (or document)
- What if newer versions of etcd join older versioned cluster, and vice versa?
- What if newer versions of etcd reboots from snapshot fetched from older-versioned etcd cluster?
- https://github.com/coreos/etcd/issues/6457
ref. https://github.com/coreos/etcd/issues/7308
/cc @jpbetz @saranbalaji90
|
non_process
|
improve etcd upgrade downgrade policy and tests we don t have enough coverage on upgrades none for downgrades only test case is upgrade from latest release to master branch where we stop restart with new versions of etcd master branch in ci clearly document compatibilities between different versions early terminate or warning on unsafe upgrades downgrades add more test cases or document what if newer versions of etcd join older versioned cluster and vice versa what if newer versions of etcd reboots from snapshot fetched from older versioned etcd cluster ref cc jpbetz
| 0
|
20,581
| 27,242,270,008
|
IssuesEvent
|
2023-02-21 21:37:54
|
googleapis/google-cloud-node
|
https://api.github.com/repos/googleapis/google-cloud-node
|
closed
|
Website seems to be from archived repo
|
type: process
|
This website https://googleapis.dev/nodejs/run/latest/ points at https://github.com/googleapis/nodejs-run/tree/main/samples which is an archived repository.
|
1.0
|
Website seems to be from archived repo - This website https://googleapis.dev/nodejs/run/latest/ points at https://github.com/googleapis/nodejs-run/tree/main/samples which is an archived repository.
|
process
|
website seems to be from archived repo this website points at which is an archived repository
| 1
|
2,119
| 4,955,812,068
|
IssuesEvent
|
2016-12-01 21:31:22
|
demidovakatya/imaginaryfriend
|
https://api.github.com/repos/demidovakatya/imaginaryfriend
|
closed
|
Puctuation != words
|
enhancement text processing
|
Не надо, чтобы бот запоминал `Молодец)`. Слова нужно чистить от пунктуации.
|
1.0
|
Puctuation != words - Не надо, чтобы бот запоминал `Молодец)`. Слова нужно чистить от пунктуации.
|
process
|
puctuation words не надо чтобы бот запоминал молодец слова нужно чистить от пунктуации
| 1
|
796,165
| 28,100,604,642
|
IssuesEvent
|
2023-03-30 19:10:02
|
cloudflare/cloudflared
|
https://api.github.com/repos/cloudflare/cloudflared
|
closed
|
💡 Getting authenticated Tunnel user
|
Type: Feature Request Priority: Normal
|
**Describe the feature you'd like**
After authenticating a Tunnel user through Zero Trust, I'd be interesting to get the current authenticated user email, maybe as another column of the output from `cloudflared tunnel info XPTO`
|
1.0
|
💡 Getting authenticated Tunnel user - **Describe the feature you'd like**
After authenticating a Tunnel user through Zero Trust, I'd be interesting to get the current authenticated user email, maybe as another column of the output from `cloudflared tunnel info XPTO`
|
non_process
|
💡 getting authenticated tunnel user describe the feature you d like after authenticating a tunnel user through zero trust i d be interesting to get the current authenticated user email maybe as another column of the output from cloudflared tunnel info xpto
| 0
|
80,069
| 15,343,903,087
|
IssuesEvent
|
2021-02-27 22:26:07
|
fwcd/kotlin-language-server
|
https://api.github.com/repos/fwcd/kotlin-language-server
|
opened
|
Store rendered descriptor previews in symbol index
|
code completion enhancement index
|
_(extracted from #268)_
Just like with normal completions, completions for non-imported symbols (i.e. those from the index) should include rendered descriptor previews.
|
1.0
|
Store rendered descriptor previews in symbol index - _(extracted from #268)_
Just like with normal completions, completions for non-imported symbols (i.e. those from the index) should include rendered descriptor previews.
|
non_process
|
store rendered descriptor previews in symbol index extracted from just like with normal completions completions for non imported symbols i e those from the index should include rendered descriptor previews
| 0
|
304,083
| 9,320,852,714
|
IssuesEvent
|
2019-03-27 01:09:44
|
kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines
|
closed
|
Tensorboard not showing historical AUC / Accuracy
|
priority/p1
|
When launching Tensorboard after running the TFX notebook samples, the graphs are not showing the AUC / Accuracy at each execution step of the training algorithm. Only the final result is displayed.
|
1.0
|
Tensorboard not showing historical AUC / Accuracy - When launching Tensorboard after running the TFX notebook samples, the graphs are not showing the AUC / Accuracy at each execution step of the training algorithm. Only the final result is displayed.
|
non_process
|
tensorboard not showing historical auc accuracy when launching tensorboard after running the tfx notebook samples the graphs are not showing the auc accuracy at each execution step of the training algorithm only the final result is displayed
| 0
|
13,630
| 16,240,394,902
|
IssuesEvent
|
2021-05-07 08:50:08
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Error: [libs/sql-schema-describer/src/getters.rs:21:14] called `Result::unwrap()` on an `Err` value: "Getting is_identity from Resultrow ResultRow { columns: [\"table_name\", \"column_name\", \"formatted_type\", \"numeric_precision\", \"numeric_scale\", \"numeric_precision_radix\", \"datetime_precision\", \"data_type\", \"full_data_type\", \"column_default\", \"is_nullable\", \"is_identity\", \"data_type\", \"character_maximum_length\"], values: [Text(Some(\"roles\")), Text(Some(\"id\")), Text(Some(\"uuid\")), Integer(None), Integer(None), Integer(None), Integer(None), Text(Some(\"uuid\")), Text(Some(\"uuid\")), Text(Some(\"gen_random_uuid()\")), Text(Some(\"NO\")), Text(None), Text(Some(\"uuid\")), Integer(None)] } as String failed"
|
bug/1-repro-available kind/bug process/candidate team/migrations
|
<!-- If required, please update the title to be clear and descriptive -->
Command: `prisma introspect`
Version: `2.21.2`
Binary Version: `e421996c87d5f3c8f7eeadd502d4ad402c89464d`
Report: https://prisma-errors.netlify.app/report/13280
OS: `x64 darwin 20.3.0`
JS Stacktrace:
```
Error: [libs/sql-schema-describer/src/getters.rs:21:14] called `Result::unwrap()` on an `Err` value: "Getting is_identity from Resultrow ResultRow { columns: [\"table_name\", \"column_name\", \"formatted_type\", \"numeric_precision\", \"numeric_scale\", \"numeric_precision_radix\", \"datetime_precision\", \"data_type\", \"full_data_type\", \"column_default\", \"is_nullable\", \"is_identity\", \"data_type\", \"character_maximum_length\"], values: [Text(Some(\"roles\")), Text(Some(\"id\")), Text(Some(\"uuid\")), Integer(None), Integer(None), Integer(None), Integer(None), Text(Some(\"uuid\")), Text(Some(\"uuid\")), Text(Some(\"gen_random_uuid()\")), Text(Some(\"NO\")), Text(None), Text(Some(\"uuid\")), Integer(None)] } as String failed"
at ChildProcess.<anonymous> (/Users/husseinjoe/Hussein/Work/MrYum/SourceCode/mr-yum/api/node_modules/prisma/build/index.js:39953:28)
at ChildProcess.emit (events.js:315:20)
at ChildProcess.EventEmitter.emit (domain.js:467:12)
at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)
```
Rust Stacktrace:
```
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new
2: user_facing_errors::Error::new_in_panic_hook
3: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
4: std::panicking::rust_panic_with_hook
5: std::panicking::begin_panic_handler::{{closure}}
6: std::sys_common::backtrace::__rust_end_short_backtrace
7: _rust_begin_unwind
8: core::panicking::panic_fmt
9: core::option::expect_none_failed
10: <quaint::connector::result_set::result_row::ResultRow as sql_schema_describer::getters::Getter>::get_expect_string
11: <sql_schema_describer::postgres::SqlSchemaDescriber as sql_schema_describer::SqlSchemaDescriberBackend>::describe::{{closure}}::{{closure}}
12: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
15: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
16: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
17: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
18: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
19: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
20: introspection_engine::main::{{closure}}
21: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
22: introspection_engine::main
23: std::sys_common::backtrace::__rust_begin_short_backtrace
24: std::rt::lang_start::{{closure}}
25: std::rt::lang_start_internal
26: std::rt::lang_start
```
|
1.0
|
Error: [libs/sql-schema-describer/src/getters.rs:21:14] called `Result::unwrap()` on an `Err` value: "Getting is_identity from Resultrow ResultRow { columns: [\"table_name\", \"column_name\", \"formatted_type\", \"numeric_precision\", \"numeric_scale\", \"numeric_precision_radix\", \"datetime_precision\", \"data_type\", \"full_data_type\", \"column_default\", \"is_nullable\", \"is_identity\", \"data_type\", \"character_maximum_length\"], values: [Text(Some(\"roles\")), Text(Some(\"id\")), Text(Some(\"uuid\")), Integer(None), Integer(None), Integer(None), Integer(None), Text(Some(\"uuid\")), Text(Some(\"uuid\")), Text(Some(\"gen_random_uuid()\")), Text(Some(\"NO\")), Text(None), Text(Some(\"uuid\")), Integer(None)] } as String failed" - <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma introspect`
Version: `2.21.2`
Binary Version: `e421996c87d5f3c8f7eeadd502d4ad402c89464d`
Report: https://prisma-errors.netlify.app/report/13280
OS: `x64 darwin 20.3.0`
JS Stacktrace:
```
Error: [libs/sql-schema-describer/src/getters.rs:21:14] called `Result::unwrap()` on an `Err` value: "Getting is_identity from Resultrow ResultRow { columns: [\"table_name\", \"column_name\", \"formatted_type\", \"numeric_precision\", \"numeric_scale\", \"numeric_precision_radix\", \"datetime_precision\", \"data_type\", \"full_data_type\", \"column_default\", \"is_nullable\", \"is_identity\", \"data_type\", \"character_maximum_length\"], values: [Text(Some(\"roles\")), Text(Some(\"id\")), Text(Some(\"uuid\")), Integer(None), Integer(None), Integer(None), Integer(None), Text(Some(\"uuid\")), Text(Some(\"uuid\")), Text(Some(\"gen_random_uuid()\")), Text(Some(\"NO\")), Text(None), Text(Some(\"uuid\")), Integer(None)] } as String failed"
at ChildProcess.<anonymous> (/Users/husseinjoe/Hussein/Work/MrYum/SourceCode/mr-yum/api/node_modules/prisma/build/index.js:39953:28)
at ChildProcess.emit (events.js:315:20)
at ChildProcess.EventEmitter.emit (domain.js:467:12)
at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)
```
Rust Stacktrace:
```
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new
2: user_facing_errors::Error::new_in_panic_hook
3: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
4: std::panicking::rust_panic_with_hook
5: std::panicking::begin_panic_handler::{{closure}}
6: std::sys_common::backtrace::__rust_end_short_backtrace
7: _rust_begin_unwind
8: core::panicking::panic_fmt
9: core::option::expect_none_failed
10: <quaint::connector::result_set::result_row::ResultRow as sql_schema_describer::getters::Getter>::get_expect_string
11: <sql_schema_describer::postgres::SqlSchemaDescriber as sql_schema_describer::SqlSchemaDescriberBackend>::describe::{{closure}}::{{closure}}
12: <tracing::instrument::Instrumented<T> as core::future::future::Future>::poll
13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
15: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
16: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
17: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
18: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
19: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
20: introspection_engine::main::{{closure}}
21: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
22: introspection_engine::main
23: std::sys_common::backtrace::__rust_begin_short_backtrace
24: std::rt::lang_start::{{closure}}
25: std::rt::lang_start_internal
26: std::rt::lang_start
```
|
process
|
error called result unwrap on an err value getting is identity from resultrow resultrow columns values as string failed command prisma introspect version binary version report os darwin js stacktrace error called result unwrap on an err value getting is identity from resultrow resultrow columns values as string failed at childprocess users husseinjoe hussein work mryum sourcecode mr yum api node modules prisma build index js at childprocess emit events js at childprocess eventemitter emit domain js at process childprocess handle onexit internal child process js rust stacktrace backtrace backtrace trace backtrace capture backtrace new user facing errors error new in panic hook user facing errors panic hook set panic hook closure std panicking rust panic with hook std panicking begin panic handler closure std sys common backtrace rust end short backtrace rust begin unwind core panicking panic fmt core option expect none failed get expect string describe closure closure as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll introspection engine main closure as core future future future poll introspection engine main std sys common backtrace rust begin short backtrace std rt lang start closure std rt lang start internal std rt lang start
| 1
|
84,082
| 15,720,829,459
|
IssuesEvent
|
2021-03-29 01:20:39
|
LalithK90/processManagement
|
https://api.github.com/repos/LalithK90/processManagement
|
opened
|
CVE-2021-25122 (High) detected in tomcat-embed-core-9.0.30.jar
|
security vulnerability
|
## CVE-2021-25122 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.30.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: processManagement/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.4.RELEASE.jar
- :x: **tomcat-embed-core-9.0.30.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When responding to new h2c connection requests, Apache Tomcat versions 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41 and 8.5.0 to 8.5.61 could duplicate request headers and a limited amount of request body from one request to another meaning user A and user B could both see the results of user A's request.
<p>Publish Date: 2021-03-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25122>CVE-2021-25122</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r7b95bc248603360501f18c8eb03bb6001ec0ee3296205b34b07105b7%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r7b95bc248603360501f18c8eb03bb6001ec0ee3296205b34b07105b7%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2021-03-01</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:8.5.62,9.0.42,10.0.2;org.apache.tomcat:tomcat-coyote:8.5.62,9.0.42,10.0.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-25122 (High) detected in tomcat-embed-core-9.0.30.jar - ## CVE-2021-25122 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.30.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: processManagement/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.4.RELEASE.jar
- :x: **tomcat-embed-core-9.0.30.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When responding to new h2c connection requests, Apache Tomcat versions 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41 and 8.5.0 to 8.5.61 could duplicate request headers and a limited amount of request body from one request to another meaning user A and user B could both see the results of user A's request.
<p>Publish Date: 2021-03-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25122>CVE-2021-25122</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r7b95bc248603360501f18c8eb03bb6001ec0ee3296205b34b07105b7%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r7b95bc248603360501f18c8eb03bb6001ec0ee3296205b34b07105b7%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2021-03-01</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:8.5.62,9.0.42,10.0.2;org.apache.tomcat:tomcat-coyote:8.5.62,9.0.42,10.0.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in tomcat embed core jar cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file processmanagement build gradle path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in base branch master vulnerability details when responding to new connection requests apache tomcat versions to to and to could duplicate request headers and a limited amount of request body from one request to another meaning user a and user b could both see the results of user a s request publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core org apache tomcat tomcat coyote step up your open source security game with whitesource
| 0
|
13,234
| 15,705,959,939
|
IssuesEvent
|
2021-03-26 16:48:10
|
emacs-ess/ESS
|
https://api.github.com/repos/emacs-ess/ESS
|
closed
|
C-c C-c on script opens R in new frame, but puts script in new frame's bufffer
|
process:windows
|
Hello,
When I attempt to run R, opening it by `ess-eval-region-or-function-or-paragraph-and-step VIS` on an R script in its own buffer and frame, it creates a second frame and opens R in that frame, as expected by my use of `(setq inferior-ess-own-frame t)` in .init.el file in .emacs.d However, it then proceeds to return the frame it just opened to run R to the buffer with the r script, and I've included what I saw in the ESS buffer below.
```
ess-request-a-process: {beginning}
... request-a-process:
major mode ess-mode; current buff: ButCanItSendAnEmail.R; ess-language: S, ess-dialect: R
..start-process-specific: lang:dialect= S:R, current-buf=ButCanItSendAnEmail.R
(R): ess-dialect=R, buf=ButCanItSendAnEmail.R, start-arg=nil
current-prefix-arg=nil
(inf-ess 1): lang=S, dialect=R, tmp-dialect=R, buf=ButCanItSendAnEmail.R
(inf-ess 1.1): procname=R temp-dialect=R, buf-name=*R*
(inf-ess 2.0) Method #3 start=/Users/bjr/Desktop buf=*R*
(inf-ess 2.1): ess-language=S, ess-dialect=R buf=*R*
(i-ess 1): buf=*R*, lang=S, comint..echo=t, comint..sender=inferior-ess-input-sender,
(i-ess end): buf=*R*, lang=S, comint..echo=t, comint..sender=inferior-ess-input-sender,
(inf-ess 3.0): prog=R, start-args=--no-readline --no-save , echoes=t
Making Process...Buf *R*, :Proc R, :Prog R
:Args= --no-readline --no-save
Start File=nil
(inferior-ess: waiting for process to start (before hook)
(inferior-ess 3): waiting for process after hookload-ESSR cmd:
local({
source('/Applications/Emacs.app/Contents/Resources/etc/ess/ESSR/R/.load.R', local=TRUE) #define load.ESSR
load.ESSR('/Applications/Emacs.app/Contents/Resources/etc/ess/ESSR/R')
})
(R): inferior-ess-language-start=options(STERM='iESS', str.dendrogram.last="'", editor='emacsclient', show.error.locations=TRUE)
... request-a-process: buf=ButCanItSendAnEmail.R
```
|
1.0
|
C-c C-c on script opens R in new frame, but puts script in new frame's bufffer - Hello,
When I attempt to run R, opening it by `ess-eval-region-or-function-or-paragraph-and-step VIS` on an R script in its own buffer and frame, it creates a second frame and opens R in that frame, as expected by my use of `(setq inferior-ess-own-frame t)` in .init.el file in .emacs.d However, it then proceeds to return the frame it just opened to run R to the buffer with the r script, and I've included what I saw in the ESS buffer below.
```
ess-request-a-process: {beginning}
... request-a-process:
major mode ess-mode; current buff: ButCanItSendAnEmail.R; ess-language: S, ess-dialect: R
..start-process-specific: lang:dialect= S:R, current-buf=ButCanItSendAnEmail.R
(R): ess-dialect=R, buf=ButCanItSendAnEmail.R, start-arg=nil
current-prefix-arg=nil
(inf-ess 1): lang=S, dialect=R, tmp-dialect=R, buf=ButCanItSendAnEmail.R
(inf-ess 1.1): procname=R temp-dialect=R, buf-name=*R*
(inf-ess 2.0) Method #3 start=/Users/bjr/Desktop buf=*R*
(inf-ess 2.1): ess-language=S, ess-dialect=R buf=*R*
(i-ess 1): buf=*R*, lang=S, comint..echo=t, comint..sender=inferior-ess-input-sender,
(i-ess end): buf=*R*, lang=S, comint..echo=t, comint..sender=inferior-ess-input-sender,
(inf-ess 3.0): prog=R, start-args=--no-readline --no-save , echoes=t
Making Process...Buf *R*, :Proc R, :Prog R
:Args= --no-readline --no-save
Start File=nil
(inferior-ess: waiting for process to start (before hook)
(inferior-ess 3): waiting for process after hookload-ESSR cmd:
local({
source('/Applications/Emacs.app/Contents/Resources/etc/ess/ESSR/R/.load.R', local=TRUE) #define load.ESSR
load.ESSR('/Applications/Emacs.app/Contents/Resources/etc/ess/ESSR/R')
})
(R): inferior-ess-language-start=options(STERM='iESS', str.dendrogram.last="'", editor='emacsclient', show.error.locations=TRUE)
... request-a-process: buf=ButCanItSendAnEmail.R
```
|
process
|
c c c c on script opens r in new frame but puts script in new frame s bufffer hello when i attempt to run r opening it by ess eval region or function or paragraph and step vis on an r script in its own buffer and frame it creates a second frame and opens r in that frame as expected by my use of setq inferior ess own frame t in init el file in emacs d however it then proceeds to return the frame it just opened to run r to the buffer with the r script and i ve included what i saw in the ess buffer below ess request a process beginning request a process major mode ess mode current buff butcanitsendanemail r ess language s ess dialect r start process specific lang dialect s r current buf butcanitsendanemail r r ess dialect r buf butcanitsendanemail r start arg nil current prefix arg nil inf ess lang s dialect r tmp dialect r buf butcanitsendanemail r inf ess procname r temp dialect r buf name r inf ess method start users bjr desktop buf r inf ess ess language s ess dialect r buf r i ess buf r lang s comint echo t comint sender inferior ess input sender i ess end buf r lang s comint echo t comint sender inferior ess input sender inf ess prog r start args no readline no save echoes t making process buf r proc r prog r args no readline no save start file nil inferior ess waiting for process to start before hook inferior ess waiting for process after hookload essr cmd local source applications emacs app contents resources etc ess essr r load r local true define load essr load essr applications emacs app contents resources etc ess essr r r inferior ess language start options sterm iess str dendrogram last editor emacsclient show error locations true request a process buf butcanitsendanemail r
| 1
|
10,592
| 13,400,944,135
|
IssuesEvent
|
2020-09-03 16:32:17
|
jgraley/inferno-cpp2v
|
https://api.github.com/repos/jgraley/inferno-cpp2v
|
closed
|
Early-out on trivial problems
|
Constraint Processing bug
|
The case of for example `NotMatch(master_coupled_identifier)` causes a trivial `AndRuleEngine` problem. `my_agents` is empty, and root agent is a master boundary agent. It's wasteful to bother the conjectures etc. Also, CSP generation goes wrong: zero constraints should mean zero variables, but we expect one of the (zero) constraints to bring the master boundary agent in as a variable, and this doesn't happen. No need to sweat solving trivial CSPs.
I don't mind creating an instance of `AndRuleEngine` for these, but it should revert to a trivial operation in `Compare()`, basically just a single `SimpleCompare()`, I think. Related: #115.
|
1.0
|
Early-out on trivial problems - The case of for example `NotMatch(master_coupled_identifier)` causes a trivial `AndRuleEngine` problem. `my_agents` is empty, and root agent is a master boundary agent. It's wasteful to bother the conjectures etc. Also, CSP generation goes wrong: zero constraints should mean zero variables, but we expect one of the (zero) constraints to bring the master boundary agent in as a variable, and this doesn't happen. No need to sweat solving trivial CSPs.
I don't mind creating an instance of `AndRuleEngine` for these, but it should revert to a trivial operation in `Compare()`, basically just a single `SimpleCompare()`, I think. Related: #115.
|
process
|
early out on trivial problems the case of for example notmatch master coupled identifier causes a trivial andruleengine problem my agents is empty and root agent is a master boundary agent it s wasteful to bother the conjectures etc also csp generation goes wrong zero constraints should mean zero variables but we expect one of the zero constraints to bring the master boundary agent in as a variable and this doesn t happen no need to sweat solving trivial csps i don t mind creating an instance of andruleengine for these but it should revert to a trivial operation in compare basically just a single simplecompare i think related
| 1
|
16,771
| 4,086,787,003
|
IssuesEvent
|
2016-06-01 07:27:48
|
mantidproject/mantid
|
https://api.github.com/repos/mantidproject/mantid
|
closed
|
Add minimizers documentation with comparison of performance
|
Component: Fitting Priority: High Quality: Documentation
|
Add short documentation for minimizers and tables with summary and details on performance (accuracy and run time) for all minimizers, except FABADA.
Use the test problems from the NIST benchmark, and if they're ready in time some of the test problems from CUTEst. Organize the tables in three blocks, following the NIST categories: lower, average, higher difficulty.
This should be an RST page, included in `concepts`, with a subpage with detailed tables, produced by the scripts / system test from #15952.
|
1.0
|
Add minimizers documentation with comparison of performance - Add short documentation for minimizers and tables with summary and details on performance (accuracy and run time) for all minimizers, except FABADA.
Use the test problems from the NIST benchmark, and if they're ready in time some of the test problems from CUTEst. Organize the tables in three blocks, following the NIST categories: lower, average, higher difficulty.
This should be an RST page, included in `concepts`, with a subpage with detailed tables, produced by the scripts / system test from #15952.
|
non_process
|
add minimizers documentation with comparison of performance add short documentation for minimizers and tables with summary and details on performance accuracy and run time for all minimizers except fabada use the test problems from the nist benchmark and if they re ready in time some of the test problems from cutest organize the tables in three blocks following the nist categories lower average higher difficulty this should be an rst page included in concepts with a subpage with detailed tables produced by the scripts system test from
| 0
|
35,509
| 7,756,333,957
|
IssuesEvent
|
2018-05-31 13:19:37
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
Do not use InternalVisitListener for ORACLE12C dialect
|
C: Functionality P: Medium T: Defect
|
There is an `InternalVisitListener` that collects some column aliases from the `SELECT` clause and makes them available to the `ORDER BY` clause in the same scope, in case `OFFSET` pagination needs to be emulated (see #2080). This is currently being done for:
- DB2
- Oracle
- SQL Data Warehouse
- SQL Server 2008
- Sybase
- Teradata
`VisitListeners` incur some non-trivial overhead on SQL generation and should be avoided if possible. In Oracle 12c, the `InternalVisitListener` is probably not necessary. Perhaps, it can be even replaced by something entirely different anyway.
|
1.0
|
Do not use InternalVisitListener for ORACLE12C dialect - There is an `InternalVisitListener` that collects some column aliases from the `SELECT` clause and makes them available to the `ORDER BY` clause in the same scope, in case `OFFSET` pagination needs to be emulated (see #2080). This is currently being done for:
- DB2
- Oracle
- SQL Data Warehouse
- SQL Server 2008
- Sybase
- Teradata
`VisitListeners` incur some non-trivial overhead on SQL generation and should be avoided if possible. In Oracle 12c, the `InternalVisitListener` is probably not necessary. Perhaps, it can be even replaced by something entirely different anyway.
|
non_process
|
do not use internalvisitlistener for dialect there is an internalvisitlistener that collects some column aliases from the select clause and makes them available to the order by clause in the same scope in case offset pagination needs to be emulated see this is currently being done for oracle sql data warehouse sql server sybase teradata visitlisteners incur some non trivial overhead on sql generation and should be avoided if possible in oracle the internalvisitlistener is probably not necessary perhaps it can be even replaced by something entirely different anyway
| 0
|
12,852
| 15,238,460,741
|
IssuesEvent
|
2021-02-19 01:59:06
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Vector Feature Density
|
Feature Request Feedback Processing stale
|
Author Name: **Alejandro Pareja** (Alejandro Pareja)
Original Redmine Issue: [21249](https://issues.qgis.org/issues/21249)
Redmine category:processing/qgis
---
I suggest a tool for the density of vector features (points, lines or polygons) within a specified distance of each grid cell in a raster. Similar to ArcGIS line density and Whitebox GAT Vector Feature Density and Vector Attribute Gridding.
|
1.0
|
Vector Feature Density - Author Name: **Alejandro Pareja** (Alejandro Pareja)
Original Redmine Issue: [21249](https://issues.qgis.org/issues/21249)
Redmine category:processing/qgis
---
I suggest a tool for the density of vector features (points, lines or polygons) within a specified distance of each grid cell in a raster. Similar to ArcGIS line density and Whitebox GAT Vector Feature Density and Vector Attribute Gridding.
|
process
|
vector feature density author name alejandro pareja alejandro pareja original redmine issue redmine category processing qgis i suggest a tool for the density of vector features points lines or polygons within a specified distance of each grid cell in a raster similar to arcgis line density and whitebox gat vector feature density and vector attribute gridding
| 1
|
21,501
| 29,668,994,493
|
IssuesEvent
|
2023-06-11 06:49:59
|
turt2live/matrix-media-repo
|
https://api.github.com/repos/turt2live/matrix-media-repo
|
opened
|
Refactoring checklist
|
enhancement media import release-blocker media export url previews multi-process datastores files antispam resource waste spec compliance performance transfer admin api gdpr
|
* [ ] Move URL previews to pipeline system
* [ ] Move imports/exports to pipeline system
* [ ] Move actionable admin APIs to pipeline system (transfer, purge, etc)
* [ ] Move remaining admin APIs to new database accessor
* [ ] Make plugins work again
* [ ] Delete dead code
* [ ] Integration tests???
|
1.0
|
Refactoring checklist - * [ ] Move URL previews to pipeline system
* [ ] Move imports/exports to pipeline system
* [ ] Move actionable admin APIs to pipeline system (transfer, purge, etc)
* [ ] Move remaining admin APIs to new database accessor
* [ ] Make plugins work again
* [ ] Delete dead code
* [ ] Integration tests???
|
process
|
refactoring checklist move url previews to pipeline system move imports exports to pipeline system move actionable admin apis to pipeline system transfer purge etc move remaining admin apis to new database accessor make plugins work again delete dead code integration tests
| 1
|
14,649
| 17,774,844,813
|
IssuesEvent
|
2021-08-30 17:50:05
|
open-telemetry/opentelemetry-collector
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector
|
closed
|
Add way to communicate hash calculated by probabilistic sampler
|
area:processor
|
Related to https://github.com/open-telemetry/opentelemetry-collector/pull/469#discussion_r364000874
The idea is to make the probabilistic sampler capable of communicate the hash it calculated for sampled traces.
/cc @SergeyKanzhelev
|
1.0
|
Add way to communicate hash calculated by probabilistic sampler - Related to https://github.com/open-telemetry/opentelemetry-collector/pull/469#discussion_r364000874
The idea is to make the probabilistic sampler capable of communicate the hash it calculated for sampled traces.
/cc @SergeyKanzhelev
|
process
|
add way to communicate hash calculated by probabilistic sampler related to the idea is to make the probabilistic sampler capable of communicate the hash it calculated for sampled traces cc sergeykanzhelev
| 1
|
255,585
| 8,125,746,590
|
IssuesEvent
|
2018-08-16 22:08:12
|
RobRuizR/NeighborHealth
|
https://api.github.com/repos/RobRuizR/NeighborHealth
|
closed
|
Forma para obtener información del cliente
|
high priority
|
Información general sobre las preferencias de viaje del cliente

|
1.0
|
Forma para obtener información del cliente - Información general sobre las preferencias de viaje del cliente

|
non_process
|
forma para obtener información del cliente información general sobre las preferencias de viaje del cliente
| 0
|
11,957
| 14,726,009,043
|
IssuesEvent
|
2021-01-06 06:00:06
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
opened
|
Payment Due Reminders
|
anc-process anp-2 ant-enhancement pl-wish list
|
In GitLab by @kdjstudios on Oct 24, 2016, 24:51
Based on last invoice date and net terms, send a payment due reminder the day following the due date.
|
1.0
|
Payment Due Reminders - In GitLab by @kdjstudios on Oct 24, 2016, 24:51
Based on last invoice date and net terms, send a payment due reminder the day following the due date.
|
process
|
payment due reminders in gitlab by kdjstudios on oct based on last invoice date and net terms send a payment due reminder the day following the due date
| 1
|
27,269
| 4,957,309,603
|
IssuesEvent
|
2016-12-02 03:40:02
|
TNGSB/eWallet
|
https://api.github.com/repos/TNGSB/eWallet
|
closed
|
eWallet_MobileApp_Android (Registration) #09
|
Defect - High (Sev-2)
|
[Defect_Mobile App #09.xlsx](https://github.com/TNGSB/eWallet/files/565973/Defect_Mobile.App.09.xlsx)
Test Description : To validate error message displayed when user input more than 14 characters for "Phone" field
Expected Result : "System should not allow input more than 14 characters and stop input when user reach 14 characters
If user to insert more than 14 characters, system to prompt error message"
Actual Result : System allowed user to key in more than 14 characters
Refer attached document for POT
*Apply to both android & IOS
|
1.0
|
eWallet_MobileApp_Android (Registration) #09 - [Defect_Mobile App #09.xlsx](https://github.com/TNGSB/eWallet/files/565973/Defect_Mobile.App.09.xlsx)
Test Description : To validate error message displayed when user input more than 14 characters for "Phone" field
Expected Result : "System should not allow input more than 14 characters and stop input when user reach 14 characters
If user to insert more than 14 characters, system to prompt error message"
Actual Result : System allowed user to key in more than 14 characters
Refer attached document for POT
*Apply to both android & IOS
|
non_process
|
ewallet mobileapp android registration test description to validate error message displayed when user input more than characters for phone field expected result system should not allow input more than characters and stop input when user reach characters if user to insert more than characters system to prompt error message actual result system allowed user to key in more than characters refer attached document for pot apply to both android ios
| 0
|
19,893
| 26,340,346,767
|
IssuesEvent
|
2023-01-10 17:11:17
|
temporalio/sdk-typescript
|
https://api.github.com/repos/temporalio/sdk-typescript
|
closed
|
Set up publish from CI
|
CICD processes
|
Need to first consider:
- When to publish packages (on merge to main?)
- Which version to bump: patch, minor, major?
- Auto generate changelogs: https://github.com/lerna/lerna-changelog
|
1.0
|
Set up publish from CI - Need to first consider:
- When to publish packages (on merge to main?)
- Which version to bump: patch, minor, major?
- Auto generate changelogs: https://github.com/lerna/lerna-changelog
|
process
|
set up publish from ci need to first consider when to publish packages on merge to main which version to bump patch minor major auto generate changelogs
| 1
|
22,069
| 30,593,286,362
|
IssuesEvent
|
2023-07-21 19:08:15
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Need a list of accepted build source type
|
devops/prod doc-bug Pri2 devops-cicd-process/tech
|
If you have an external CI build system that produces artifacts, you can consume artifacts with a builds resource. A builds resource can be any external CI systems like Jenkins, TeamCity, CircleCI, and so on.
resources: # types: pipelines | builds | repositories | containers | packages
builds:
- build: string # identifier for the build resource
type: string # the type of your build service like Jenkins, circleCI etc.
The documentation does not give a list of accepted types. For example, if I want to add a build resource for an external TFS build or an external Azure DevOps organization build, I do not know which value to put into the "type" field. This forces a user to guess and so far I do not know the accept the value or whether "external TFS build" is even accepted.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Define YAML resources for Azure Pipelines - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/resources.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Need a list of accepted build source type - If you have an external CI build system that produces artifacts, you can consume artifacts with a builds resource. A builds resource can be any external CI systems like Jenkins, TeamCity, CircleCI, and so on.
resources: # types: pipelines | builds | repositories | containers | packages
builds:
- build: string # identifier for the build resource
type: string # the type of your build service like Jenkins, circleCI etc.
The documentation does not give a list of accepted types. For example, if I want to add a build resource for an external TFS build or an external Azure DevOps organization build, I do not know which value to put into the "type" field. This forces a user to guess and so far I do not know the accept the value or whether "external TFS build" is even accepted.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Define YAML resources for Azure Pipelines - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/resources.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
need a list of accepted build source type if you have an external ci build system that produces artifacts you can consume artifacts with a builds resource a builds resource can be any external ci systems like jenkins teamcity circleci and so on resources types pipelines builds repositories containers packages builds build string identifier for the build resource type string the type of your build service like jenkins circleci etc the documentation does not give a list of accepted types for example if i want to add a build resource for an external tfs build or an external azure devops organization build i do not know which value to put into the type field this forces a user to guess and so far i do not know the accept the value or whether external tfs build is even accepted document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
14,762
| 18,041,451,796
|
IssuesEvent
|
2021-09-18 05:23:39
|
ooi-data/CE01ISSM-MFD35-02-PRESFA000-telemetered-presf_abc_dcl_tide_measurement
|
https://api.github.com/repos/ooi-data/CE01ISSM-MFD35-02-PRESFA000-telemetered-presf_abc_dcl_tide_measurement
|
opened
|
🛑 Processing failed: ResponseParserError
|
process
|
## Overview
`ResponseParserError` found in `processing_task` task during run ended on 2021-09-18T05:23:38.716592.
## Details
Flow name: `CE01ISSM-MFD35-02-PRESFA000-telemetered-presf_abc_dcl_tide_measurement`
Task name: `processing_task`
Error type: `ResponseParserError`
Error message: Unable to parse response (no element found: line 2, column 0), invalid XML received. Further retries may succeed:
b'<?xml version="1.0" encoding="UTF-8"?>\n'
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 452, in _parse_xml_string_to_dom
root = parser.close()
xml.etree.ElementTree.ParseError: no element found: line 2, column 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 101, in processing
final_path = finalize_zarr(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 359, in finalize_zarr
source_store.fs.delete(source_store.root, recursive=True)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1187, in delete
return self.rm(path, recursive=recursive, maxdepth=maxdepth)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 88, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 69, in sync
raise result[0]
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 25, in _runner
result[0] = await coro
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1677, in _rm
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1657, in _bulk_delete
await self._call_s3("delete_objects", kwargs, Bucket=bucket, Delete=delete_keys)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 268, in _call_s3
raise err
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 248, in _call_s3
out = await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 141, in _make_api_call
http, parsed_response = await self._make_request(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 161, in _make_request
return await self._endpoint.make_request(operation_model, request_dict)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 93, in _send_request
success_response, exception = await self._get_response(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 112, in _get_response
success_response, exception = await self._do_get_response(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 177, in _do_get_response
parsed_response = parser.parse(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 245, in parse
parsed = self._do_parse(response, shape)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 813, in _do_parse
self._add_modeled_parse(response, shape, final_parsed)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 822, in _add_modeled_parse
self._parse_payload(response, shape, member_shapes, final_parsed)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 862, in _parse_payload
original_parsed = self._initial_body_parse(response['body'])
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 948, in _initial_body_parse
return self._parse_xml_string_to_dom(xml_string)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 454, in _parse_xml_string_to_dom
raise ResponseParserError(
botocore.parsers.ResponseParserError: Unable to parse response (no element found: line 2, column 0), invalid XML received. Further retries may succeed:
b'<?xml version="1.0" encoding="UTF-8"?>\n'
```
</details>
|
1.0
|
🛑 Processing failed: ResponseParserError - ## Overview
`ResponseParserError` found in `processing_task` task during run ended on 2021-09-18T05:23:38.716592.
## Details
Flow name: `CE01ISSM-MFD35-02-PRESFA000-telemetered-presf_abc_dcl_tide_measurement`
Task name: `processing_task`
Error type: `ResponseParserError`
Error message: Unable to parse response (no element found: line 2, column 0), invalid XML received. Further retries may succeed:
b'<?xml version="1.0" encoding="UTF-8"?>\n'
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 452, in _parse_xml_string_to_dom
root = parser.close()
xml.etree.ElementTree.ParseError: no element found: line 2, column 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 101, in processing
final_path = finalize_zarr(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 359, in finalize_zarr
source_store.fs.delete(source_store.root, recursive=True)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1187, in delete
return self.rm(path, recursive=recursive, maxdepth=maxdepth)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 88, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 69, in sync
raise result[0]
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 25, in _runner
result[0] = await coro
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1677, in _rm
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1657, in _bulk_delete
await self._call_s3("delete_objects", kwargs, Bucket=bucket, Delete=delete_keys)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 268, in _call_s3
raise err
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 248, in _call_s3
out = await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 141, in _make_api_call
http, parsed_response = await self._make_request(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 161, in _make_request
return await self._endpoint.make_request(operation_model, request_dict)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 93, in _send_request
success_response, exception = await self._get_response(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 112, in _get_response
success_response, exception = await self._do_get_response(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/endpoint.py", line 177, in _do_get_response
parsed_response = parser.parse(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 245, in parse
parsed = self._do_parse(response, shape)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 813, in _do_parse
self._add_modeled_parse(response, shape, final_parsed)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 822, in _add_modeled_parse
self._parse_payload(response, shape, member_shapes, final_parsed)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 862, in _parse_payload
original_parsed = self._initial_body_parse(response['body'])
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 948, in _initial_body_parse
return self._parse_xml_string_to_dom(xml_string)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/botocore/parsers.py", line 454, in _parse_xml_string_to_dom
raise ResponseParserError(
botocore.parsers.ResponseParserError: Unable to parse response (no element found: line 2, column 0), invalid XML received. Further retries may succeed:
b'<?xml version="1.0" encoding="UTF-8"?>\n'
```
</details>
|
process
|
🛑 processing failed responseparsererror overview responseparsererror found in processing task task during run ended on details flow name telemetered presf abc dcl tide measurement task name processing task error type responseparsererror error message unable to parse response no element found line column invalid xml received further retries may succeed b n traceback traceback most recent call last file srv conda envs notebook lib site packages botocore parsers py line in parse xml string to dom root parser close xml etree elementtree parseerror no element found line column during handling of the above exception another exception occurred traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize zarr file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize zarr source store fs delete source store root recursive true file srv conda envs notebook lib site packages fsspec spec py line in delete return self rm path recursive recursive maxdepth maxdepth file srv conda envs notebook lib site packages fsspec asyn py line in wrapper return sync self loop func args kwargs file srv conda envs notebook lib site packages fsspec asyn py line in sync raise result file srv conda envs notebook lib site packages fsspec asyn py line in runner result await coro file srv conda envs notebook lib site packages core py line in rm await asyncio gather file srv conda envs notebook lib site packages core py line in bulk delete await self call delete objects kwargs bucket bucket delete delete keys file srv conda envs notebook lib site packages core py line in call raise err file srv conda envs notebook lib site packages core py line in call out await method additional kwargs file srv conda envs notebook lib site packages aiobotocore client py line in make api call http parsed response await self make request file srv conda envs notebook lib site packages aiobotocore client py line in make request return await self endpoint make request operation model request dict file srv conda envs notebook lib site packages aiobotocore endpoint py line in send request success response exception await self get response file srv conda envs notebook lib site packages aiobotocore endpoint py line in get response success response exception await self do get response file srv conda envs notebook lib site packages aiobotocore endpoint py line in do get response parsed response parser parse file srv conda envs notebook lib site packages botocore parsers py line in parse parsed self do parse response shape file srv conda envs notebook lib site packages botocore parsers py line in do parse self add modeled parse response shape final parsed file srv conda envs notebook lib site packages botocore parsers py line in add modeled parse self parse payload response shape member shapes final parsed file srv conda envs notebook lib site packages botocore parsers py line in parse payload original parsed self initial body parse response file srv conda envs notebook lib site packages botocore parsers py line in initial body parse return self parse xml string to dom xml string file srv conda envs notebook lib site packages botocore parsers py line in parse xml string to dom raise responseparsererror botocore parsers responseparsererror unable to parse response no element found line column invalid xml received further retries may succeed b n
| 1
|
9,852
| 12,838,976,956
|
IssuesEvent
|
2020-07-07 18:27:41
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
[Introspection] Pick up order of columns in database into schema
|
kind/improvement process/candidate topic: introspection
|
For https://github.com/prisma/prisma/issues/2554 @carmenberndt showed that it is possibly to get the order of the table columns from a table with the right SQL queries, https://github.com/prisma/prisma-engines/pull/872 even implements this already in good parts.
We should now change our Introspection implemented to indeed do this.
Related internal discussion: https://prisma-company.slack.com/archives/CEYCG2MCN/p1593787606369100
|
1.0
|
[Introspection] Pick up order of columns in database into schema - For https://github.com/prisma/prisma/issues/2554 @carmenberndt showed that it is possibly to get the order of the table columns from a table with the right SQL queries, https://github.com/prisma/prisma-engines/pull/872 even implements this already in good parts.
We should now change our Introspection implemented to indeed do this.
Related internal discussion: https://prisma-company.slack.com/archives/CEYCG2MCN/p1593787606369100
|
process
|
pick up order of columns in database into schema for carmenberndt showed that it is possibly to get the order of the table columns from a table with the right sql queries even implements this already in good parts we should now change our introspection implemented to indeed do this related internal discussion
| 1
|
121,697
| 17,662,089,865
|
IssuesEvent
|
2021-08-21 18:16:51
|
ghc-dev/Carolyn-Maldonado
|
https://api.github.com/repos/ghc-dev/Carolyn-Maldonado
|
closed
|
CVE-2020-14365 (High) detected in ansible-2.9.9.tar.gz - autoclosed
|
security vulnerability
|
## CVE-2020-14365 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary>
<p>Radically simple IT automation</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p>
<p>Path to dependency file: Carolyn-Maldonado/requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **ansible-2.9.9.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Carolyn-Maldonado/commit/37b57fa1be6eba0ceaca3e3231c162f402aa73f8">37b57fa1be6eba0ceaca3e3231c162f402aa73f8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Ansible Engine, in ansible-engine 2.8.x before 2.8.15 and ansible-engine 2.9.x before 2.9.13, when installing packages using the dnf module. GPG signatures are ignored during installation even when disable_gpg_check is set to False, which is the default behavior. This flaw leads to malicious packages being installed on the system and arbitrary code executed via package installation scripts. The highest threat from this vulnerability is to integrity and system availability.
<p>Publish Date: 2020-09-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14365>CVE-2020-14365</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1869154">https://bugzilla.redhat.com/show_bug.cgi?id=1869154</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 2.8.15,2.9.13</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.15,2.9.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-14365","vulnerabilityDetails":"A flaw was found in the Ansible Engine, in ansible-engine 2.8.x before 2.8.15 and ansible-engine 2.9.x before 2.9.13, when installing packages using the dnf module. GPG signatures are ignored during installation even when disable_gpg_check is set to False, which is the default behavior. This flaw leads to malicious packages being installed on the system and arbitrary code executed via package installation scripts. The highest threat from this vulnerability is to integrity and system availability.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14365","cvss3Severity":"high","cvss3Score":"7.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Local","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-14365 (High) detected in ansible-2.9.9.tar.gz - autoclosed - ## CVE-2020-14365 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary>
<p>Radically simple IT automation</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p>
<p>Path to dependency file: Carolyn-Maldonado/requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **ansible-2.9.9.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Carolyn-Maldonado/commit/37b57fa1be6eba0ceaca3e3231c162f402aa73f8">37b57fa1be6eba0ceaca3e3231c162f402aa73f8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Ansible Engine, in ansible-engine 2.8.x before 2.8.15 and ansible-engine 2.9.x before 2.9.13, when installing packages using the dnf module. GPG signatures are ignored during installation even when disable_gpg_check is set to False, which is the default behavior. This flaw leads to malicious packages being installed on the system and arbitrary code executed via package installation scripts. The highest threat from this vulnerability is to integrity and system availability.
<p>Publish Date: 2020-09-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14365>CVE-2020-14365</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1869154">https://bugzilla.redhat.com/show_bug.cgi?id=1869154</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 2.8.15,2.9.13</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.15,2.9.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-14365","vulnerabilityDetails":"A flaw was found in the Ansible Engine, in ansible-engine 2.8.x before 2.8.15 and ansible-engine 2.9.x before 2.9.13, when installing packages using the dnf module. GPG signatures are ignored during installation even when disable_gpg_check is set to False, which is the default behavior. This flaw leads to malicious packages being installed on the system and arbitrary code executed via package installation scripts. The highest threat from this vulnerability is to integrity and system availability.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14365","cvss3Severity":"high","cvss3Score":"7.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Local","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in ansible tar gz autoclosed cve high severity vulnerability vulnerable library ansible tar gz radically simple it automation library home page a href path to dependency file carolyn maldonado requirements txt path to vulnerable library requirements txt dependency hierarchy x ansible tar gz vulnerable library found in head commit a href found in base branch master vulnerability details a flaw was found in the ansible engine in ansible engine x before and ansible engine x before when installing packages using the dnf module gpg signatures are ignored during installation even when disable gpg check is set to false which is the default behavior this flaw leads to malicious packages being installed on the system and arbitrary code executed via package installation scripts the highest threat from this vulnerability is to integrity and system availability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree ansible isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails a flaw was found in the ansible engine in ansible engine x before and ansible engine x before when installing packages using the dnf module gpg signatures are ignored during installation even when disable gpg check is set to false which is the default behavior this flaw leads to malicious packages being installed on the system and arbitrary code executed via package installation scripts the highest threat from this vulnerability is to integrity and system availability vulnerabilityurl
| 0
|
320,268
| 27,430,221,534
|
IssuesEvent
|
2023-03-02 00:19:57
|
MPMG-DCC-UFMG/F01
|
https://api.github.com/repos/MPMG-DCC-UFMG/F01
|
closed
|
Teste de generalizacao para a tag Receitas - Dados de Receitas - Jeceaba
|
generalization test development
|
DoD: Realizar o teste de Generalização do validador da tag Receitas - Dados de Receitas para o Município de Jeceaba.
|
1.0
|
Teste de generalizacao para a tag Receitas - Dados de Receitas - Jeceaba - DoD: Realizar o teste de Generalização do validador da tag Receitas - Dados de Receitas para o Município de Jeceaba.
|
non_process
|
teste de generalizacao para a tag receitas dados de receitas jeceaba dod realizar o teste de generalização do validador da tag receitas dados de receitas para o município de jeceaba
| 0
|
180,790
| 21,625,825,526
|
IssuesEvent
|
2022-05-05 01:54:43
|
mgh3326/que_bang
|
https://api.github.com/repos/mgh3326/que_bang
|
closed
|
CVE-2021-35517 (High) detected in commons-compress-1.20.jar - autoclosed
|
security vulnerability
|
## CVE-2021-35517 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.20.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4,
Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-compress/">https://commons.apache.org/proper/commons-compress/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.20/b8df472b31e1f17c232d2ad78ceb1c84e00c641b/commons-compress-1.20.jar</p>
<p>
Dependency Hierarchy:
- junit-jupiter-1.14.3.jar (Root Library)
- testcontainers-1.14.3.jar
- :x: **commons-compress-1.20.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mgh3326/que_bang/commit/43405f4738eb28f407014d7e54a5fb58f2823552">43405f4738eb28f407014d7e54a5fb58f2823552</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted TAR archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress' tar package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35517>CVE-2021-35517</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: org.apache.commons:commons-compress:1.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-35517 (High) detected in commons-compress-1.20.jar - autoclosed - ## CVE-2021-35517 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.20.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4,
Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-compress/">https://commons.apache.org/proper/commons-compress/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.20/b8df472b31e1f17c232d2ad78ceb1c84e00c641b/commons-compress-1.20.jar</p>
<p>
Dependency Hierarchy:
- junit-jupiter-1.14.3.jar (Root Library)
- testcontainers-1.14.3.jar
- :x: **commons-compress-1.20.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mgh3326/que_bang/commit/43405f4738eb28f407014d7e54a5fb58f2823552">43405f4738eb28f407014d7e54a5fb58f2823552</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted TAR archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress' tar package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35517>CVE-2021-35517</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: org.apache.commons:commons-compress:1.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in commons compress jar autoclosed cve high severity vulnerability vulnerable library commons compress jar apache commons compress software defines an api for working with compression and archive formats these include gzip lzma xz snappy traditional unix compress deflate brotli zstandard and ar cpio jar tar zip dump arj library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org apache commons commons compress commons compress jar dependency hierarchy junit jupiter jar root library testcontainers jar x commons compress jar vulnerable library found in head commit a href vulnerability details when reading a specially crafted tar archive compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs this could be used to mount a denial of service attack against services that use compress tar package publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache commons commons compress step up your open source security game with whitesource
| 0
|
287,959
| 31,856,846,147
|
IssuesEvent
|
2023-09-15 08:06:21
|
nidhi7598/linux-4.19.72_CVE-2022-3564
|
https://api.github.com/repos/nidhi7598/linux-4.19.72_CVE-2022-3564
|
closed
|
CVE-2023-1252 (High) detected in linuxlinux-4.19.294 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2023-1252 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72_CVE-2022-3564/commit/454c7dacf6fa9a6de86d4067f5a08f25cffa519b">454c7dacf6fa9a6de86d4067f5a08f25cffa519b</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free flaw was found in the Linux kernel’s Ext4 File System in how a user triggers several file operations simultaneously with the overlay FS usage. This flaw allows a local user to crash or potentially escalate their privileges on the system. Only if patch 9a2544037600 ("ovl: fix use after free in struct ovl_aio_req") not applied yet, the kernel could be affected.
<p>Publish Date: 2023-03-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1252>CVE-2023-1252</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1252">https://www.linuxkernelcves.com/cves/CVE-2023-1252</a></p>
<p>Release Date: 2023-03-07</p>
<p>Fix Resolution: v5.10.80,v5.14.19,v5.15.3,v5.16-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2023-1252 (High) detected in linuxlinux-4.19.294 - autoclosed - ## CVE-2023-1252 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72_CVE-2022-3564/commit/454c7dacf6fa9a6de86d4067f5a08f25cffa519b">454c7dacf6fa9a6de86d4067f5a08f25cffa519b</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free flaw was found in the Linux kernel’s Ext4 File System in how a user triggers several file operations simultaneously with the overlay FS usage. This flaw allows a local user to crash or potentially escalate their privileges on the system. Only if patch 9a2544037600 ("ovl: fix use after free in struct ovl_aio_req") not applied yet, the kernel could be affected.
<p>Publish Date: 2023-03-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1252>CVE-2023-1252</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1252">https://www.linuxkernelcves.com/cves/CVE-2023-1252</a></p>
<p>Release Date: 2023-03-07</p>
<p>Fix Resolution: v5.10.80,v5.14.19,v5.15.3,v5.16-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files vulnerability details a use after free flaw was found in the linux kernel’s file system in how a user triggers several file operations simultaneously with the overlay fs usage this flaw allows a local user to crash or potentially escalate their privileges on the system only if patch ovl fix use after free in struct ovl aio req not applied yet the kernel could be affected publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
610,984
| 18,941,629,382
|
IssuesEvent
|
2021-11-18 04:05:07
|
zgzgorg/iam-backend
|
https://api.github.com/repos/zgzgorg/iam-backend
|
opened
|
implement groups list
|
help wanted Priority:P2 Type:feature
|
As a user, we would like to know what groups are in this org.
We should have an API to show a list of groups
Outcome:
1. develop a API to show a list of group
|
1.0
|
implement groups list - As a user, we would like to know what groups are in this org.
We should have an API to show a list of groups
Outcome:
1. develop a API to show a list of group
|
non_process
|
implement groups list as a user we would like to know what groups are in this org we should have an api to show a list of groups outcome develop a api to show a list of group
| 0
|
79,951
| 7,734,133,664
|
IssuesEvent
|
2018-05-26 20:25:53
|
python/mypy
|
https://api.github.com/repos/python/mypy
|
closed
|
Use incremental mode in python evaluation test cases
|
priority-1-normal topic-incremental topic-tests
|
The Python evaluation test cases (which don't actually run the code if there are type errors...) use full stubs for `builtins` and `typing`. Using incremental mode would likely speed them up significantly, as processing stubs probably takes the majority of time. These tests are the long pole in the full test suite, so this could give a nice speed boost.
|
1.0
|
Use incremental mode in python evaluation test cases - The Python evaluation test cases (which don't actually run the code if there are type errors...) use full stubs for `builtins` and `typing`. Using incremental mode would likely speed them up significantly, as processing stubs probably takes the majority of time. These tests are the long pole in the full test suite, so this could give a nice speed boost.
|
non_process
|
use incremental mode in python evaluation test cases the python evaluation test cases which don t actually run the code if there are type errors use full stubs for builtins and typing using incremental mode would likely speed them up significantly as processing stubs probably takes the majority of time these tests are the long pole in the full test suite so this could give a nice speed boost
| 0
|
11,678
| 14,536,465,453
|
IssuesEvent
|
2020-12-15 07:40:39
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
closed
|
Running a process function will create `Runner` without persister causing any submission after that to fail
|
priority/critical-blocking topic/engine topic/persistence topic/processes type/bug
|
When a process function is run, a runner is obtained through `Manager.get_runner`, but it passes `with_persistence=False`:
https://github.com/aiidateam/aiida-core/blob/073639aeb6d844acece2fda70aeefdca9d9005fc/aiida/engine/processes/functions.py#L116
In principle this would be fine since a process function runs in one go and so does not need to be persisted. The problem arises when the process function is the first thing to run in the interpreter, which causes the `Runner` instance of the `Manager` to be set to the persistence-less one which will cause any subsequent process that is submitted to fail, since that calls `runner.persister` to save the checkpoint after instantiating the process.
|
1.0
|
Running a process function will create `Runner` without persister causing any submission after that to fail - When a process function is run, a runner is obtained through `Manager.get_runner`, but it passes `with_persistence=False`:
https://github.com/aiidateam/aiida-core/blob/073639aeb6d844acece2fda70aeefdca9d9005fc/aiida/engine/processes/functions.py#L116
In principle this would be fine since a process function runs in one go and so does not need to be persisted. The problem arises when the process function is the first thing to run in the interpreter, which causes the `Runner` instance of the `Manager` to be set to the persistence-less one which will cause any subsequent process that is submitted to fail, since that calls `runner.persister` to save the checkpoint after instantiating the process.
|
process
|
running a process function will create runner without persister causing any submission after that to fail when a process function is run a runner is obtained through manager get runner but it passes with persistence false in principle this would be fine since a process function runs in one go and so does not need to be persisted the problem arises when the process function is the first thing to run in the interpreter which causes the runner instance of the manager to be set to the persistence less one which will cause any subsequent process that is submitted to fail since that calls runner persister to save the checkpoint after instantiating the process
| 1
|
35,199
| 30,832,741,200
|
IssuesEvent
|
2023-08-02 04:02:43
|
dotnet/aspnetcore
|
https://api.github.com/repos/dotnet/aspnetcore
|
closed
|
Rename Microsoft.AspNetCore.Testing
|
area-infrastructure
|
As per https://github.com/dotnet/extensions/issues/4057#issuecomment-1660927215, https://github.com/dotnet/aspnetcore/blob/main/src/Testing/src/Microsoft.AspNetCore.Testing.csproj is used for internal purposes, and being published to the BAR it causes a clash with the project coming out of dotnet/extensions (e.g., https://github.com/dotnet/dnceng/issues/174).
The project should be renamed or should not be published to the BAR.
/cc: @joperezr @Tratcher @wtgodbe
|
1.0
|
Rename Microsoft.AspNetCore.Testing - As per https://github.com/dotnet/extensions/issues/4057#issuecomment-1660927215, https://github.com/dotnet/aspnetcore/blob/main/src/Testing/src/Microsoft.AspNetCore.Testing.csproj is used for internal purposes, and being published to the BAR it causes a clash with the project coming out of dotnet/extensions (e.g., https://github.com/dotnet/dnceng/issues/174).
The project should be renamed or should not be published to the BAR.
/cc: @joperezr @Tratcher @wtgodbe
|
non_process
|
rename microsoft aspnetcore testing as per is used for internal purposes and being published to the bar it causes a clash with the project coming out of dotnet extensions e g the project should be renamed or should not be published to the bar cc joperezr tratcher wtgodbe
| 0
|
636,981
| 20,616,246,819
|
IssuesEvent
|
2022-03-07 13:31:30
|
canonical-web-and-design/ubuntu.com
|
https://api.github.com/repos/canonical-web-and-design/ubuntu.com
|
closed
|
Ubuntu Core image download thank-you page points to incorrect installation instructions
|
Priority: Medium
|
When I download an Ubuntu Core 20 image from ubuntu.com, I get redirected to a 'thank you' page using the following URL: https://ubuntu.com/download/raspberry-pi/thank-you?version=20&architecture=core-20-arm64+raspi The page tries to be helpful and offers pointers to the installation instructions. However, those instrutions, at least as of 07.03.2022 9:40 CET, displays links to the installation instructions of the Ubuntu Desktop and Ubuntu Server variants for the Raspberry Pi, but neither of those variants is applicable to the Ubuntu Core flavor. What would it get to have right link there, possibly pointing to the core docs, most likely this one: https://ubuntu.com/core/docs/uc20/install-raspberry-pi ?
---
*Reported from: https://ubuntu.com/download/raspberry-pi/thank-you*
|
1.0
|
Ubuntu Core image download thank-you page points to incorrect installation instructions - When I download an Ubuntu Core 20 image from ubuntu.com, I get redirected to a 'thank you' page using the following URL: https://ubuntu.com/download/raspberry-pi/thank-you?version=20&architecture=core-20-arm64+raspi The page tries to be helpful and offers pointers to the installation instructions. However, those instrutions, at least as of 07.03.2022 9:40 CET, displays links to the installation instructions of the Ubuntu Desktop and Ubuntu Server variants for the Raspberry Pi, but neither of those variants is applicable to the Ubuntu Core flavor. What would it get to have right link there, possibly pointing to the core docs, most likely this one: https://ubuntu.com/core/docs/uc20/install-raspberry-pi ?
---
*Reported from: https://ubuntu.com/download/raspberry-pi/thank-you*
|
non_process
|
ubuntu core image download thank you page points to incorrect installation instructions when i download an ubuntu core image from ubuntu com i get redirected to a thank you page using the following url the page tries to be helpful and offers pointers to the installation instructions however those instrutions at least as of cet displays links to the installation instructions of the ubuntu desktop and ubuntu server variants for the raspberry pi but neither of those variants is applicable to the ubuntu core flavor what would it get to have right link there possibly pointing to the core docs most likely this one reported from
| 0
|
1,138
| 3,626,850,499
|
IssuesEvent
|
2016-02-10 04:00:36
|
worldspawn/mascis
|
https://api.github.com/repos/worldspawn/mascis
|
opened
|
Support Contains function for IEnumerables
|
enhancement linq-expression-parser postgres-language-processor t-sql-language-processor
|
- [ ] Linq Expression Parser
- [ ] T-Sql Language Processor
- [ ] Postgres Language Processor
|
2.0
|
Support Contains function for IEnumerables - - [ ] Linq Expression Parser
- [ ] T-Sql Language Processor
- [ ] Postgres Language Processor
|
process
|
support contains function for ienumerables linq expression parser t sql language processor postgres language processor
| 1
|
63,043
| 7,680,263,219
|
IssuesEvent
|
2018-05-16 00:32:18
|
ParabolInc/action
|
https://api.github.com/repos/ParabolInc/action
|
opened
|
Improve Congratulations Modal on Upgrade to Pro
|
design
|
The current modal congratulates you for giving us money, which isn't really a cause for celebration:

Instead, we should have it be more like a video game & tell you what features you've unlocked.
Acceptance Criteria:
- [ ] Congratulations modal designed
Effort: 11 points
|
1.0
|
Improve Congratulations Modal on Upgrade to Pro - The current modal congratulates you for giving us money, which isn't really a cause for celebration:

Instead, we should have it be more like a video game & tell you what features you've unlocked.
Acceptance Criteria:
- [ ] Congratulations modal designed
Effort: 11 points
|
non_process
|
improve congratulations modal on upgrade to pro the current modal congratulates you for giving us money which isn t really a cause for celebration instead we should have it be more like a video game tell you what features you ve unlocked acceptance criteria congratulations modal designed effort points
| 0
|
20,647
| 27,323,800,190
|
IssuesEvent
|
2023-02-24 22:52:28
|
MPMG-DCC-UFMG/C01
|
https://api.github.com/repos/MPMG-DCC-UFMG/C01
|
closed
|
Deletar módulo `form-parser` do sistema
|
[0] Desenvolvimento [2] Média Prioridade [1] Aprimoramento [3] Processamento Dinâmico
|
## Comportamento Esperado
O módulo `form-parser` deve ser removido do código.
## Comportamento Atual
Atualmente, este módulo não é utilizado por nenhuma funcionalidade do sistema. Em versões anteriores, o `form-parser` era usada conjuntamente com o módulo de injeção em formulários, que foi removido do sistema principal.
## Passos para reproduzir o erro
Não se aplica
## Especificações da Coleta
Não se aplica
## Sistema (caso necessário)
- MP ou local: ambos
- Branch específica: não
- Sistema diferente: sistema distribuído
## Screenshots (caso necessário)
Não se aplica
|
1.0
|
Deletar módulo `form-parser` do sistema - ## Comportamento Esperado
O módulo `form-parser` deve ser removido do código.
## Comportamento Atual
Atualmente, este módulo não é utilizado por nenhuma funcionalidade do sistema. Em versões anteriores, o `form-parser` era usada conjuntamente com o módulo de injeção em formulários, que foi removido do sistema principal.
## Passos para reproduzir o erro
Não se aplica
## Especificações da Coleta
Não se aplica
## Sistema (caso necessário)
- MP ou local: ambos
- Branch específica: não
- Sistema diferente: sistema distribuído
## Screenshots (caso necessário)
Não se aplica
|
process
|
deletar módulo form parser do sistema comportamento esperado o módulo form parser deve ser removido do código comportamento atual atualmente este módulo não é utilizado por nenhuma funcionalidade do sistema em versões anteriores o form parser era usada conjuntamente com o módulo de injeção em formulários que foi removido do sistema principal passos para reproduzir o erro não se aplica especificações da coleta não se aplica sistema caso necessário mp ou local ambos branch específica não sistema diferente sistema distribuído screenshots caso necessário não se aplica
| 1
|
10,966
| 13,769,699,611
|
IssuesEvent
|
2020-10-07 19:02:05
|
googleapis/python-pubsub
|
https://api.github.com/repos/googleapis/python-pubsub
|
closed
|
PubSub: emulator breaks under heavy load from Python
|
api: pubsub type: process
|
Hi.
It seems that when pubsub emulator is under high-load with python client (I haven't tested other clients, maybe it has the same issue), the pubsub emulator fails miserably even though all messages are under 10MB (they have ~500KB). Is there a way how to solve this from the Python client side?
I tried even the latest master version, but the result is the same. Should I ask elsewhere?
*Update*: When trying to use GCP managed pubsub, I start to get 504 and then it chokes entirely and nothing gets through:
```
...
loader_1 | 2019-12-07 22:07:32,311:PID140477465024256:google.api_core.retry:DEBUG - Retrying due to 504 Deadline Exceeded, sleeping 0.3s ...
loader_1 | 2019-12-07 22:07:32,312:PID140477446272768:google.api_core.retry:DEBUG - Retrying due to 504 Deadline Exceeded, sleeping 0.3s ...
loader_1 | 2019-12-07 22:07:32,314:PID140477377148672:google.api_core.retry:DEBUG - Retrying due to 504 Deadline Exceeded, sleeping 0.3s ...
loader_1 | 2019-12-07 22:07:32,325:PID140477437880064:google.api_core.retry:DEBUG - Retrying due to 504 Deadline Exceeded, sleeping 0.3s ...
loader_1 | 2019-12-07 22:07:32,345:PID140477333255936:google.api_core.retry:DEBUG - Retrying due to 504 Deadline Exceeded, sleeping 0.2s ...
loader_1 | 2019-12-07 22:07:32,394:PID140477424264960:google.api_core.retry:DEBUG - Retrying due to 504 Deadline Exceeded, sleeping 0.1s ...
...
```
## Reproducible example (or close to)
```py
# publisher.py
import json
import time
from google.cloud import pubsub_v1
TOPIC_NAME = "test-topic"
PROJECT_ID = "local"
PUBLISHER = pubsub_v1.PublisherClient()
TOPIC_PATH = PUBLISHER.topic_path(PROJECT_ID, TOPIC_NAME)
time.sleep(5) # to let emulator start
import random
dt = [{f"q{random.randint(0,9999999)}_somestringtoadddata{random.randint(0,999999)}": random.randint(0, 2) for i in range(30000)} for i in range(100)]
for i in dt:
PUBLISHER.publish(TOPIC_PATH, data=json.dumps(i).encode())
## if I add something like:
# future = PUBLISHER.publish(TOPIC_PATH, data=json.dumps(i).encode())
# if future.result(): print("OK")
## it works as it "waits" (but is significantly slower than fire and forget or via callbacks
```
```dockerfile
# Dockerfile
FROM python:3.8-slim
ADD publisher.py publisher.py
RUN pip install google-cloud-pubsub
```
```yaml
# docker-compose.yaml
version: '3'
services:
loader:
stdin_open: true
tty: true
build: .
image: loader
cmd: ["publisher.py"]
environment:
PUBSUB_EMULATOR_HOST: pubsub:8681
PYTHONPATH: "."
depends_on:
- pubsub
pubsub:
image: messagebird/gcloud-pubsub-emulator:latest
expose:
- 8681
environment:
- PUBSUB_PROJECT1=local,test-topic:test-subscription
```
When triggered, I get:
```
$ docker-compose up publisher pubsub
pubsub_1 | Executing: /google-cloud-sdk/platform/pubsub-emulator/bin/cloud-pubsub-emulator --host=0.0.0.0 --port=8681
pubsub_1 | [pubsub] This is the Google Pub/Sub fake.
pubsub_1 | [pubsub] Implementation may be incomplete or differ from the real system.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:01 PM com.google.cloud.pubsub.testing.v1.Main main
pubsub_1 | [pubsub] INFO: IAM integration is disabled. IAM policy methods and ACL checks are not supported
publisher_1 | Waiting for pubsub topic creation
pubsub_1 | [pubsub] Dec 07, 2019 9:02:02 PM io.gapi.emulators.netty.NettyUtil applyJava7LongHostnameWorkaround
pubsub_1 | [pubsub] INFO: Applied Java 7 long hostname workaround.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:02 PM com.google.cloud.pubsub.testing.v1.Main main
pubsub_1 | [pubsub] INFO: Server started, listening on 8681
pubsub_1 | Client connected with project ID "local"
pubsub_1 | Creating topic "test-topic"
pubsub_1 | [pubsub] Dec 07, 2019 9:02:02 PM io.gapi.emulators.grpc.GrpcServer$3 operationComplete
pubsub_1 | [pubsub] INFO: Adding handler(s) to newly registered Channel.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:02 PM io.gapi.emulators.grpc.GrpcServer$3 operationComplete
pubsub_1 | [pubsub] INFO: Adding handler(s) to newly registered Channel.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:02 PM io.gapi.emulators.netty.HttpVersionRoutingHandler channelRead
pubsub_1 | [pubsub] INFO: Detected HTTP/2 connection.
pubsub_1 | Creating subscription "test-subscription"
subscriber_1 | Listening for messages on projects/local/subscriptions/test-subscription
pubsub_1 | [pubsub] Dec 07, 2019 9:02:04 PM io.gapi.emulators.grpc.GrpcServer$3 operationComplete
pubsub_1 | [pubsub] INFO: Adding handler(s) to newly registered Channel.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:04 PM io.gapi.emulators.netty.HttpVersionRoutingHandler channelRead
pubsub_1 | [pubsub] INFO: Detected HTTP/2 connection.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:25 PM io.gapi.emulators.grpc.GrpcServer$3 operationComplete
pubsub_1 | [pubsub] INFO: Adding handler(s) to newly registered Channel.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:25 PM io.gapi.emulators.netty.HttpVersionRoutingHandler channelRead
pubsub_1 | [pubsub] INFO: Detected HTTP/2 connection.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:25 PM io.grpc.netty.NettyServerStream$TransportState deframeFailed
pubsub_1 | [pubsub] WARNING: Exception processing message
pubsub_1 | [pubsub] io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: io.grpc.netty.NettyServerStream$TransportState: Frame size 9786710 exceeds maximum: 4194304.
pubsub_1 | [pubsub] at io.grpc.Status.asRuntimeException(Status.java:517)
pubsub_1 | [pubsub] at io.grpc.internal.MessageDeframer.processHeader(MessageDeframer.java:384)
pubsub_1 | [pubsub] at io.grpc.internal.MessageDeframer.deliver(MessageDeframer.java:264)
pubsub_1 | [pubsub] at io.grpc.internal.MessageDeframer.deframe(MessageDeframer.java:174)
pubsub_1 | [pubsub] at io.grpc.internal.AbstractStream$TransportState.deframe(AbstractStream.java:181)
pubsub_1 | [pubsub] at io.grpc.internal.AbstractServerStream$TransportState.inboundDataReceived(AbstractServerStream.java:247)
pubsub_1 | [pubsub] at io.grpc.netty.NettyServerStream$TransportState.inboundDataReceived(NettyServerStream.java:178)
pubsub_1 | [pubsub] at io.grpc.netty.NettyServerHandler.onDataRead(NettyServerHandler.java:391)
pubsub_1 | [pubsub] at io.grpc.netty.NettyServerHandler.access$400(NettyServerHandler.java:92)
pubsub_1 | [pubsub] at io.grpc.netty.NettyServerHandler$FrameListener.onDataRead(NettyServerHandler.java:642)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:240)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:48)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:421)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:251)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:160)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:118)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:383)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:443)
pubsub_1 | [pubsub] at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
pubsub_1 | [pubsub] at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
pubsub_1 | [pubsub] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
pubsub_1 | [pubsub] at io.gapi.emulators.netty.HttpVersionRoutingHandler.channelRead(HttpVersionRoutingHandler.java:103)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
pubsub_1 | [pubsub] at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
pubsub_1 | [pubsub] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
pubsub_1 | [pubsub] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
pubsub_1 | [pubsub] at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
pubsub_1 | [pubsub] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
pubsub_1 | [pubsub] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
pubsub_1 | [pubsub] at java.lang.Thread.run(Thread.java:748)
pubsub_1 | [pubsub]
pubsub_1 | [pubsub] Dec 07, 2019 9:02:25 PM io.grpc.netty.NettyServerHandler onStreamError
pubsub_1 | [pubsub] WARNING: Stream Error
pubsub_1 | [pubsub] io.netty.handler.codec.http2.Http2Exception$StreamException: Received DATA frame for an unknown stream 1
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2Exception.streamError(Http2Exception.java:129)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.shouldIgnoreHeadersOrDataFrame(DefaultHttp2ConnectionDecoder.java:535)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:187)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:48)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:421)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:251)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:160)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:118)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:383)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:443)
pubsub_1 | [pubsub] at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
pubsub_1 | [pubsub] at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
pubsub_1 | [pubsub] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
pubsub_1 | [pubsub] at io.gapi.emulators.netty.HttpVersionRoutingHandler.channelRead(HttpVersionRoutingHandler.java:103)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
pubsub_1 | [pubsub] at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
pubsub_1 | [pubsub] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
pubsub_1 | [pubsub] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
pubsub_1 | [pubsub] at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
pubsub_1 | [pubsub] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
pubsub_1 | [pubsub] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
pubsub_1 | [pubsub] at java.lang.Thread.run(Thread.java:748)
```
Some stuff I found:
* https://github.com/grpc/grpc-java/issues/3996
* https://github.com/grpc/grpc-java/issues/1578
* https://github.com/grpc/grpc/issues/15738
* https://github.com/grpc/grpc-java/issues/917
* https://github.com/grpc/grpc-java/issues/3996
* https://stackoverflow.com/questions/39753730/java-grpc-how-to-increase-the-message-size-limit-in-a-managedchannel
* https://stackoverflow.com/questions/41150590/io-grpc-frame-size-exceeds-maximum-when-pull-google-pubsub-message
|
1.0
|
PubSub: emulator breaks under heavy load from Python - Hi.
It seems that when pubsub emulator is under high-load with python client (I haven't tested other clients, maybe it has the same issue), the pubsub emulator fails miserably even though all messages are under 10MB (they have ~500KB). Is there a way how to solve this from the Python client side?
I tried even the latest master version, but the result is the same. Should I ask elsewhere?
*Update*: When trying to use GCP managed pubsub, I start to get 504 and then it chokes entirely and nothing gets through:
```
...
loader_1 | 2019-12-07 22:07:32,311:PID140477465024256:google.api_core.retry:DEBUG - Retrying due to 504 Deadline Exceeded, sleeping 0.3s ...
loader_1 | 2019-12-07 22:07:32,312:PID140477446272768:google.api_core.retry:DEBUG - Retrying due to 504 Deadline Exceeded, sleeping 0.3s ...
loader_1 | 2019-12-07 22:07:32,314:PID140477377148672:google.api_core.retry:DEBUG - Retrying due to 504 Deadline Exceeded, sleeping 0.3s ...
loader_1 | 2019-12-07 22:07:32,325:PID140477437880064:google.api_core.retry:DEBUG - Retrying due to 504 Deadline Exceeded, sleeping 0.3s ...
loader_1 | 2019-12-07 22:07:32,345:PID140477333255936:google.api_core.retry:DEBUG - Retrying due to 504 Deadline Exceeded, sleeping 0.2s ...
loader_1 | 2019-12-07 22:07:32,394:PID140477424264960:google.api_core.retry:DEBUG - Retrying due to 504 Deadline Exceeded, sleeping 0.1s ...
...
```
## Reproducible example (or close to)
```py
# publisher.py
import json
import time
from google.cloud import pubsub_v1
TOPIC_NAME = "test-topic"
PROJECT_ID = "local"
PUBLISHER = pubsub_v1.PublisherClient()
TOPIC_PATH = PUBLISHER.topic_path(PROJECT_ID, TOPIC_NAME)
time.sleep(5) # to let emulator start
import random
dt = [{f"q{random.randint(0,9999999)}_somestringtoadddata{random.randint(0,999999)}": random.randint(0, 2) for i in range(30000)} for i in range(100)]
for i in dt:
PUBLISHER.publish(TOPIC_PATH, data=json.dumps(i).encode())
## if I add something like:
# future = PUBLISHER.publish(TOPIC_PATH, data=json.dumps(i).encode())
# if future.result(): print("OK")
## it works as it "waits" (but is significantly slower than fire and forget or via callbacks
```
```dockerfile
# Dockerfile
FROM python:3.8-slim
ADD publisher.py publisher.py
RUN pip install google-cloud-pubsub
```
```yaml
# docker-compose.yaml
version: '3'
services:
loader:
stdin_open: true
tty: true
build: .
image: loader
cmd: ["publisher.py"]
environment:
PUBSUB_EMULATOR_HOST: pubsub:8681
PYTHONPATH: "."
depends_on:
- pubsub
pubsub:
image: messagebird/gcloud-pubsub-emulator:latest
expose:
- 8681
environment:
- PUBSUB_PROJECT1=local,test-topic:test-subscription
```
When triggered, I get:
```
$ docker-compose up publisher pubsub
pubsub_1 | Executing: /google-cloud-sdk/platform/pubsub-emulator/bin/cloud-pubsub-emulator --host=0.0.0.0 --port=8681
pubsub_1 | [pubsub] This is the Google Pub/Sub fake.
pubsub_1 | [pubsub] Implementation may be incomplete or differ from the real system.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:01 PM com.google.cloud.pubsub.testing.v1.Main main
pubsub_1 | [pubsub] INFO: IAM integration is disabled. IAM policy methods and ACL checks are not supported
publisher_1 | Waiting for pubsub topic creation
pubsub_1 | [pubsub] Dec 07, 2019 9:02:02 PM io.gapi.emulators.netty.NettyUtil applyJava7LongHostnameWorkaround
pubsub_1 | [pubsub] INFO: Applied Java 7 long hostname workaround.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:02 PM com.google.cloud.pubsub.testing.v1.Main main
pubsub_1 | [pubsub] INFO: Server started, listening on 8681
pubsub_1 | Client connected with project ID "local"
pubsub_1 | Creating topic "test-topic"
pubsub_1 | [pubsub] Dec 07, 2019 9:02:02 PM io.gapi.emulators.grpc.GrpcServer$3 operationComplete
pubsub_1 | [pubsub] INFO: Adding handler(s) to newly registered Channel.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:02 PM io.gapi.emulators.grpc.GrpcServer$3 operationComplete
pubsub_1 | [pubsub] INFO: Adding handler(s) to newly registered Channel.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:02 PM io.gapi.emulators.netty.HttpVersionRoutingHandler channelRead
pubsub_1 | [pubsub] INFO: Detected HTTP/2 connection.
pubsub_1 | Creating subscription "test-subscription"
subscriber_1 | Listening for messages on projects/local/subscriptions/test-subscription
pubsub_1 | [pubsub] Dec 07, 2019 9:02:04 PM io.gapi.emulators.grpc.GrpcServer$3 operationComplete
pubsub_1 | [pubsub] INFO: Adding handler(s) to newly registered Channel.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:04 PM io.gapi.emulators.netty.HttpVersionRoutingHandler channelRead
pubsub_1 | [pubsub] INFO: Detected HTTP/2 connection.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:25 PM io.gapi.emulators.grpc.GrpcServer$3 operationComplete
pubsub_1 | [pubsub] INFO: Adding handler(s) to newly registered Channel.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:25 PM io.gapi.emulators.netty.HttpVersionRoutingHandler channelRead
pubsub_1 | [pubsub] INFO: Detected HTTP/2 connection.
pubsub_1 | [pubsub] Dec 07, 2019 9:02:25 PM io.grpc.netty.NettyServerStream$TransportState deframeFailed
pubsub_1 | [pubsub] WARNING: Exception processing message
pubsub_1 | [pubsub] io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: io.grpc.netty.NettyServerStream$TransportState: Frame size 9786710 exceeds maximum: 4194304.
pubsub_1 | [pubsub] at io.grpc.Status.asRuntimeException(Status.java:517)
pubsub_1 | [pubsub] at io.grpc.internal.MessageDeframer.processHeader(MessageDeframer.java:384)
pubsub_1 | [pubsub] at io.grpc.internal.MessageDeframer.deliver(MessageDeframer.java:264)
pubsub_1 | [pubsub] at io.grpc.internal.MessageDeframer.deframe(MessageDeframer.java:174)
pubsub_1 | [pubsub] at io.grpc.internal.AbstractStream$TransportState.deframe(AbstractStream.java:181)
pubsub_1 | [pubsub] at io.grpc.internal.AbstractServerStream$TransportState.inboundDataReceived(AbstractServerStream.java:247)
pubsub_1 | [pubsub] at io.grpc.netty.NettyServerStream$TransportState.inboundDataReceived(NettyServerStream.java:178)
pubsub_1 | [pubsub] at io.grpc.netty.NettyServerHandler.onDataRead(NettyServerHandler.java:391)
pubsub_1 | [pubsub] at io.grpc.netty.NettyServerHandler.access$400(NettyServerHandler.java:92)
pubsub_1 | [pubsub] at io.grpc.netty.NettyServerHandler$FrameListener.onDataRead(NettyServerHandler.java:642)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:240)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:48)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:421)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:251)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:160)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:118)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:383)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:443)
pubsub_1 | [pubsub] at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
pubsub_1 | [pubsub] at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
pubsub_1 | [pubsub] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
pubsub_1 | [pubsub] at io.gapi.emulators.netty.HttpVersionRoutingHandler.channelRead(HttpVersionRoutingHandler.java:103)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
pubsub_1 | [pubsub] at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
pubsub_1 | [pubsub] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
pubsub_1 | [pubsub] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
pubsub_1 | [pubsub] at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
pubsub_1 | [pubsub] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
pubsub_1 | [pubsub] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
pubsub_1 | [pubsub] at java.lang.Thread.run(Thread.java:748)
pubsub_1 | [pubsub]
pubsub_1 | [pubsub] Dec 07, 2019 9:02:25 PM io.grpc.netty.NettyServerHandler onStreamError
pubsub_1 | [pubsub] WARNING: Stream Error
pubsub_1 | [pubsub] io.netty.handler.codec.http2.Http2Exception$StreamException: Received DATA frame for an unknown stream 1
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2Exception.streamError(Http2Exception.java:129)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.shouldIgnoreHeadersOrDataFrame(DefaultHttp2ConnectionDecoder.java:535)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:187)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:48)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:421)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:251)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:160)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:118)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:383)
pubsub_1 | [pubsub] at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:443)
pubsub_1 | [pubsub] at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
pubsub_1 | [pubsub] at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
pubsub_1 | [pubsub] at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
pubsub_1 | [pubsub] at io.gapi.emulators.netty.HttpVersionRoutingHandler.channelRead(HttpVersionRoutingHandler.java:103)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
pubsub_1 | [pubsub] at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
pubsub_1 | [pubsub] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
pubsub_1 | [pubsub] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
pubsub_1 | [pubsub] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
pubsub_1 | [pubsub] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
pubsub_1 | [pubsub] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
pubsub_1 | [pubsub] at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
pubsub_1 | [pubsub] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
pubsub_1 | [pubsub] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
pubsub_1 | [pubsub] at java.lang.Thread.run(Thread.java:748)
```
Some stuff I found:
* https://github.com/grpc/grpc-java/issues/3996
* https://github.com/grpc/grpc-java/issues/1578
* https://github.com/grpc/grpc/issues/15738
* https://github.com/grpc/grpc-java/issues/917
* https://github.com/grpc/grpc-java/issues/3996
* https://stackoverflow.com/questions/39753730/java-grpc-how-to-increase-the-message-size-limit-in-a-managedchannel
* https://stackoverflow.com/questions/41150590/io-grpc-frame-size-exceeds-maximum-when-pull-google-pubsub-message
|
process
|
pubsub emulator breaks under heavy load from python hi it seems that when pubsub emulator is under high load with python client i haven t tested other clients maybe it has the same issue the pubsub emulator fails miserably even though all messages are under they have is there a way how to solve this from the python client side i tried even the latest master version but the result is the same should i ask elsewhere update when trying to use gcp managed pubsub i start to get and then it chokes entirely and nothing gets through loader google api core retry debug retrying due to deadline exceeded sleeping loader google api core retry debug retrying due to deadline exceeded sleeping loader google api core retry debug retrying due to deadline exceeded sleeping loader google api core retry debug retrying due to deadline exceeded sleeping loader google api core retry debug retrying due to deadline exceeded sleeping loader google api core retry debug retrying due to deadline exceeded sleeping reproducible example or close to py publisher py import json import time from google cloud import pubsub topic name test topic project id local publisher pubsub publisherclient topic path publisher topic path project id topic name time sleep to let emulator start import random dt for i in dt publisher publish topic path data json dumps i encode if i add something like future publisher publish topic path data json dumps i encode if future result print ok it works as it waits but is significantly slower than fire and forget or via callbacks dockerfile dockerfile from python slim add publisher py publisher py run pip install google cloud pubsub yaml docker compose yaml version services loader stdin open true tty true build image loader cmd environment pubsub emulator host pubsub pythonpath depends on pubsub pubsub image messagebird gcloud pubsub emulator latest expose environment pubsub local test topic test subscription when triggered i get docker compose up publisher pubsub pubsub executing google cloud sdk platform pubsub emulator bin cloud pubsub emulator host port pubsub this is the google pub sub fake pubsub implementation may be incomplete or differ from the real system pubsub dec pm com google cloud pubsub testing main main pubsub info iam integration is disabled iam policy methods and acl checks are not supported publisher waiting for pubsub topic creation pubsub dec pm io gapi emulators netty nettyutil pubsub info applied java long hostname workaround pubsub dec pm com google cloud pubsub testing main main pubsub info server started listening on pubsub client connected with project id local pubsub creating topic test topic pubsub dec pm io gapi emulators grpc grpcserver operationcomplete pubsub info adding handler s to newly registered channel pubsub dec pm io gapi emulators grpc grpcserver operationcomplete pubsub info adding handler s to newly registered channel pubsub dec pm io gapi emulators netty httpversionroutinghandler channelread pubsub info detected http connection pubsub creating subscription test subscription subscriber listening for messages on projects local subscriptions test subscription pubsub dec pm io gapi emulators grpc grpcserver operationcomplete pubsub info adding handler s to newly registered channel pubsub dec pm io gapi emulators netty httpversionroutinghandler channelread pubsub info detected http connection pubsub dec pm io gapi emulators grpc grpcserver operationcomplete pubsub info adding handler s to newly registered channel pubsub dec pm io gapi emulators netty httpversionroutinghandler channelread pubsub info detected http connection pubsub dec pm io grpc netty nettyserverstream transportstate deframefailed pubsub warning exception processing message pubsub io grpc statusruntimeexception resource exhausted io grpc netty nettyserverstream transportstate frame size exceeds maximum pubsub at io grpc status asruntimeexception status java pubsub at io grpc internal messagedeframer processheader messagedeframer java pubsub at io grpc internal messagedeframer deliver messagedeframer java pubsub at io grpc internal messagedeframer deframe messagedeframer java pubsub at io grpc internal abstractstream transportstate deframe abstractstream java pubsub at io grpc internal abstractserverstream transportstate inbounddatareceived abstractserverstream java pubsub at io grpc netty nettyserverstream transportstate inbounddatareceived nettyserverstream java pubsub at io grpc netty nettyserverhandler ondataread nettyserverhandler java pubsub at io grpc netty nettyserverhandler access nettyserverhandler java pubsub at io grpc netty nettyserverhandler framelistener ondataread nettyserverhandler java pubsub at io netty handler codec framereadlistener ondataread java pubsub at io netty handler codec ondataread java pubsub at io netty handler codec readdataframe java pubsub at io netty handler codec processpayloadstate java pubsub at io netty handler codec readframe java pubsub at io netty handler codec readframe java pubsub at io netty handler codec decodeframe java pubsub at io netty handler codec framedecoder decode java pubsub at io netty handler codec decode java pubsub at io netty handler codec bytetomessagedecoder decoderemovalreentryprotection bytetomessagedecoder java pubsub at io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java pubsub at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java pubsub at io gapi emulators netty httpversionroutinghandler channelread httpversionroutinghandler java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java pubsub at io netty handler logging logginghandler channelread logginghandler java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java pubsub at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java pubsub at io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java pubsub at io netty channel nio nioeventloop processselectedkey nioeventloop java pubsub at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java pubsub at io netty channel nio nioeventloop processselectedkeys nioeventloop java pubsub at io netty channel nio nioeventloop run nioeventloop java pubsub at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java pubsub at java util concurrent threadpoolexecutor runworker threadpoolexecutor java pubsub at java util concurrent threadpoolexecutor worker run threadpoolexecutor java pubsub at java lang thread run thread java pubsub pubsub dec pm io grpc netty nettyserverhandler onstreamerror pubsub warning stream error pubsub io netty handler codec streamexception received data frame for an unknown stream pubsub at io netty handler codec streamerror java pubsub at io netty handler codec framereadlistener shouldignoreheadersordataframe java pubsub at io netty handler codec framereadlistener ondataread java pubsub at io netty handler codec ondataread java pubsub at io netty handler codec readdataframe java pubsub at io netty handler codec processpayloadstate java pubsub at io netty handler codec readframe java pubsub at io netty handler codec readframe java pubsub at io netty handler codec decodeframe java pubsub at io netty handler codec framedecoder decode java pubsub at io netty handler codec decode java pubsub at io netty handler codec bytetomessagedecoder decoderemovalreentryprotection bytetomessagedecoder java pubsub at io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java pubsub at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java pubsub at io gapi emulators netty httpversionroutinghandler channelread httpversionroutinghandler java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java pubsub at io netty handler logging logginghandler channelread logginghandler java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java pubsub at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java pubsub at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java pubsub at io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java pubsub at io netty channel nio nioeventloop processselectedkey nioeventloop java pubsub at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java pubsub at io netty channel nio nioeventloop processselectedkeys nioeventloop java pubsub at io netty channel nio nioeventloop run nioeventloop java pubsub at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java pubsub at java util concurrent threadpoolexecutor runworker threadpoolexecutor java pubsub at java util concurrent threadpoolexecutor worker run threadpoolexecutor java pubsub at java lang thread run thread java some stuff i found
| 1
|
15,733
| 11,688,280,324
|
IssuesEvent
|
2020-03-05 14:17:47
|
danmermel/cryptario
|
https://api.github.com/repos/danmermel/cryptario
|
closed
|
rationalise lambda deployment
|
infrastructure
|
There seems to be a lot of repetition in this... we can turn it into a more streamlined script.
- The `./deploy.sh` should have an array of clue types (anagram, container etc).
- It should loop through this list, performing the tasks currently in `./<cluetype>/prepare.sh`
- then it should deploy the built zip and tidy up
This will save us modifying bash scripts many times.
|
1.0
|
rationalise lambda deployment - There seems to be a lot of repetition in this... we can turn it into a more streamlined script.
- The `./deploy.sh` should have an array of clue types (anagram, container etc).
- It should loop through this list, performing the tasks currently in `./<cluetype>/prepare.sh`
- then it should deploy the built zip and tidy up
This will save us modifying bash scripts many times.
|
non_process
|
rationalise lambda deployment there seems to be a lot of repetition in this we can turn it into a more streamlined script the deploy sh should have an array of clue types anagram container etc it should loop through this list performing the tasks currently in prepare sh then it should deploy the built zip and tidy up this will save us modifying bash scripts many times
| 0
|
14,583
| 17,703,499,590
|
IssuesEvent
|
2021-08-25 03:09:20
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - verbatimLocality
|
Term - change Class - Location non-normative Process - complete
|
## Change term
* Submitter: Kari Lintulaakso
* Justification (why is this change necessary?): mapping to ABCD term is incorrect
* Proponents (who needs this change): Kari Lintulaakso (Stan Blum agrees)
Current Term definition: https://dwc.tdwg.org/terms/#dwc:verbatimLocality
Proposed new attributes of the term:
* Term name (in lowerCamelCase): verbatimLocality (unchanged)
* Organized in Class (e.g. Location, Taxon): Location (unchanged)
* Definition of the term: The original textual description of the place. (unchanged)
* Usage comments (recommendations regarding content, etc.):
* Examples: 25 km NNE Bariloche por R. Nac. 237
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/verbatimLocality-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): **DataSets/DataSet/Units/Unit/Gathering/LocalityText**
Kari Lintulaakso posted the comment below to the TDWG workspace in Slack (general channel) at 2021-03-18 5:46 AM PDT
Hi,
I have a question about https://dwc.tdwg.org/list/#dwc_verbatimLocality "The original textual description of the place."
It has a **ABCD equivalence: DataSets/DataSet/Units/Unit/Gathering/NamedAreas/NamedArea/AreaName** "Name of the gathering area (a geographic, geomorphological, geoecological, or administrative area)."
However, should the ABCD equivalence be **DataSets/DataSet/Units/Unit/Gathering/LocalityText**: "The original gathering locality data as appearing on a label or in an original entry, as a text string. This field should be transcribed verbatim from the specimen label." (edited)
|
1.0
|
Change term - verbatimLocality - ## Change term
* Submitter: Kari Lintulaakso
* Justification (why is this change necessary?): mapping to ABCD term is incorrect
* Proponents (who needs this change): Kari Lintulaakso (Stan Blum agrees)
Current Term definition: https://dwc.tdwg.org/terms/#dwc:verbatimLocality
Proposed new attributes of the term:
* Term name (in lowerCamelCase): verbatimLocality (unchanged)
* Organized in Class (e.g. Location, Taxon): Location (unchanged)
* Definition of the term: The original textual description of the place. (unchanged)
* Usage comments (recommendations regarding content, etc.):
* Examples: 25 km NNE Bariloche por R. Nac. 237
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/verbatimLocality-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): **DataSets/DataSet/Units/Unit/Gathering/LocalityText**
Kari Lintulaakso posted the comment below to the TDWG workspace in Slack (general channel) at 2021-03-18 5:46 AM PDT
Hi,
I have a question about https://dwc.tdwg.org/list/#dwc_verbatimLocality "The original textual description of the place."
It has a **ABCD equivalence: DataSets/DataSet/Units/Unit/Gathering/NamedAreas/NamedArea/AreaName** "Name of the gathering area (a geographic, geomorphological, geoecological, or administrative area)."
However, should the ABCD equivalence be **DataSets/DataSet/Units/Unit/Gathering/LocalityText**: "The original gathering locality data as appearing on a label or in an original entry, as a text string. This field should be transcribed verbatim from the specimen label." (edited)
|
process
|
change term verbatimlocality change term submitter kari lintulaakso justification why is this change necessary mapping to abcd term is incorrect proponents who needs this change kari lintulaakso stan blum agrees current term definition proposed new attributes of the term term name in lowercamelcase verbatimlocality unchanged organized in class e g location taxon location unchanged definition of the term the original textual description of the place unchanged usage comments recommendations regarding content etc examples km nne bariloche por r nac refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd or efg if applicable datasets dataset units unit gathering localitytext kari lintulaakso posted the comment below to the tdwg workspace in slack general channel at am pdt hi i have a question about the original textual description of the place it has a abcd equivalence datasets dataset units unit gathering namedareas namedarea areaname name of the gathering area a geographic geomorphological geoecological or administrative area however should the abcd equivalence be datasets dataset units unit gathering localitytext the original gathering locality data as appearing on a label or in an original entry as a text string this field should be transcribed verbatim from the specimen label edited
| 1
|
6,414
| 9,499,383,927
|
IssuesEvent
|
2019-04-24 06:13:19
|
kiwicom/orbit-components
|
https://api.github.com/repos/kiwicom/orbit-components
|
closed
|
Create <Popover /> component
|
Enhancement Processing
|
## Description
A popover can be used as a container for additional content that can be displayed on top of the page.
## Visual style

Zeplin: https://zpl.io/agrBvoM
### Additional information
- Popover can be used with different types of trigger, for example, `Button`, `ButtonLink`, `Tag` or `TextLink`. We shouldn't limit it to any specific component, a trigger can be basically anything clickable.
- There is similar behavior for our search forms (layer on top of the page, containing original input). Not sure if Popover should solve it, I see that more as a new component - something like InputPopover, or similar.
- You can find the proposed mobile behavior in Zeplin. It's copied from how Popovers behave in our mobile app (the Close button position). We need to discuss it and make it probably more consistent with Tooltip and Modal behavior (3 different patterns currently).
|
1.0
|
Create <Popover /> component - ## Description
A popover can be used as a container for additional content that can be displayed on top of the page.
## Visual style

Zeplin: https://zpl.io/agrBvoM
### Additional information
- Popover can be used with different types of trigger, for example, `Button`, `ButtonLink`, `Tag` or `TextLink`. We shouldn't limit it to any specific component, a trigger can be basically anything clickable.
- There is similar behavior for our search forms (layer on top of the page, containing original input). Not sure if Popover should solve it, I see that more as a new component - something like InputPopover, or similar.
- You can find the proposed mobile behavior in Zeplin. It's copied from how Popovers behave in our mobile app (the Close button position). We need to discuss it and make it probably more consistent with Tooltip and Modal behavior (3 different patterns currently).
|
process
|
create component description a popover can be used as a container for additional content that can be displayed on top of the page visual style zeplin additional information popover can be used with different types of trigger for example button buttonlink tag or textlink we shouldn t limit it to any specific component a trigger can be basically anything clickable there is similar behavior for our search forms layer on top of the page containing original input not sure if popover should solve it i see that more as a new component something like inputpopover or similar you can find the proposed mobile behavior in zeplin it s copied from how popovers behave in our mobile app the close button position we need to discuss it and make it probably more consistent with tooltip and modal behavior different patterns currently
| 1
|
19,131
| 25,185,682,926
|
IssuesEvent
|
2022-11-11 17:42:43
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[spanmetricsprocessor] Ability to either statefully route or aggregate span metrics across collectors
|
enhancement Stale processor/spanmetrics
|
**Is your feature request related to a problem? Please describe.**
The span metrics processor along with its examples provide documentation on how we can generate RED metrics and expose it against a prometheus endpoint that can be shipped to Prometheus like systems via remote write exporter. Based on high level experiments what we have observed is that if we are doing RED metric generation at a service, operation level, given that the same service operation name combination can go into several collectors. This means that each collector's prometheus endpoint would have the same set of label combinations. To be able to differentiate metrics coming in, things like `instance` or `collector_host` could be used as Prometheus would only append the first series and drop the rest when the same labels + timestamp combination is seen.
**Describe the solution you'd like**
We would like to be able to either aggregate at the collector level itself or make sure that all service+operations combination are handled always by the same collector. This will ensure that when ingesting to Prometheus, the data is accurate.
**Describe alternatives you've considered**
adding a `collector_host` dimension as part of the `dimensions` setting of spanmetricsprocessor. this however will multiply the cardinality by N times where N is the total number of collector instances. this also needs the collector to run as a statefulset so that the host name doesnt churn across restarts.
**Additional context**
Add any other context or screenshots about the feature request here.
|
1.0
|
[spanmetricsprocessor] Ability to either statefully route or aggregate span metrics across collectors - **Is your feature request related to a problem? Please describe.**
The span metrics processor along with its examples provide documentation on how we can generate RED metrics and expose it against a prometheus endpoint that can be shipped to Prometheus like systems via remote write exporter. Based on high level experiments what we have observed is that if we are doing RED metric generation at a service, operation level, given that the same service operation name combination can go into several collectors. This means that each collector's prometheus endpoint would have the same set of label combinations. To be able to differentiate metrics coming in, things like `instance` or `collector_host` could be used as Prometheus would only append the first series and drop the rest when the same labels + timestamp combination is seen.
**Describe the solution you'd like**
We would like to be able to either aggregate at the collector level itself or make sure that all service+operations combination are handled always by the same collector. This will ensure that when ingesting to Prometheus, the data is accurate.
**Describe alternatives you've considered**
adding a `collector_host` dimension as part of the `dimensions` setting of spanmetricsprocessor. this however will multiply the cardinality by N times where N is the total number of collector instances. this also needs the collector to run as a statefulset so that the host name doesnt churn across restarts.
**Additional context**
Add any other context or screenshots about the feature request here.
|
process
|
ability to either statefully route or aggregate span metrics across collectors is your feature request related to a problem please describe the span metrics processor along with its examples provide documentation on how we can generate red metrics and expose it against a prometheus endpoint that can be shipped to prometheus like systems via remote write exporter based on high level experiments what we have observed is that if we are doing red metric generation at a service operation level given that the same service operation name combination can go into several collectors this means that each collector s prometheus endpoint would have the same set of label combinations to be able to differentiate metrics coming in things like instance or collector host could be used as prometheus would only append the first series and drop the rest when the same labels timestamp combination is seen describe the solution you d like we would like to be able to either aggregate at the collector level itself or make sure that all service operations combination are handled always by the same collector this will ensure that when ingesting to prometheus the data is accurate describe alternatives you ve considered adding a collector host dimension as part of the dimensions setting of spanmetricsprocessor this however will multiply the cardinality by n times where n is the total number of collector instances this also needs the collector to run as a statefulset so that the host name doesnt churn across restarts additional context add any other context or screenshots about the feature request here
| 1
|
251,754
| 21,521,962,922
|
IssuesEvent
|
2022-04-28 14:54:26
|
damccorm/test-migration-target
|
https://api.github.com/repos/damccorm/test-migration-target
|
opened
|
beam_PreCommit_Python_Cron failing on test_create_uses_coder_for_pickling
|
test-failures test P2
|
https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/5219/
Imported from Jira [BEAM-13769](https://issues.apache.org/jira/browse/BEAM-13769)
Reported by: kileys.
|
2.0
|
beam_PreCommit_Python_Cron failing on test_create_uses_coder_for_pickling - https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/5219/
Imported from Jira [BEAM-13769](https://issues.apache.org/jira/browse/BEAM-13769)
Reported by: kileys.
|
non_process
|
beam precommit python cron failing on test create uses coder for pickling imported from jira reported by kileys
| 0
|
11,122
| 13,957,685,989
|
IssuesEvent
|
2020-10-24 08:08:48
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
BE: Harvester in processing state
|
BE - Belgium Geoportal Harvesting process
|
Dear,
Just like a couple of weeks ago the harvester for the Flemish/Belgian portal stays in processing state since last Friday.
Thx
Bart
|
1.0
|
BE: Harvester in processing state - Dear,
Just like a couple of weeks ago the harvester for the Flemish/Belgian portal stays in processing state since last Friday.
Thx
Bart
|
process
|
be harvester in processing state dear just like a couple of weeks ago the harvester for the flemish belgian portal stays in processing state since last friday thx bart
| 1
|
196,765
| 14,889,079,752
|
IssuesEvent
|
2021-01-20 20:51:32
|
Oldes/Rebol-issues
|
https://api.github.com/repos/Oldes/Rebol-issues
|
closed
|
remove-each does not work on all series, could do so, though, and also on gobs, maps
|
CC.resolved Oldes.resolved Test.written Type.wish
|
_Submitted by:_ **meijeru**
``` rebol
is there a compelling reason why remove-each only works on block!, paren! and any-string! ??
```
I can see sensible application on the other types in series!:
``` rebol
- binary! vector! image!
- any-path! is perhaps a bit more doubtful
```
``` rebol
why not gob! and map! also? (foreach does...)
```
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=806)** [ Version: alpha 55 Type: Wish Platform: All Category: Unspecified Reproduce: Always Fixed-in:alpha 61 ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/806</sup>
Comments:
---
> **Rebolbot** commented on May 13, 2009:
_Submitted by:_ **BrianH**
REMOVE-EACH should _not_ work on gobs (no sensible behavior) and images (removal of individual pixels is a bad idea), and probably shouldn't be needed for any-paths.
Good point on binary! and vector! - REMOVE-EACH should work on those. It should work explicitly on binary! since binary! should be removed from any-string! (but not series!).
It would be problematic to do this with map!, since that would break its similarity to object!, but it should be OK if you define "remove" as assigning none to a key. However, it is likely unnecessary.
---
> **Rebolbot** commented on May 14, 2009:
_Submitted by:_ **meijeru**
I agree on map!, because it can as well be done with foreach
---
> **Rebolbot** commented on Jun 24, 2009:
_Submitted by:_ **Carl**
REMOVE-EACH has been added for binary in A62. BrianH explains the other issues.
If you think it adds value for MAP!, submit a new "wish" ticket for that.
---
> **Rebolbot** commented on May 5, 2010:
_Submitted by:_ **BrianH**
Finally figured out a sensible behavior for REMOVE-EACH of gobs, and have created ticket #1597 to address it. Note that FOREACH at some point also stopped working for gobs, so #1596 addresses that.
---
> **Rebolbot** mentioned this issue on Jan 12, 2016:
> [REMOVE-EACH from gob!](https://github.com/Oldes/Rebol-issues/issues/1597)
---
> **Rebolbot** mentioned this issue on Oct 18, 2018:
> [Should image! be a part of series! ?](https://github.com/Oldes/Rebol-issues/issues/801)
---
> **Rebolbot** added the **Type.wish** on Jan 12, 2016
---
|
1.0
|
remove-each does not work on all series, could do so, though, and also on gobs, maps - _Submitted by:_ **meijeru**
``` rebol
is there a compelling reason why remove-each only works on block!, paren! and any-string! ??
```
I can see sensible application on the other types in series!:
``` rebol
- binary! vector! image!
- any-path! is perhaps a bit more doubtful
```
``` rebol
why not gob! and map! also? (foreach does...)
```
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=806)** [ Version: alpha 55 Type: Wish Platform: All Category: Unspecified Reproduce: Always Fixed-in:alpha 61 ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/806</sup>
Comments:
---
> **Rebolbot** commented on May 13, 2009:
_Submitted by:_ **BrianH**
REMOVE-EACH should _not_ work on gobs (no sensible behavior) and images (removal of individual pixels is a bad idea), and probably shouldn't be needed for any-paths.
Good point on binary! and vector! - REMOVE-EACH should work on those. It should work explicitly on binary! since binary! should be removed from any-string! (but not series!).
It would be problematic to do this with map!, since that would break its similarity to object!, but it should be OK if you define "remove" as assigning none to a key. However, it is likely unnecessary.
---
> **Rebolbot** commented on May 14, 2009:
_Submitted by:_ **meijeru**
I agree on map!, because it can as well be done with foreach
---
> **Rebolbot** commented on Jun 24, 2009:
_Submitted by:_ **Carl**
REMOVE-EACH has been added for binary in A62. BrianH explains the other issues.
If you think it adds value for MAP!, submit a new "wish" ticket for that.
---
> **Rebolbot** commented on May 5, 2010:
_Submitted by:_ **BrianH**
Finally figured out a sensible behavior for REMOVE-EACH of gobs, and have created ticket #1597 to address it. Note that FOREACH at some point also stopped working for gobs, so #1596 addresses that.
---
> **Rebolbot** mentioned this issue on Jan 12, 2016:
> [REMOVE-EACH from gob!](https://github.com/Oldes/Rebol-issues/issues/1597)
---
> **Rebolbot** mentioned this issue on Oct 18, 2018:
> [Should image! be a part of series! ?](https://github.com/Oldes/Rebol-issues/issues/801)
---
> **Rebolbot** added the **Type.wish** on Jan 12, 2016
---
|
non_process
|
remove each does not work on all series could do so though and also on gobs maps submitted by meijeru rebol is there a compelling reason why remove each only works on block paren and any string i can see sensible application on the other types in series rebol binary vector image any path is perhaps a bit more doubtful rebol why not gob and map also foreach does imported from imported from comments rebolbot commented on may submitted by brianh remove each should not work on gobs no sensible behavior and images removal of individual pixels is a bad idea and probably shouldn t be needed for any paths good point on binary and vector remove each should work on those it should work explicitly on binary since binary should be removed from any string but not series it would be problematic to do this with map since that would break its similarity to object but it should be ok if you define remove as assigning none to a key however it is likely unnecessary rebolbot commented on may submitted by meijeru i agree on map because it can as well be done with foreach rebolbot commented on jun submitted by carl remove each has been added for binary in brianh explains the other issues if you think it adds value for map submit a new wish ticket for that rebolbot commented on may submitted by brianh finally figured out a sensible behavior for remove each of gobs and have created ticket to address it note that foreach at some point also stopped working for gobs so addresses that rebolbot mentioned this issue on jan rebolbot mentioned this issue on oct rebolbot added the type wish on jan
| 0
|
22,284
| 30,834,304,800
|
IssuesEvent
|
2023-08-02 06:00:04
|
Open-EO/openeo-processes
|
https://api.github.com/repos/Open-EO/openeo-processes
|
opened
|
medoid process
|
new process
|
PS: this is a draft, needs some work, but have to get started somewhere, improvements welcome
**medoid**
## Context
In compositing methods, from a set of spectral band values, users want to select the one that is the least dissimilar to all other values. This is exactly what medoid does. The concept can also be seen as a rank composite, where the rank band is the sum of distances to all other elements in the set.
## Summary
From a set of spectral band values, selects the element that is the least dissimilar to all other elements in the set.
## Description
Medoids are representative objects of a data set whose sum of dissimilarities to all the objects in the data set is minimal.
## Parameters
### `data`
**Optional:** no
#### Description
N-dimensional input data cube over which the medoid is to be computed
### `row_dimensions`
**Optional:** no
#### Description
The dimension(s) that define a single object. For instance, the 'bands' dimension often defines objects when compositing.
#### Data Type
data cube
## Categories
* math
## Links to additional resources (optional)
* https://en.wikipedia.org/wiki/Medoid
|
1.0
|
medoid process - PS: this is a draft, needs some work, but have to get started somewhere, improvements welcome
**medoid**
## Context
In compositing methods, from a set of spectral band values, users want to select the one that is the least dissimilar to all other values. This is exactly what medoid does. The concept can also be seen as a rank composite, where the rank band is the sum of distances to all other elements in the set.
## Summary
From a set of spectral band values, selects the element that is the least dissimilar to all other elements in the set.
## Description
Medoids are representative objects of a data set whose sum of dissimilarities to all the objects in the data set is minimal.
## Parameters
### `data`
**Optional:** no
#### Description
N-dimensional input data cube over which the medoid is to be computed
### `row_dimensions`
**Optional:** no
#### Description
The dimension(s) that define a single object. For instance, the 'bands' dimension often defines objects when compositing.
#### Data Type
data cube
## Categories
* math
## Links to additional resources (optional)
* https://en.wikipedia.org/wiki/Medoid
|
process
|
medoid process ps this is a draft needs some work but have to get started somewhere improvements welcome medoid context in compositing methods from a set of spectral band values users want to select the one that is the least dissimilar to all other values this is exactly what medoid does the concept can also be seen as a rank composite where the rank band is the sum of distances to all other elements in the set summary from a set of spectral band values selects the element that is the least dissimilar to all other elements in the set description medoids are representative objects of a data set whose sum of dissimilarities to all the objects in the data set is minimal parameters data optional no description n dimensional input data cube over which the medoid is to be computed row dimensions optional no description the dimension s that define a single object for instance the bands dimension often defines objects when compositing data type data cube categories math links to additional resources optional
| 1
|
13,364
| 15,830,907,974
|
IssuesEvent
|
2021-04-06 13:04:04
|
googleapis/python-os-config
|
https://api.github.com/repos/googleapis/python-os-config
|
closed
|
Release as production/stable
|
api: osconfig type: process
|
Package name: **google-cloud-os-config**
Current release: **beta**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
:calendar: **DO NOT RELEASE BEFORE 2020-07-08** :calendar:
## Required
- [x] 28 days elapsed since last beta release with new API surface
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
1.0
|
Release as production/stable - Package name: **google-cloud-os-config**
Current release: **beta**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
:calendar: **DO NOT RELEASE BEFORE 2020-07-08** :calendar:
## Required
- [x] 28 days elapsed since last beta release with new API surface
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
process
|
release as production stable package name google cloud os config current release beta proposed release ga instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue calendar do not release before calendar required days elapsed since last beta release with new api surface server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
| 1
|
182,429
| 30,848,075,173
|
IssuesEvent
|
2023-08-02 14:54:13
|
CDCgov/prime-reportstream
|
https://api.github.com/repos/CDCgov/prime-reportstream
|
opened
|
Consolidating Figma file so that it is clear what is ready for dev
|
design experience
|
## User story
As a designer, I want to create a single source of truth for engineers so that it's clear what's ready for development.
## Background & context
_A brief description of why we are doing this task, what user needs it meets, notes on the
approach if appropriate, and any useful context that would inform the reader._
## Open questions
_A bullet list format of any unresolved questions that will need answers in order to complete this
task_
- ...
- ...
## Working links
_Links to any Figma, gDoc, or other working document_
- ...
- ...
## Acceptance criteria
- [ ] Consolidate figma file
|
1.0
|
Consolidating Figma file so that it is clear what is ready for dev - ## User story
As a designer, I want to create a single source of truth for engineers so that it's clear what's ready for development.
## Background & context
_A brief description of why we are doing this task, what user needs it meets, notes on the
approach if appropriate, and any useful context that would inform the reader._
## Open questions
_A bullet list format of any unresolved questions that will need answers in order to complete this
task_
- ...
- ...
## Working links
_Links to any Figma, gDoc, or other working document_
- ...
- ...
## Acceptance criteria
- [ ] Consolidate figma file
|
non_process
|
consolidating figma file so that it is clear what is ready for dev user story as a designer i want to create a single source of truth for engineers so that it s clear what s ready for development background context a brief description of why we are doing this task what user needs it meets notes on the approach if appropriate and any useful context that would inform the reader open questions a bullet list format of any unresolved questions that will need answers in order to complete this task working links links to any figma gdoc or other working document acceptance criteria consolidate figma file
| 0
|
19,830
| 26,221,042,364
|
IssuesEvent
|
2023-01-04 14:52:15
|
xcesco/kripton
|
https://api.github.com/repos/xcesco/kripton
|
closed
|
Add annotation processor parameter to generate date on schema files
|
enhancement orm module annotation-processor module
|
The default behavior till now, when a schema file is generated is to include the generation date. This implies that at every generation, the schema file is updated. To avoid an unuseful commit, a new parameter to the preprocessor needs to be defined.
|
1.0
|
Add annotation processor parameter to generate date on schema files - The default behavior till now, when a schema file is generated is to include the generation date. This implies that at every generation, the schema file is updated. To avoid an unuseful commit, a new parameter to the preprocessor needs to be defined.
|
process
|
add annotation processor parameter to generate date on schema files the default behavior till now when a schema file is generated is to include the generation date this implies that at every generation the schema file is updated to avoid an unuseful commit a new parameter to the preprocessor needs to be defined
| 1
|
17,541
| 23,351,823,105
|
IssuesEvent
|
2022-08-10 01:23:10
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
closed
|
Warning: a recent release failed
|
type: process
|
The following release PRs may have failed:
* #19003 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #19002 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #19001 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #19000 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #18999 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #18984 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #18993 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
|
1.0
|
Warning: a recent release failed - The following release PRs may have failed:
* #19003 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #19002 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #19001 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #19000 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #18999 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #18984 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #18993 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
|
process
|
warning a recent release failed the following release prs may have failed the release job is autorelease pending but expected autorelease published the release job is autorelease pending but expected autorelease published the release job is autorelease pending but expected autorelease published the release job is autorelease pending but expected autorelease published the release job is autorelease pending but expected autorelease published the release job is autorelease pending but expected autorelease published the release job is autorelease pending but expected autorelease published
| 1
|
12,803
| 15,181,372,817
|
IssuesEvent
|
2021-02-15 03:19:35
|
esmero/strawberryfield
|
https://api.github.com/repos/esmero/strawberryfield
|
closed
|
Edge case, strange situation, and ADO without a SBF field / a fix.
|
Digital Preservation Events and Subscriber JSON Postprocessors Symfony Services Typed Data and Search bug
|
# What?
Ok, this is super strange and requires more research, how it happened, why it happened.
@alliomeria during RC1 testing and content preparation managed to create an ADO without any SBF data. Not even the field itself. It is practically impossible since the SBF (field_descriptive_metadata) is required. The consequence of this is that during a second save, not with correct metadata, we fail on accessing the `->original->field_descriptive_metadata` during the AS Structure Event subscriber because we assume its going to be there. Well its not.
How to reproduce? No idea (yet) but the fix is simply to check if the previous revision in fact has a SBF before trying to act on it.
This requires more log checking and understanding how one can actually get around all the Data consistency in Drupal to not ingest a given field that is required, but, the fix I prepared is consistent and allows us to keep processing even in such strange event.
More later on this, if we ever know what happened.
|
1.0
|
Edge case, strange situation, and ADO without a SBF field / a fix. - # What?
Ok, this is super strange and requires more research, how it happened, why it happened.
@alliomeria during RC1 testing and content preparation managed to create an ADO without any SBF data. Not even the field itself. It is practically impossible since the SBF (field_descriptive_metadata) is required. The consequence of this is that during a second save, not with correct metadata, we fail on accessing the `->original->field_descriptive_metadata` during the AS Structure Event subscriber because we assume its going to be there. Well its not.
How to reproduce? No idea (yet) but the fix is simply to check if the previous revision in fact has a SBF before trying to act on it.
This requires more log checking and understanding how one can actually get around all the Data consistency in Drupal to not ingest a given field that is required, but, the fix I prepared is consistent and allows us to keep processing even in such strange event.
More later on this, if we ever know what happened.
|
process
|
edge case strange situation and ado without a sbf field a fix what ok this is super strange and requires more research how it happened why it happened alliomeria during testing and content preparation managed to create an ado without any sbf data not even the field itself it is practically impossible since the sbf field descriptive metadata is required the consequence of this is that during a second save not with correct metadata we fail on accessing the original field descriptive metadata during the as structure event subscriber because we assume its going to be there well its not how to reproduce no idea yet but the fix is simply to check if the previous revision in fact has a sbf before trying to act on it this requires more log checking and understanding how one can actually get around all the data consistency in drupal to not ingest a given field that is required but the fix i prepared is consistent and allows us to keep processing even in such strange event more later on this if we ever know what happened
| 1
|
18,565
| 24,555,784,233
|
IssuesEvent
|
2022-10-12 15:45:45
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Mobile apps] Not able to delete the participant account > Error message is getting displayed
|
Bug Blocker P0 iOS Android Process: Fixed Process: Tested QA Process: Tested dev
|
AR: Not able to delete the participant account > Error message is getting displayed
ER: Participants should be able to delete their app account

|
3.0
|
[Mobile apps] Not able to delete the participant account > Error message is getting displayed - AR: Not able to delete the participant account > Error message is getting displayed
ER: Participants should be able to delete their app account

|
process
|
not able to delete the participant account error message is getting displayed ar not able to delete the participant account error message is getting displayed er participants should be able to delete their app account
| 1
|
206,033
| 7,108,132,206
|
IssuesEvent
|
2018-01-16 22:36:19
|
GoogleChrome/lighthouse
|
https://api.github.com/repos/GoogleChrome/lighthouse
|
closed
|
Angular service worker
|
needs-more-info needs-priority pending-close
|
As soon the service worker in angular 5 is enabled lightouse is unable to complete the tests.
Output and then waits forever:
```
ChromeLauncher Waiting for browser. +0ms
ChromeLauncher Waiting for browser... +2ms
ChromeLauncher Waiting for browser...√ +526ms
status Initializing… +1s
status Loading page & waiting for onload URL, Viewport, ViewportDimensions, ThemeColor, Manifest, RuntimeExceptions, ChromeConsoleMessages, ImageUsage, Accessibility, EventListeners, AnchorsWithNoRelNoopener, AppCacheManifest, DOMStats, JSLibraries, OptimizedImages, PasswordInputsWithPreventedPaste, ResponseCompression, TagsBlockingFirstPaint, WebSQL, MetaDescription, CrawlableLinks, MetaRobots +476ms
statusEnd Loading page & waiting for onload +21s
status Retrieving trace +1ms
status Retrieving devtoolsLog and network records +1s
status Retrieving: URL +5ms
status Retrieving: Viewport +1ms
status Retrieving: ViewportDimensions +5ms
status Retrieving: ThemeColor +2ms
status Retrieving: Manifest +3ms
status Retrieving: RuntimeExceptions +16ms
status Retrieving: ChromeConsoleMessages +1ms
status Retrieving: ImageUsage +3ms
status Retrieving: Accessibility +24ms
status Retrieving: EventListeners +426ms
status Retrieving: AnchorsWithNoRelNoopener +389ms
status Retrieving: AppCacheManifest +2ms
status Retrieving: DOMStats +3ms
status Retrieving: JSLibraries +26ms
status Retrieving: OptimizedImages +15ms
status Retrieving: PasswordInputsWithPreventedPaste +353ms
status Retrieving: ResponseCompression +3ms
status Retrieving: TagsBlockingFirstPaint +1ms
status Retrieving: WebSQL +3ms
status Retrieving: MetaDescription +502ms
status Retrieving: CrawlableLinks +5ms
status Retrieving: MetaRobots +5ms
status Loading page & waiting for onload ServiceWorker, Offline, StartUrl +309ms
statusEnd Loading page & waiting for onload +935ms
````
|
1.0
|
Angular service worker - As soon the service worker in angular 5 is enabled lightouse is unable to complete the tests.
Output and then waits forever:
```
ChromeLauncher Waiting for browser. +0ms
ChromeLauncher Waiting for browser... +2ms
ChromeLauncher Waiting for browser...√ +526ms
status Initializing… +1s
status Loading page & waiting for onload URL, Viewport, ViewportDimensions, ThemeColor, Manifest, RuntimeExceptions, ChromeConsoleMessages, ImageUsage, Accessibility, EventListeners, AnchorsWithNoRelNoopener, AppCacheManifest, DOMStats, JSLibraries, OptimizedImages, PasswordInputsWithPreventedPaste, ResponseCompression, TagsBlockingFirstPaint, WebSQL, MetaDescription, CrawlableLinks, MetaRobots +476ms
statusEnd Loading page & waiting for onload +21s
status Retrieving trace +1ms
status Retrieving devtoolsLog and network records +1s
status Retrieving: URL +5ms
status Retrieving: Viewport +1ms
status Retrieving: ViewportDimensions +5ms
status Retrieving: ThemeColor +2ms
status Retrieving: Manifest +3ms
status Retrieving: RuntimeExceptions +16ms
status Retrieving: ChromeConsoleMessages +1ms
status Retrieving: ImageUsage +3ms
status Retrieving: Accessibility +24ms
status Retrieving: EventListeners +426ms
status Retrieving: AnchorsWithNoRelNoopener +389ms
status Retrieving: AppCacheManifest +2ms
status Retrieving: DOMStats +3ms
status Retrieving: JSLibraries +26ms
status Retrieving: OptimizedImages +15ms
status Retrieving: PasswordInputsWithPreventedPaste +353ms
status Retrieving: ResponseCompression +3ms
status Retrieving: TagsBlockingFirstPaint +1ms
status Retrieving: WebSQL +3ms
status Retrieving: MetaDescription +502ms
status Retrieving: CrawlableLinks +5ms
status Retrieving: MetaRobots +5ms
status Loading page & waiting for onload ServiceWorker, Offline, StartUrl +309ms
statusEnd Loading page & waiting for onload +935ms
````
|
non_process
|
angular service worker as soon the service worker in angular is enabled lightouse is unable to complete the tests output and then waits forever chromelauncher waiting for browser chromelauncher waiting for browser chromelauncher waiting for browser √ status initializing… status loading page waiting for onload url viewport viewportdimensions themecolor manifest runtimeexceptions chromeconsolemessages imageusage accessibility eventlisteners anchorswithnorelnoopener appcachemanifest domstats jslibraries optimizedimages passwordinputswithpreventedpaste responsecompression tagsblockingfirstpaint websql metadescription crawlablelinks metarobots statusend loading page waiting for onload status retrieving trace status retrieving devtoolslog and network records status retrieving url status retrieving viewport status retrieving viewportdimensions status retrieving themecolor status retrieving manifest status retrieving runtimeexceptions status retrieving chromeconsolemessages status retrieving imageusage status retrieving accessibility status retrieving eventlisteners status retrieving anchorswithnorelnoopener status retrieving appcachemanifest status retrieving domstats status retrieving jslibraries status retrieving optimizedimages status retrieving passwordinputswithpreventedpaste status retrieving responsecompression status retrieving tagsblockingfirstpaint status retrieving websql status retrieving metadescription status retrieving crawlablelinks status retrieving metarobots status loading page waiting for onload serviceworker offline starturl statusend loading page waiting for onload
| 0
|
185,319
| 21,786,157,494
|
IssuesEvent
|
2022-05-14 06:44:30
|
classicvalues/AA-ionic-login
|
https://api.github.com/repos/classicvalues/AA-ionic-login
|
closed
|
CVE-2021-44908 (High) detected in sails-1.5.2.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-44908 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sails-1.5.2.tgz</b></p></summary>
<p>API-driven framework for building realtime apps, using MVC conventions (based on Express and Socket.io)</p>
<p>Library home page: <a href="https://registry.npmjs.org/sails/-/sails-1.5.2.tgz">https://registry.npmjs.org/sails/-/sails-1.5.2.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/sails/package.json</p>
<p>
Dependency Hierarchy:
- :x: **sails-1.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/classicvalues/AA-ionic-login/commit/d4f4480b7ddd8c520e4b02ea2008621ded4be6ab">d4f4480b7ddd8c520e4b02ea2008621ded4be6ab</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
SailsJS Sails.js <=1.4.0 is vulnerable to Prototype Pollution via controller/load-action-modules.js, function loadActionModules().
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44908>CVE-2021-44908</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44908">https://nvd.nist.gov/vuln/detail/CVE-2021-44908</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: sails - 1.0.0,0.12.10,0.12.2-0,0.12.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-44908 (High) detected in sails-1.5.2.tgz - autoclosed - ## CVE-2021-44908 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sails-1.5.2.tgz</b></p></summary>
<p>API-driven framework for building realtime apps, using MVC conventions (based on Express and Socket.io)</p>
<p>Library home page: <a href="https://registry.npmjs.org/sails/-/sails-1.5.2.tgz">https://registry.npmjs.org/sails/-/sails-1.5.2.tgz</a></p>
<p>Path to dependency file: /Application/package.json</p>
<p>Path to vulnerable library: /Application/node_modules/sails/package.json</p>
<p>
Dependency Hierarchy:
- :x: **sails-1.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/classicvalues/AA-ionic-login/commit/d4f4480b7ddd8c520e4b02ea2008621ded4be6ab">d4f4480b7ddd8c520e4b02ea2008621ded4be6ab</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
SailsJS Sails.js <=1.4.0 is vulnerable to Prototype Pollution via controller/load-action-modules.js, function loadActionModules().
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44908>CVE-2021-44908</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44908">https://nvd.nist.gov/vuln/detail/CVE-2021-44908</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: sails - 1.0.0,0.12.10,0.12.2-0,0.12.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in sails tgz autoclosed cve high severity vulnerability vulnerable library sails tgz api driven framework for building realtime apps using mvc conventions based on express and socket io library home page a href path to dependency file application package json path to vulnerable library application node modules sails package json dependency hierarchy x sails tgz vulnerable library found in head commit a href found in base branch master vulnerability details sailsjs sails js is vulnerable to prototype pollution via controller load action modules js function loadactionmodules publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution sails step up your open source security game with whitesource
| 0
|
2,797
| 5,728,382,364
|
IssuesEvent
|
2017-04-21 00:43:19
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents failed in CI with "System.ComponentModel.Win32Exception"
|
area-System.Diagnostics.Process blocking-clean-ci test-run-core
|
Failed test: System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents
Configuration: OuterLoop_Fedora23_release (build#147)
Detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_fedora23_release/147/consoleText
~~~
System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents(enable: True) [FAIL]
System.ComponentModel.Win32Exception : No such process
Stack Trace:
command exited with ExitCode: 0
/mnt/resource/j/workspace/dotnet_corefx/master/outerloop_fedora23_release/src/System.IO.MemoryMappedFiles/tests/Performance
Finished running tests. End time=18:56:36. Return value was 0
/mnt/resource/j/workspace/dotnet_corefx/master/outerloop_fedora23_release/src/System.Diagnostics.Process/src/System/Diagnostics/Process.Unix.cs(58,0): at System.Diagnostics.Process.Kill()
/mnt/resource/j/workspace/dotnet_corefx/master/outerloop_fedora23_release/src/System.Diagnostics.Process/tests/ProcessTestBase.cs(76,0): at System.Diagnostics.Tests.ProcessTestBase.StartSleepKillWait(Process p)
/mnt/resource/j/workspace/dotnet_corefx/master/outerloop_fedora23_release/src/System.Diagnostics.Process/tests/ProcessTests.cs(117,0): at System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents(Nullable`1 enable)
~~~
|
1.0
|
System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents failed in CI with "System.ComponentModel.Win32Exception" - Failed test: System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents
Configuration: OuterLoop_Fedora23_release (build#147)
Detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_fedora23_release/147/consoleText
~~~
System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents(enable: True) [FAIL]
System.ComponentModel.Win32Exception : No such process
Stack Trace:
command exited with ExitCode: 0
/mnt/resource/j/workspace/dotnet_corefx/master/outerloop_fedora23_release/src/System.IO.MemoryMappedFiles/tests/Performance
Finished running tests. End time=18:56:36. Return value was 0
/mnt/resource/j/workspace/dotnet_corefx/master/outerloop_fedora23_release/src/System.Diagnostics.Process/src/System/Diagnostics/Process.Unix.cs(58,0): at System.Diagnostics.Process.Kill()
/mnt/resource/j/workspace/dotnet_corefx/master/outerloop_fedora23_release/src/System.Diagnostics.Process/tests/ProcessTestBase.cs(76,0): at System.Diagnostics.Tests.ProcessTestBase.StartSleepKillWait(Process p)
/mnt/resource/j/workspace/dotnet_corefx/master/outerloop_fedora23_release/src/System.Diagnostics.Process/tests/ProcessTests.cs(117,0): at System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents(Nullable`1 enable)
~~~
|
process
|
system diagnostics tests processtests testenableraiseevents failed in ci with system componentmodel failed test system diagnostics tests processtests testenableraiseevents configuration outerloop release build detail system diagnostics tests processtests testenableraiseevents enable true system componentmodel no such process stack trace command exited with exitcode mnt resource j workspace dotnet corefx master outerloop release src system io memorymappedfiles tests performance finished running tests end time return value was mnt resource j workspace dotnet corefx master outerloop release src system diagnostics process src system diagnostics process unix cs at system diagnostics process kill mnt resource j workspace dotnet corefx master outerloop release src system diagnostics process tests processtestbase cs at system diagnostics tests processtestbase startsleepkillwait process p mnt resource j workspace dotnet corefx master outerloop release src system diagnostics process tests processtests cs at system diagnostics tests processtests testenableraiseevents nullable enable
| 1
|
15,153
| 5,071,987,968
|
IssuesEvent
|
2016-12-26 18:04:53
|
exercism/xjava
|
https://api.github.com/repos/exercism/xjava
|
closed
|
raindrops: make test failures easier to troubleshoot
|
code good first patch
|
[`raindrops`](https://github.com/exercism/xjava/blob/master/exercises/raindrops/src/test/java/RaindropsTest.java) uses the JUnit [`Parameterized`](https://github.com/junit-team/junit4/wiki/parameterized-tests) test runner. This was done to make the test more compact (and once you learn how the mechanism works, easier to read).
However, when a test fails, the error message does not indicate which value failed. This makes it really difficult to know why the test failed.
Test failures should clearly indicate what failed.
**To Do:**
- [x] ensure that this exercise is using JUnit 4.12 or later
- [ ] add a format string to the `@Parameters` annotation.
_(ref: #147)_
|
1.0
|
raindrops: make test failures easier to troubleshoot - [`raindrops`](https://github.com/exercism/xjava/blob/master/exercises/raindrops/src/test/java/RaindropsTest.java) uses the JUnit [`Parameterized`](https://github.com/junit-team/junit4/wiki/parameterized-tests) test runner. This was done to make the test more compact (and once you learn how the mechanism works, easier to read).
However, when a test fails, the error message does not indicate which value failed. This makes it really difficult to know why the test failed.
Test failures should clearly indicate what failed.
**To Do:**
- [x] ensure that this exercise is using JUnit 4.12 or later
- [ ] add a format string to the `@Parameters` annotation.
_(ref: #147)_
|
non_process
|
raindrops make test failures easier to troubleshoot uses the junit test runner this was done to make the test more compact and once you learn how the mechanism works easier to read however when a test fails the error message does not indicate which value failed this makes it really difficult to know why the test failed test failures should clearly indicate what failed to do ensure that this exercise is using junit or later add a format string to the parameters annotation ref
| 0
|
1,303
| 3,851,788,753
|
IssuesEvent
|
2016-04-06 04:49:08
|
ComputationWithBoundedResources/tct-trs
|
https://api.github.com/repos/ComputationWithBoundedResources/tct-trs
|
closed
|
bounds: start terms
|
bug processor
|
Matchbounds processor returns an error for the following example when used with `tct-hoca`, though it works when taking the TRS resulting from the `hoca` transformation. The only difference is that the start terms of the former system is a subset (only main) of the start terms of latter one.
Probably `bounds` doesn't work properly if the star terms are a subset of the defined function symbols.
```
type 'a list = Nil | Cons of 'a * 'a list
;;
type Unit = Unit
;;
type 'a lazy_list = NilL | ConsL of 'a * (Unit -> 'a lazy_list)
;;
type nat = 0 | S of nat
;;
let rec take_l n xs =
match force xs with
| NilL -> Nil
| ConsL(x,xs') ->
match n with
| 0 -> Nil
| S(n') -> Cons(x,take_l n' xs')
;;
let rec zeros = lazy ConsL(0, zeros)
;;
let take_lazy n = take_l n zeros
;;
```
|
1.0
|
bounds: start terms - Matchbounds processor returns an error for the following example when used with `tct-hoca`, though it works when taking the TRS resulting from the `hoca` transformation. The only difference is that the start terms of the former system is a subset (only main) of the start terms of latter one.
Probably `bounds` doesn't work properly if the star terms are a subset of the defined function symbols.
```
type 'a list = Nil | Cons of 'a * 'a list
;;
type Unit = Unit
;;
type 'a lazy_list = NilL | ConsL of 'a * (Unit -> 'a lazy_list)
;;
type nat = 0 | S of nat
;;
let rec take_l n xs =
match force xs with
| NilL -> Nil
| ConsL(x,xs') ->
match n with
| 0 -> Nil
| S(n') -> Cons(x,take_l n' xs')
;;
let rec zeros = lazy ConsL(0, zeros)
;;
let take_lazy n = take_l n zeros
;;
```
|
process
|
bounds start terms matchbounds processor returns an error for the following example when used with tct hoca though it works when taking the trs resulting from the hoca transformation the only difference is that the start terms of the former system is a subset only main of the start terms of latter one probably bounds doesn t work properly if the star terms are a subset of the defined function symbols type a list nil cons of a a list type unit unit type a lazy list nill consl of a unit a lazy list type nat s of nat let rec take l n xs match force xs with nill nil consl x xs match n with nil s n cons x take l n xs let rec zeros lazy consl zeros let take lazy n take l n zeros
| 1
|
3,464
| 6,545,573,452
|
IssuesEvent
|
2017-09-04 06:07:16
|
renocollective/member-portal
|
https://api.github.com/repos/renocollective/member-portal
|
opened
|
Discuss and add a Code of Conduct
|
docs process
|
The [Contributor Covenant](https://www.contributor-covenant.org/) is a good option, though there are others.
Options should be discussed, as well as how it will be enforced and who will be responsible for enforcement.
Once a choice has been made, project documentation should be updated accordingly.
|
1.0
|
Discuss and add a Code of Conduct - The [Contributor Covenant](https://www.contributor-covenant.org/) is a good option, though there are others.
Options should be discussed, as well as how it will be enforced and who will be responsible for enforcement.
Once a choice has been made, project documentation should be updated accordingly.
|
process
|
discuss and add a code of conduct the is a good option though there are others options should be discussed as well as how it will be enforced and who will be responsible for enforcement once a choice has been made project documentation should be updated accordingly
| 1
|
37,219
| 9,979,805,735
|
IssuesEvent
|
2019-07-10 00:33:25
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
opened
|
`libapp.so` not found on Android 4.1.2
|
a: build severe: crash ▣ platform-android
|
## Steps to Reproduce
1. `flutter build apk `
2. `adb install <apk>` on a device running Android 4.1.2 (I used Galaxy S3 mini)
3. The app crashes and the logcat indicates:
```
[ERROR:flutter/fml/platform/posix/native_library_posix.cc(16)] Could not open library 'libapp.so' due to error 'Cannot load library: load_library[1093]: Library 'libapp.so' not found'.
07-10 00:16:50.298 8739-8739/? E/flutter: [ERROR:flutter/fml/platform/posix/native_library_posix.cc(16)] Could not open library 'libapp.so' due to error 'Cannot load library: load_library[1093]: Library 'libapp.so' not found'.
07-10 00:16:50.298 8739-8739/? E/flutter: [ERROR:flutter/runtime/dart_vm_data.cc(19)] VM snapshot invalid and could not be inferred from settings.
07-10 00:16:50.298 8739-8739/? E/flutter: [ERROR:flutter/runtime/dart_vm.cc(238)] Could not setup VM data to bootstrap the VM from.
07-10 00:16:50.298 8739-8739/? E/flutter: [ERROR:flutter/runtime/dart_vm_lifecycle.cc(89)] Could not create Dart VM instance.
07-10 00:16:50.298 8739-8739/? A/flutter: [FATAL:flutter/shell/common/shell.cc(218)] Check failed: vm. Must be able to initialize the VM.
```
|
1.0
|
`libapp.so` not found on Android 4.1.2 - ## Steps to Reproduce
1. `flutter build apk `
2. `adb install <apk>` on a device running Android 4.1.2 (I used Galaxy S3 mini)
3. The app crashes and the logcat indicates:
```
[ERROR:flutter/fml/platform/posix/native_library_posix.cc(16)] Could not open library 'libapp.so' due to error 'Cannot load library: load_library[1093]: Library 'libapp.so' not found'.
07-10 00:16:50.298 8739-8739/? E/flutter: [ERROR:flutter/fml/platform/posix/native_library_posix.cc(16)] Could not open library 'libapp.so' due to error 'Cannot load library: load_library[1093]: Library 'libapp.so' not found'.
07-10 00:16:50.298 8739-8739/? E/flutter: [ERROR:flutter/runtime/dart_vm_data.cc(19)] VM snapshot invalid and could not be inferred from settings.
07-10 00:16:50.298 8739-8739/? E/flutter: [ERROR:flutter/runtime/dart_vm.cc(238)] Could not setup VM data to bootstrap the VM from.
07-10 00:16:50.298 8739-8739/? E/flutter: [ERROR:flutter/runtime/dart_vm_lifecycle.cc(89)] Could not create Dart VM instance.
07-10 00:16:50.298 8739-8739/? A/flutter: [FATAL:flutter/shell/common/shell.cc(218)] Check failed: vm. Must be able to initialize the VM.
```
|
non_process
|
libapp so not found on android steps to reproduce flutter build apk adb install on a device running android i used galaxy mini the app crashes and the logcat indicates could not open library libapp so due to error cannot load library load library library libapp so not found e flutter could not open library libapp so due to error cannot load library load library library libapp so not found e flutter vm snapshot invalid and could not be inferred from settings e flutter could not setup vm data to bootstrap the vm from e flutter could not create dart vm instance a flutter check failed vm must be able to initialize the vm
| 0
|
89,610
| 15,831,470,979
|
IssuesEvent
|
2021-04-06 13:40:08
|
azmathasan92/concourse-ci-cd
|
https://api.github.com/repos/azmathasan92/concourse-ci-cd
|
opened
|
CVE-2020-11111 (High) detected in jackson-databind-2.9.6.jar
|
security vulnerability
|
## CVE-2020-11111 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: concourse-ci-cd/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-webflux-2.0.4.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.0.4.RELEASE.jar
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/azmathasan92/concourse-ci-cd/commits/25189b3c991f7766c09157948e0bc21f27ada4f9">25189b3c991f7766c09157948e0bc21f27ada4f9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.activemq.* (aka activemq-jms, activemq-core, activemq-pool, and activemq-pool-jms).
<p>Publish Date: 2020-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11111>CVE-2020-11111</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113</a></p>
<p>Release Date: 2020-03-31</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-11111 (High) detected in jackson-databind-2.9.6.jar - ## CVE-2020-11111 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: concourse-ci-cd/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-webflux-2.0.4.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.0.4.RELEASE.jar
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/azmathasan92/concourse-ci-cd/commits/25189b3c991f7766c09157948e0bc21f27ada4f9">25189b3c991f7766c09157948e0bc21f27ada4f9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.activemq.* (aka activemq-jms, activemq-core, activemq-pool, and activemq-pool-jms).
<p>Publish Date: 2020-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11111>CVE-2020-11111</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113</a></p>
<p>Release Date: 2020-03-31</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file concourse ci cd pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter webflux release jar root library spring boot starter json release jar x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache activemq aka activemq jms activemq core activemq pool and activemq pool jms publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
| 0
|
183,361
| 31,390,352,799
|
IssuesEvent
|
2023-08-26 08:56:59
|
MCreator/MCreator
|
https://api.github.com/repos/MCreator/MCreator
|
closed
|
Texture file creation problem during OBJ model import
|
works as designed invalid: support request
|
### Issue description
MCreator seems to have problems creating a .textures file for some obj / mtl files.
After importing the object.obj or bush01.obj and mtl file, MCreator does not open the assign textures popup.
When I double click the model it tells me the model has no textures. -> resulting in the pink black texture in game.
I saw that (goldbarrenmax) worked and it created a .textures file, I copy pasted it, edited it to make it match the (object) model and now the (object) model can hold a texture.
Not sure why, but I have a older model (goldbarrenmax) that just works,
an older model (bush01) that does not work (anymore but worked with older versions of MCreator)
and a new model (object) that did not work but with the manually created .textures file it did.
I assume something of the texture detection from the obj / mtl file or .textures file creation is going wrong.
Maybe compare the (object) and (bush01) model to the working (goldbarrenmax) model.
### How to reproduce this issue?
Just try importing the attatched models and see the different behavior.
The object.obj.textures is the file I created to make this model have a texture. Otherwhise it would just fail.
### Operating system
Windows
### Details
MCreator 2023.2
### Example workspace
[Models.zip](https://github.com/MCreator/MCreator/files/12443806/Models.zip)
### Logs
_No response_
### Issue tracker rule checks (please read carefully)
- [X] I have checked that my problem <a href='https://github.com/MCreator/MCreator/issues?q=is%3Aissue' target='_blank'>is not already reported</a>
- [X] I have checked that my problem is not covered on <a href='https://mcreator.net/support/knowledgebase' target='_blank'>Knowledge Base</a> or on <a href='https://mcreator.net/wiki' target='_blank'>MCreator's Wiki</a>
- [X] I have checked that my written content does not violate the <a href='https://mcreator.net/wiki/general-publishing-guidelines' target='_blank'>publishing guidelines</a>
|
1.0
|
Texture file creation problem during OBJ model import - ### Issue description
MCreator seems to have problems creating a .textures file for some obj / mtl files.
After importing the object.obj or bush01.obj and mtl file, MCreator does not open the assign textures popup.
When I double click the model it tells me the model has no textures. -> resulting in the pink black texture in game.
I saw that (goldbarrenmax) worked and it created a .textures file, I copy pasted it, edited it to make it match the (object) model and now the (object) model can hold a texture.
Not sure why, but I have a older model (goldbarrenmax) that just works,
an older model (bush01) that does not work (anymore but worked with older versions of MCreator)
and a new model (object) that did not work but with the manually created .textures file it did.
I assume something of the texture detection from the obj / mtl file or .textures file creation is going wrong.
Maybe compare the (object) and (bush01) model to the working (goldbarrenmax) model.
### How to reproduce this issue?
Just try importing the attatched models and see the different behavior.
The object.obj.textures is the file I created to make this model have a texture. Otherwhise it would just fail.
### Operating system
Windows
### Details
MCreator 2023.2
### Example workspace
[Models.zip](https://github.com/MCreator/MCreator/files/12443806/Models.zip)
### Logs
_No response_
### Issue tracker rule checks (please read carefully)
- [X] I have checked that my problem <a href='https://github.com/MCreator/MCreator/issues?q=is%3Aissue' target='_blank'>is not already reported</a>
- [X] I have checked that my problem is not covered on <a href='https://mcreator.net/support/knowledgebase' target='_blank'>Knowledge Base</a> or on <a href='https://mcreator.net/wiki' target='_blank'>MCreator's Wiki</a>
- [X] I have checked that my written content does not violate the <a href='https://mcreator.net/wiki/general-publishing-guidelines' target='_blank'>publishing guidelines</a>
|
non_process
|
texture file creation problem during obj model import issue description mcreator seems to have problems creating a textures file for some obj mtl files after importing the object obj or obj and mtl file mcreator does not open the assign textures popup when i double click the model it tells me the model has no textures resulting in the pink black texture in game i saw that goldbarrenmax worked and it created a textures file i copy pasted it edited it to make it match the object model and now the object model can hold a texture not sure why but i have a older model goldbarrenmax that just works an older model that does not work anymore but worked with older versions of mcreator and a new model object that did not work but with the manually created textures file it did i assume something of the texture detection from the obj mtl file or textures file creation is going wrong maybe compare the object and model to the working goldbarrenmax model how to reproduce this issue just try importing the attatched models and see the different behavior the object obj textures is the file i created to make this model have a texture otherwhise it would just fail operating system windows details mcreator example workspace logs no response issue tracker rule checks please read carefully i have checked that my problem is not already reported i have checked that my problem is not covered on knowledge base or on mcreator s wiki i have checked that my written content does not violate the publishing guidelines
| 0
|
274,879
| 8,569,043,390
|
IssuesEvent
|
2018-11-11 05:31:30
|
CS2103-AY1819S1-F11-3/main
|
https://api.github.com/repos/CS2103-AY1819S1-F11-3/main
|
closed
|
Adding dependency from Completed Task to uncompleted Task
|
priority.High type.Bug
|
The above should not be allowed as it goes against the idea of a valid completed state.
|
1.0
|
Adding dependency from Completed Task to uncompleted Task - The above should not be allowed as it goes against the idea of a valid completed state.
|
non_process
|
adding dependency from completed task to uncompleted task the above should not be allowed as it goes against the idea of a valid completed state
| 0
|
8,191
| 6,472,749,925
|
IssuesEvent
|
2017-08-17 14:35:28
|
postgrespro/rum
|
https://api.github.com/repos/postgrespro/rum
|
closed
|
Select count number of search results performance
|
performance
|
Postgres 9.6.4
Windows Server 2016
I am trying to get total number of search results. I used a windows function `count(id) OVER() as count` in my search query select clause and that seemed to have caused documents to be loaded into memory instead of just RUM indexes being hit.
I also tried `select count(*) from code_docs where tsv_natural @@ $1` ($1 being a tsquery).
Any way to get count of search results and still only hit the rum indexes?
|
True
|
Select count number of search results performance - Postgres 9.6.4
Windows Server 2016
I am trying to get total number of search results. I used a windows function `count(id) OVER() as count` in my search query select clause and that seemed to have caused documents to be loaded into memory instead of just RUM indexes being hit.
I also tried `select count(*) from code_docs where tsv_natural @@ $1` ($1 being a tsquery).
Any way to get count of search results and still only hit the rum indexes?
|
non_process
|
select count number of search results performance postgres windows server i am trying to get total number of search results i used a windows function count id over as count in my search query select clause and that seemed to have caused documents to be loaded into memory instead of just rum indexes being hit i also tried select count from code docs where tsv natural being a tsquery any way to get count of search results and still only hit the rum indexes
| 0
|
12,724
| 9,935,789,067
|
IssuesEvent
|
2019-07-02 17:26:24
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
closed
|
Update checked in dart binary to a debian compiled version
|
area-infrastructure closed-obsolete p2-medium type-bug
|
Currently we can't drive the testing scripts on Debian because of this
|
1.0
|
Update checked in dart binary to a debian compiled version - Currently we can't drive the testing scripts on Debian because of this
|
non_process
|
update checked in dart binary to a debian compiled version currently we can t drive the testing scripts on debian because of this
| 0
|
16,888
| 22,191,609,191
|
IssuesEvent
|
2022-06-07 00:02:14
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Native Buffer tool doesn't finish in 3.24.3 and master
|
Processing Regression Bug Upstream
|
### What is the bug or the crash?
Native buffer tool takes way too much time or doesn't finish after more than 10 minutes, while it takes less than 1 minute in QGIS 3.16.16, and 3.22.7
I have tried in several machines with clean profiles.
### Steps to reproduce the issue
1. Download the following test data:
https://mega.nz/file/VQ9y0KoZ#PoGgwC7SOoEtM3wob36_iZe3uxKAK8b4ULI5HiBwgcY
2. Load it in QGIS 3.24 or master
3. run the native buffer tool choosing 1m distance and dissolve results
4. wait...
### Versions
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css">
p, li { white-space: pre-wrap; }
</style></head><body>
QGIS version | 3.24.3-Tisler | QGIS code revision | cf22b74e01
-- | -- | -- | --
Qt version | 5.12.8
Python version | 3.8.10
GDAL/OGR version | 3.0.4
PROJ version | 6.3.1
EPSG Registry database version | v9.8.6 (2020-01-22)
Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1
SQLite version | 3.31.1
PDAL version | 2.0.1
PostgreSQL client version | 12.10 (Ubuntu 12.10-0ubuntu0.20.04.1)
</body></html>QGIS version
3.24.3-Tisler
QGIS code revision
[cf22b74e01](https://github.com/qgis/QGIS/commit/cf22b74e01)
Qt version
5.12.8
Python version
3.8.10
GDAL/OGR version
3.0.4
PROJ version
6.3.1
EPSG Registry database version
v9.8.6 (2020-01-22)
Compiled against GEOS
3.8.0-CAPI-1.13.1
Running against GEOS
3.8.0-CAPI-1.13.1
SQLite version
3.31.1
PDAL version
2.0.1
PostgreSQL client version
12.10 (Ubuntu 12.10-0ubuntu0.20.04.1)
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
Tested
|
1.0
|
Native Buffer tool doesn't finish in 3.24.3 and master - ### What is the bug or the crash?
Native buffer tool takes way too much time or doesn't finish after more than 10 minutes, while it takes less than 1 minute in QGIS 3.16.16, and 3.22.7
I have tried in several machines with clean profiles.
### Steps to reproduce the issue
1. Download the following test data:
https://mega.nz/file/VQ9y0KoZ#PoGgwC7SOoEtM3wob36_iZe3uxKAK8b4ULI5HiBwgcY
2. Load it in QGIS 3.24 or master
3. run the native buffer tool choosing 1m distance and dissolve results
4. wait...
### Versions
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css">
p, li { white-space: pre-wrap; }
</style></head><body>
QGIS version | 3.24.3-Tisler | QGIS code revision | cf22b74e01
-- | -- | -- | --
Qt version | 5.12.8
Python version | 3.8.10
GDAL/OGR version | 3.0.4
PROJ version | 6.3.1
EPSG Registry database version | v9.8.6 (2020-01-22)
Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1
SQLite version | 3.31.1
PDAL version | 2.0.1
PostgreSQL client version | 12.10 (Ubuntu 12.10-0ubuntu0.20.04.1)
</body></html>QGIS version
3.24.3-Tisler
QGIS code revision
[cf22b74e01](https://github.com/qgis/QGIS/commit/cf22b74e01)
Qt version
5.12.8
Python version
3.8.10
GDAL/OGR version
3.0.4
PROJ version
6.3.1
EPSG Registry database version
v9.8.6 (2020-01-22)
Compiled against GEOS
3.8.0-CAPI-1.13.1
Running against GEOS
3.8.0-CAPI-1.13.1
SQLite version
3.31.1
PDAL version
2.0.1
PostgreSQL client version
12.10 (Ubuntu 12.10-0ubuntu0.20.04.1)
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
Tested
|
process
|
native buffer tool doesn t finish in and master what is the bug or the crash native buffer tool takes way too much time or doesn t finish after more than minutes while it takes less than minute in qgis and i have tried in several machines with clean profiles steps to reproduce the issue download the following test data load it in qgis or master run the native buffer tool choosing distance and dissolve results wait versions doctype html public dtd html en p li white space pre wrap qgis version tisler qgis code revision qt version python version gdal ogr version proj version epsg registry database version compiled against geos capi running against geos capi sqlite version pdal version postgresql client version ubuntu qgis version tisler qgis code revision qt version python version gdal ogr version proj version epsg registry database version compiled against geos capi running against geos capi sqlite version pdal version postgresql client version ubuntu supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context tested
| 1
|
1,498
| 4,075,284,181
|
IssuesEvent
|
2016-05-29 03:32:22
|
alexrj/Slic3r
|
https://api.github.com/repos/alexrj/Slic3r
|
closed
|
Feature request - speed/temp/size changes@layers
|
Feature request Fixable with post-process script
|
I found some nifty test objects on thiniverse (eg http://www.thingiverse.com/thing:211514), but they could be really enhanced if we could tag changes at layers (probably with another file), eg (or XML)
0, 220, 30, 25 , ...
10, 210, 30, 25, ...
20, 220, 60,25,....
That way you could have the customizer or a batch file create a corresponding "Settings File" that would enable you to print the Temperature Totem without having to manually edit the GCode file (and you could have the STL + Layer Profile apply to all machines [rostock, marlin, etc] instead of having to have a customized GCode for each).
I'd like to be able to set everything as I've found that for XT / T-Glase, I set my nozzle to 0.5mm and layer height to 0.4mm for maximum clarity. But that "overextrusion" causes severe sagging unless I boost the Brige speed and Solid Infill speeds to insanity (otherwise the solid top melts the bridge and re-sags it for me). OTOH, if I could change the extrude width back for the final layer...
|
1.0
|
Feature request - speed/temp/size changes@layers - I found some nifty test objects on thiniverse (eg http://www.thingiverse.com/thing:211514), but they could be really enhanced if we could tag changes at layers (probably with another file), eg (or XML)
0, 220, 30, 25 , ...
10, 210, 30, 25, ...
20, 220, 60,25,....
That way you could have the customizer or a batch file create a corresponding "Settings File" that would enable you to print the Temperature Totem without having to manually edit the GCode file (and you could have the STL + Layer Profile apply to all machines [rostock, marlin, etc] instead of having to have a customized GCode for each).
I'd like to be able to set everything as I've found that for XT / T-Glase, I set my nozzle to 0.5mm and layer height to 0.4mm for maximum clarity. But that "overextrusion" causes severe sagging unless I boost the Brige speed and Solid Infill speeds to insanity (otherwise the solid top melts the bridge and re-sags it for me). OTOH, if I could change the extrude width back for the final layer...
|
process
|
feature request speed temp size changes layers i found some nifty test objects on thiniverse eg but they could be really enhanced if we could tag changes at layers probably with another file eg or xml that way you could have the customizer or a batch file create a corresponding settings file that would enable you to print the temperature totem without having to manually edit the gcode file and you could have the stl layer profile apply to all machines instead of having to have a customized gcode for each i d like to be able to set everything as i ve found that for xt t glase i set my nozzle to and layer height to for maximum clarity but that overextrusion causes severe sagging unless i boost the brige speed and solid infill speeds to insanity otherwise the solid top melts the bridge and re sags it for me otoh if i could change the extrude width back for the final layer
| 1
|
16,016
| 20,188,226,490
|
IssuesEvent
|
2022-02-11 01:19:40
|
savitamittalmsft/WAS-SEC-TEST
|
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
|
opened
|
Add planning, testing, and validation rigor to the use of the root management group
|
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Security & Compliance Control-plane RBAC
|
<a href="https://docs.microsoft.com/azure/architecture/framework/security/design-management-groups#use-root-management-group-with-caution">Add planning, testing, and validation rigor to the use of the root management group</a>
<p><b>Why Consider This?</b></p>
The root management group ensures consistency across the enterprise by applying policies, permissions, and tags across all subscriptions. This capability also means that modifications made here can affect all services and resources within the tenant, and should be carefully controlled.
<p><b>Context</b></p>
<p><span>Care must be taken when planning and implementing assignments to the root management group because this can affect every resource on Azure and potentially cause downtime or other negative impacts on productivity in the event of errors or unanticipated effects.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Add planning, testing, and validation rigor to the use of the root management group.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/security/governance#use-root-management-group-carefully" target="_blank"><span>Use root management group carefully</span></a><span /></p>
|
1.0
|
Add planning, testing, and validation rigor to the use of the root management group - <a href="https://docs.microsoft.com/azure/architecture/framework/security/design-management-groups#use-root-management-group-with-caution">Add planning, testing, and validation rigor to the use of the root management group</a>
<p><b>Why Consider This?</b></p>
The root management group ensures consistency across the enterprise by applying policies, permissions, and tags across all subscriptions. This capability also means that modifications made here can affect all services and resources within the tenant, and should be carefully controlled.
<p><b>Context</b></p>
<p><span>Care must be taken when planning and implementing assignments to the root management group because this can affect every resource on Azure and potentially cause downtime or other negative impacts on productivity in the event of errors or unanticipated effects.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Add planning, testing, and validation rigor to the use of the root management group.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/security/governance#use-root-management-group-carefully" target="_blank"><span>Use root management group carefully</span></a><span /></p>
|
process
|
add planning testing and validation rigor to the use of the root management group why consider this the root management group ensures consistency across the enterprise by applying policies permissions and tags across all subscriptions this capability also means that modifications made here can affect all services and resources within the tenant and should be carefully controlled context care must be taken when planning and implementing assignments to the root management group because this can affect every resource on azure and potentially cause downtime or other negative impacts on productivity in the event of errors or unanticipated effects suggested actions add planning testing and validation rigor to the use of the root management group learn more use root management group carefully
| 1
|
14,413
| 17,465,363,063
|
IssuesEvent
|
2021-08-06 16:01:30
|
googleapis/java-pubsublite
|
https://api.github.com/repos/googleapis/java-pubsublite
|
closed
|
Do you have a plan to release version 1.0 (after stabilizing APIs)?
|
type: process api: pubsublite
|
This is not an urgent request. In general it's nice to stabilize APIs and release version 1.0 of a library. Apache Beam (Cloud Dataflow) and Spring Cloud GCP use the [Libraries BOM](https://github.com/GoogleCloudPlatform/cloud-opensource-java/wiki/The-Google-Cloud-Platform-Libraries-BOM) to set most of Google Cloud libraries but the BOM currently does not have google-cloud-pubsublite. The Libraries BOM imports google-cloud-bom. Google-cloud-bom maintainers (@chingor13 ) prefer to have stable (post 1.0 release) libraries in it.
**Describe the solution you'd like**
Stabilize the APIs and release version 1.0.0. Then notify google-cloud-bom maintainers to include this google-cloud-pubsublite library.
**Describe alternatives you've considered**
Currently users specify the version of google-cloud-pubsublite independent from the Libraries BOM.
|
1.0
|
Do you have a plan to release version 1.0 (after stabilizing APIs)? - This is not an urgent request. In general it's nice to stabilize APIs and release version 1.0 of a library. Apache Beam (Cloud Dataflow) and Spring Cloud GCP use the [Libraries BOM](https://github.com/GoogleCloudPlatform/cloud-opensource-java/wiki/The-Google-Cloud-Platform-Libraries-BOM) to set most of Google Cloud libraries but the BOM currently does not have google-cloud-pubsublite. The Libraries BOM imports google-cloud-bom. Google-cloud-bom maintainers (@chingor13 ) prefer to have stable (post 1.0 release) libraries in it.
**Describe the solution you'd like**
Stabilize the APIs and release version 1.0.0. Then notify google-cloud-bom maintainers to include this google-cloud-pubsublite library.
**Describe alternatives you've considered**
Currently users specify the version of google-cloud-pubsublite independent from the Libraries BOM.
|
process
|
do you have a plan to release version after stabilizing apis this is not an urgent request in general it s nice to stabilize apis and release version of a library apache beam cloud dataflow and spring cloud gcp use the to set most of google cloud libraries but the bom currently does not have google cloud pubsublite the libraries bom imports google cloud bom google cloud bom maintainers prefer to have stable post release libraries in it describe the solution you d like stabilize the apis and release version then notify google cloud bom maintainers to include this google cloud pubsublite library describe alternatives you ve considered currently users specify the version of google cloud pubsublite independent from the libraries bom
| 1
|
629
| 3,092,001,079
|
IssuesEvent
|
2015-08-26 15:43:33
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
opened
|
child_process: stdout/stderr flaky on CentOS 5
|
child_process confirmed-bug
|
This is currently (but hopefully not for long--I have a PR to fix it) the contents of `test/parallel/test-process-argv-0.js`:
````javascript
'use strict';
var util = require('util');
var path = require('path');
var assert = require('assert');
var spawn = require('child_process').spawn;
var common = require('../common');
console.error('argv=%j', process.argv);
console.error('exec=%j', process.execPath);
if (process.argv[2] !== 'child') {
var child = spawn(process.execPath, [__filename, 'child'], {
cwd: path.dirname(process.execPath)
});
var childArgv0 = '';
var childErr = '';
child.stdout.on('data', function(chunk) {
childArgv0 += chunk;
});
child.stderr.on('data', function(chunk) {
childErr += chunk;
});
child.on('exit', function() {
console.error('CHILD: %s', childErr.trim().split('\n').join('\nCHILD: '));
assert.equal(childArgv0, process.execPath);
});
}
else {
process.stdout.write(process.argv[0]);
}
````
From time to time on CentOS 5 only, `stderr` and `stdout` from the child process stop working after the first line is written to `stderr`.
See, for example [this centos5-32 test result](https://jenkins-iojs.nodesource.com/job/node-test-commit-linux/207/nodes=centos5-32/tapTestReport/test.tap-544/) or [this centos5-64 test result](https://jenkins-iojs.nodesource.com/job/node-test-commit-linux/249/nodes=centos5-64/tapTestReport/test.tap-545/).
See further discussion at https://github.com/nodejs/node/pull/2541.
[CentOS Project will support CentOS 5 until March 31, 2017.](https://wiki.centos.org/FAQ/General#head-fe8a0be91ee3e7dea812e8694491e1dde5b75e6d)
|
1.0
|
child_process: stdout/stderr flaky on CentOS 5 - This is currently (but hopefully not for long--I have a PR to fix it) the contents of `test/parallel/test-process-argv-0.js`:
````javascript
'use strict';
var util = require('util');
var path = require('path');
var assert = require('assert');
var spawn = require('child_process').spawn;
var common = require('../common');
console.error('argv=%j', process.argv);
console.error('exec=%j', process.execPath);
if (process.argv[2] !== 'child') {
var child = spawn(process.execPath, [__filename, 'child'], {
cwd: path.dirname(process.execPath)
});
var childArgv0 = '';
var childErr = '';
child.stdout.on('data', function(chunk) {
childArgv0 += chunk;
});
child.stderr.on('data', function(chunk) {
childErr += chunk;
});
child.on('exit', function() {
console.error('CHILD: %s', childErr.trim().split('\n').join('\nCHILD: '));
assert.equal(childArgv0, process.execPath);
});
}
else {
process.stdout.write(process.argv[0]);
}
````
From time to time on CentOS 5 only, `stderr` and `stdout` from the child process stop working after the first line is written to `stderr`.
See, for example [this centos5-32 test result](https://jenkins-iojs.nodesource.com/job/node-test-commit-linux/207/nodes=centos5-32/tapTestReport/test.tap-544/) or [this centos5-64 test result](https://jenkins-iojs.nodesource.com/job/node-test-commit-linux/249/nodes=centos5-64/tapTestReport/test.tap-545/).
See further discussion at https://github.com/nodejs/node/pull/2541.
[CentOS Project will support CentOS 5 until March 31, 2017.](https://wiki.centos.org/FAQ/General#head-fe8a0be91ee3e7dea812e8694491e1dde5b75e6d)
|
process
|
child process stdout stderr flaky on centos this is currently but hopefully not for long i have a pr to fix it the contents of test parallel test process argv js javascript use strict var util require util var path require path var assert require assert var spawn require child process spawn var common require common console error argv j process argv console error exec j process execpath if process argv child var child spawn process execpath cwd path dirname process execpath var var childerr child stdout on data function chunk chunk child stderr on data function chunk childerr chunk child on exit function console error child s childerr trim split n join nchild assert equal process execpath else process stdout write process argv from time to time on centos only stderr and stdout from the child process stop working after the first line is written to stderr see for example or see further discussion at
| 1
|
12,555
| 14,977,732,383
|
IssuesEvent
|
2021-01-28 09:53:47
|
tueit/it_management
|
https://api.github.com/repos/tueit/it_management
|
closed
|
merging IT Ticket to Issue
|
core enhancement process
|
To make the app closer to core I suggest to merge IT Ticket into Issue. Most fields are already the same and a lot of usefull funktions are already implemented in Issue (like deleting the assignment when closed).
Let's discuss this over a meeting.
|
1.0
|
merging IT Ticket to Issue - To make the app closer to core I suggest to merge IT Ticket into Issue. Most fields are already the same and a lot of usefull funktions are already implemented in Issue (like deleting the assignment when closed).
Let's discuss this over a meeting.
|
process
|
merging it ticket to issue to make the app closer to core i suggest to merge it ticket into issue most fields are already the same and a lot of usefull funktions are already implemented in issue like deleting the assignment when closed let s discuss this over a meeting
| 1
|
133,980
| 10,877,299,947
|
IssuesEvent
|
2019-11-16 08:55:59
|
angular/angular
|
https://api.github.com/repos/angular/angular
|
reopened
|
@angular/core/testing inject() bug.
|
comp: testing
|
<!--🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅
Oh hi there! 😄
To expedite issue processing please search open and closed issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅-->
# 🐞 bug report
### Affected Package
<!-- Can you pin-point one or more @angular/* packages as the source of the bug? -->
<!-- ✍️edit: --> The issue is caused by function `inject` from package @angular/core/testing
### Is this a regression?
<!-- Did this behavior use to work in the previous version? -->
<!-- ✍️--> Possibly.
### Description
<!-- ✍️--> When using "subscribe and assert" pattern inside `inject()` function `DoneFN` can't be called so test times out.
## 🔬 Minimal Reproduction
[https://github.com/kdshop/injectbug](https://github.com/kdshop/injectbug)
Just `npm i` and then `ng t` bug lives in `app.service.spec.ts`
## 🔥 Exception or Error
Jasmine returns message:
<pre><code>Error: Timeout - Async callback was not invoked within 5000ms (set by jasmine.DEFAULT_TIMEOUT_INTERVAL)
Error: Timeout - Async callback was not invoked within 5000ms (set by jasmine.DEFAULT_TIMEOUT_INTERVAL)
at Jasmine
</code></pre>
## 🌍 Your Environment
**Angular Version:**
<pre><code>Angular: 8.2.13
RxJS: 6.4.0
Jasmine: 3.4.0
</code></pre>
|
1.0
|
@angular/core/testing inject() bug. - <!--🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅
Oh hi there! 😄
To expedite issue processing please search open and closed issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅-->
# 🐞 bug report
### Affected Package
<!-- Can you pin-point one or more @angular/* packages as the source of the bug? -->
<!-- ✍️edit: --> The issue is caused by function `inject` from package @angular/core/testing
### Is this a regression?
<!-- Did this behavior use to work in the previous version? -->
<!-- ✍️--> Possibly.
### Description
<!-- ✍️--> When using "subscribe and assert" pattern inside `inject()` function `DoneFN` can't be called so test times out.
## 🔬 Minimal Reproduction
[https://github.com/kdshop/injectbug](https://github.com/kdshop/injectbug)
Just `npm i` and then `ng t` bug lives in `app.service.spec.ts`
## 🔥 Exception or Error
Jasmine returns message:
<pre><code>Error: Timeout - Async callback was not invoked within 5000ms (set by jasmine.DEFAULT_TIMEOUT_INTERVAL)
Error: Timeout - Async callback was not invoked within 5000ms (set by jasmine.DEFAULT_TIMEOUT_INTERVAL)
at Jasmine
</code></pre>
## 🌍 Your Environment
**Angular Version:**
<pre><code>Angular: 8.2.13
RxJS: 6.4.0
Jasmine: 3.4.0
</code></pre>
|
non_process
|
angular core testing inject bug 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 oh hi there 😄 to expedite issue processing please search open and closed issues before submitting a new one existing issues often contain information about workarounds resolution or progress updates 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 🐞 bug report affected package the issue is caused by function inject from package angular core testing is this a regression possibly description when using subscribe and assert pattern inside inject function donefn can t be called so test times out 🔬 minimal reproduction just npm i and then ng t bug lives in app service spec ts 🔥 exception or error jasmine returns message error timeout async callback was not invoked within set by jasmine default timeout interval error timeout async callback was not invoked within set by jasmine default timeout interval at jasmine 🌍 your environment angular version angular rxjs jasmine
| 0
|
226,297
| 17,333,439,466
|
IssuesEvent
|
2021-07-28 07:12:03
|
matplotlib/matplotlib
|
https://api.github.com/repos/matplotlib/matplotlib
|
opened
|
[Doc]: legend guide should be OO
|
Documentation Good first issue topic: legend
|
### Documentation Link
https://matplotlib.org/devdocs/tutorials/intermediate/legend_guide.html
### Problem
The legend guide is all pyplot interface, which is especially discouraged in the types of examples (complicated, some needing state) in the guide. For example:
```python
from matplotlib.legend_handler import HandlerPatch
class HandlerEllipse(HandlerPatch):
def create_artists(self, legend, orig_handle,
xdescent, ydescent, width, height, fontsize, trans):
center = 0.5 * width - 0.5 * xdescent, 0.5 * height - 0.5 * ydescent
p = mpatches.Ellipse(xy=center, width=width + xdescent,
height=height + ydescent)
self.update_prop(p, orig_handle, legend)
p.set_transform(trans)
return [p]
c = mpatches.Circle((0.5, 0.5), 0.25, facecolor="green",
edgecolor="red", linewidth=3)
plt.gca().add_patch(c)
plt.legend([c], ["An ellipse, not a rectangle"],
handler_map={mpatches.Circle: HandlerEllipse()})
```
### Suggested improvement
switch the tutorial to OO
### Matplotlib Version
3.4.2.post1539+gf8f693922c
### Matplotlib documentation version
3.4.2.post1539+gf8f693922c
|
1.0
|
[Doc]: legend guide should be OO - ### Documentation Link
https://matplotlib.org/devdocs/tutorials/intermediate/legend_guide.html
### Problem
The legend guide is all pyplot interface, which is especially discouraged in the types of examples (complicated, some needing state) in the guide. For example:
```python
from matplotlib.legend_handler import HandlerPatch
class HandlerEllipse(HandlerPatch):
def create_artists(self, legend, orig_handle,
xdescent, ydescent, width, height, fontsize, trans):
center = 0.5 * width - 0.5 * xdescent, 0.5 * height - 0.5 * ydescent
p = mpatches.Ellipse(xy=center, width=width + xdescent,
height=height + ydescent)
self.update_prop(p, orig_handle, legend)
p.set_transform(trans)
return [p]
c = mpatches.Circle((0.5, 0.5), 0.25, facecolor="green",
edgecolor="red", linewidth=3)
plt.gca().add_patch(c)
plt.legend([c], ["An ellipse, not a rectangle"],
handler_map={mpatches.Circle: HandlerEllipse()})
```
### Suggested improvement
switch the tutorial to OO
### Matplotlib Version
3.4.2.post1539+gf8f693922c
### Matplotlib documentation version
3.4.2.post1539+gf8f693922c
|
non_process
|
legend guide should be oo documentation link problem the legend guide is all pyplot interface which is especially discouraged in the types of examples complicated some needing state in the guide for example python from matplotlib legend handler import handlerpatch class handlerellipse handlerpatch def create artists self legend orig handle xdescent ydescent width height fontsize trans center width xdescent height ydescent p mpatches ellipse xy center width width xdescent height height ydescent self update prop p orig handle legend p set transform trans return c mpatches circle facecolor green edgecolor red linewidth plt gca add patch c plt legend handler map mpatches circle handlerellipse suggested improvement switch the tutorial to oo matplotlib version matplotlib documentation version
| 0
|
101,797
| 21,787,352,877
|
IssuesEvent
|
2022-05-14 10:53:45
|
zhmcclient/python-zhmcclient
|
https://api.github.com/repos/zhmcclient/python-zhmcclient
|
closed
|
End2end tests: Support for Ansible inventory and vault files for HMC definitions
|
area: code type: feature under work
|
The recently introduced HMC definition files for end2end tests are specific to the zhmcclient project. People that are using Ansible will set up pretty much the same information in Ansible inventory files and vault files. In fact, that allows for a proper encryption of the vault files.
It would be good to support Ansible inventory and vault files for HMC definitions in some way.
|
1.0
|
End2end tests: Support for Ansible inventory and vault files for HMC definitions - The recently introduced HMC definition files for end2end tests are specific to the zhmcclient project. People that are using Ansible will set up pretty much the same information in Ansible inventory files and vault files. In fact, that allows for a proper encryption of the vault files.
It would be good to support Ansible inventory and vault files for HMC definitions in some way.
|
non_process
|
tests support for ansible inventory and vault files for hmc definitions the recently introduced hmc definition files for tests are specific to the zhmcclient project people that are using ansible will set up pretty much the same information in ansible inventory files and vault files in fact that allows for a proper encryption of the vault files it would be good to support ansible inventory and vault files for hmc definitions in some way
| 0
|
11,461
| 14,286,179,451
|
IssuesEvent
|
2020-11-23 14:50:42
|
zammad/zammad
|
https://api.github.com/repos/zammad/zammad
|
opened
|
Channel::EmailParser.process_unprocessable_mails doesn't find matching Group because of key insensitive mismatch in email address
|
bug mail processing prioritised by payment verified
|
### Infos:
* Used Zammad version: 3.6.x
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
* Ticket# 1077707
### Expected behavior:
* `Channel::EmailParser.process_unprocessable_mails` should assign correct Group to Ticket as defined in #3145 also for key insensitive `To`-address match
### Actual behavior:
* `Channel::EmailParser.process_unprocessable_mails` assigns wrong Group to Ticket opposed to defined in #3145 for key insensitive `To`-address match
### Steps to reproduce the behavior:
* Create `Channel` assigned to Group `CORRECT`
* Assign email address `zammad@example.com`
* Have an unprocessable email addressed to `ZaMMaD@example.com`
* Run `Channel::EmailParser.process_unprocessable_mails`
* See that mail doesn't get assigned to Group `CORRECT`
Yes I'm sure this is a bug and no feature request or a general question.
|
1.0
|
Channel::EmailParser.process_unprocessable_mails doesn't find matching Group because of key insensitive mismatch in email address - ### Infos:
* Used Zammad version: 3.6.x
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
* Ticket# 1077707
### Expected behavior:
* `Channel::EmailParser.process_unprocessable_mails` should assign correct Group to Ticket as defined in #3145 also for key insensitive `To`-address match
### Actual behavior:
* `Channel::EmailParser.process_unprocessable_mails` assigns wrong Group to Ticket opposed to defined in #3145 for key insensitive `To`-address match
### Steps to reproduce the behavior:
* Create `Channel` assigned to Group `CORRECT`
* Assign email address `zammad@example.com`
* Have an unprocessable email addressed to `ZaMMaD@example.com`
* Run `Channel::EmailParser.process_unprocessable_mails`
* See that mail doesn't get assigned to Group `CORRECT`
Yes I'm sure this is a bug and no feature request or a general question.
|
process
|
channel emailparser process unprocessable mails doesn t find matching group because of key insensitive mismatch in email address infos used zammad version x installation method source package any operating system any database version any elasticsearch version any browser version any ticket expected behavior channel emailparser process unprocessable mails should assign correct group to ticket as defined in also for key insensitive to address match actual behavior channel emailparser process unprocessable mails assigns wrong group to ticket opposed to defined in for key insensitive to address match steps to reproduce the behavior create channel assigned to group correct assign email address zammad example com have an unprocessable email addressed to zammad example com run channel emailparser process unprocessable mails see that mail doesn t get assigned to group correct yes i m sure this is a bug and no feature request or a general question
| 1
|
84,199
| 10,354,637,451
|
IssuesEvent
|
2019-09-05 14:07:36
|
vtex/styleguide
|
https://api.github.com/repos/vtex/styleguide
|
closed
|
Filter Bar documentation is broken
|
🐛 Bug 💻 Developing... 📝 Documentation
|
**Describe the bug**
Filter bar preview in the documentation, both in its own documentation and in the table.
**To Reproduce**
Open the styleguide documentation and test for yourself.
**Expected behavior**
It shouldn't return any errors.
**Screenshots**


|
1.0
|
Filter Bar documentation is broken - **Describe the bug**
Filter bar preview in the documentation, both in its own documentation and in the table.
**To Reproduce**
Open the styleguide documentation and test for yourself.
**Expected behavior**
It shouldn't return any errors.
**Screenshots**


|
non_process
|
filter bar documentation is broken describe the bug filter bar preview in the documentation both in its own documentation and in the table to reproduce open the styleguide documentation and test for yourself expected behavior it shouldn t return any errors screenshots
| 0
|
1,078
| 3,541,524,289
|
IssuesEvent
|
2016-01-19 01:42:58
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
Сделать поддержку, при отсылки через юзертаску почты - нескольких адресов "To", перечисляемых через запятую
|
active In process of testing test _wf-base
|
Сейчас, система позволяет слать только на один адрес, а при попытке добавить через запятую несколько адресов - пишет ошибку
|
1.0
|
Сделать поддержку, при отсылки через юзертаску почты - нескольких адресов "To", перечисляемых через запятую - Сейчас, система позволяет слать только на один адрес, а при попытке добавить через запятую несколько адресов - пишет ошибку
|
process
|
сделать поддержку при отсылки через юзертаску почты нескольких адресов to перечисляемых через запятую сейчас система позволяет слать только на один адрес а при попытке добавить через запятую несколько адресов пишет ошибку
| 1
|
18,306
| 24,419,442,092
|
IssuesEvent
|
2022-10-05 18:54:58
|
ForNeVeR/Cesium
|
https://api.github.com/repos/ForNeVeR/Cesium
|
closed
|
Preprocessor interpreter language support (`#if defined`)
|
kind:feature status:help-wanted area:preprocessor good-first-issue hacktoberfest
|
Cesium's preprocessor should support `#if defined(xxx)` syntax.
|
1.0
|
Preprocessor interpreter language support (`#if defined`) - Cesium's preprocessor should support `#if defined(xxx)` syntax.
|
process
|
preprocessor interpreter language support if defined cesium s preprocessor should support if defined xxx syntax
| 1
|
245,223
| 18,774,912,742
|
IssuesEvent
|
2021-11-07 13:57:39
|
PennLINC/xcp_abcd
|
https://api.github.com/repos/PennLINC/xcp_abcd
|
opened
|
workflow/base.py:
|
bug documentation enhancement
|
workflow/base.py:
a lot of arguments to be fed in - maybe use a structure? dictionary or class?
323-325: add to --help for assistance
362-434: a lot of arguments, not very commented, perhaps redundant code
the rest also not very commented
|
1.0
|
workflow/base.py: - workflow/base.py:
a lot of arguments to be fed in - maybe use a structure? dictionary or class?
323-325: add to --help for assistance
362-434: a lot of arguments, not very commented, perhaps redundant code
the rest also not very commented
|
non_process
|
workflow base py workflow base py a lot of arguments to be fed in maybe use a structure dictionary or class add to help for assistance a lot of arguments not very commented perhaps redundant code the rest also not very commented
| 0
|
451,753
| 32,040,480,303
|
IssuesEvent
|
2023-09-22 18:52:17
|
Unstructured-IO/unstructured
|
https://api.github.com/repos/Unstructured-IO/unstructured
|
opened
|
bug: Broken link in documentation
|
bug documentation
|
In `introduction.html` under "Data Ingestion", there is a broken link to a page about upstream connectors.
Find the relevant section [here](https://unstructured-io.github.io/unstructured/introduction.html#data-ingestion).
|
1.0
|
bug: Broken link in documentation - In `introduction.html` under "Data Ingestion", there is a broken link to a page about upstream connectors.
Find the relevant section [here](https://unstructured-io.github.io/unstructured/introduction.html#data-ingestion).
|
non_process
|
bug broken link in documentation in introduction html under data ingestion there is a broken link to a page about upstream connectors find the relevant section
| 0
|
5,203
| 7,976,389,431
|
IssuesEvent
|
2018-07-17 12:33:10
|
pelias/pelias
|
https://api.github.com/repos/pelias/pelias
|
closed
|
Automate more of our pull request/release workflow
|
discussion processed
|
At EmpireJS @gr2m gave a great talk about [building welcoming open source communities](http://hood.ie/blog/welcoming-communities.html) that probably got several of us thinking! There were many takeaways about general interaction but two specifically regarding automation of workflows I would love to see us implement immediately.
## Use [semantic-release](https://github.com/semantic-release/semantic-release) to automate NPM releases :airplane:
NPM releases has been something we've had a bit of discussion about and not always done perfectly. Some cases where things haven't worked great:
- Just today we had a user report [an issue](pelias/pelias#344) that we had fixed, but we hadn't released the fix in an NPM module yet
- I've accidentally included over 1GB of test data in an NPM release which caused no real harm, but slows everything down and takes up space needlessly
- Sometimes we just waste time trying to decide the right version
Semantic-release automatically does NPM releases whenever a new commit hits the master branch, on TravisCI, and automatically increments the version correctly based on the rules of [semantic versioning](http://semver.org/)! Commits simply have to be tagged with the appropriate prefix in the commit message to state whether they're bug fixes, new features, or breaking changes. There's apparently also a way to reject merges into master that don't properly tag the commits.
## Use [LGTM](https://lgtm.co/docs/overview/) to provide simple checks for code review :+1:
We are generally really good at doing at least a quick code review before any code is merged, so why not enforce that with a Github check? I'd also love to see us start to invite some of our awesome external contributors to become maintainers of pelias repositories, and having more of our preferred workflow automated will help us as we do that.
|
1.0
|
Automate more of our pull request/release workflow - At EmpireJS @gr2m gave a great talk about [building welcoming open source communities](http://hood.ie/blog/welcoming-communities.html) that probably got several of us thinking! There were many takeaways about general interaction but two specifically regarding automation of workflows I would love to see us implement immediately.
## Use [semantic-release](https://github.com/semantic-release/semantic-release) to automate NPM releases :airplane:
NPM releases has been something we've had a bit of discussion about and not always done perfectly. Some cases where things haven't worked great:
- Just today we had a user report [an issue](pelias/pelias#344) that we had fixed, but we hadn't released the fix in an NPM module yet
- I've accidentally included over 1GB of test data in an NPM release which caused no real harm, but slows everything down and takes up space needlessly
- Sometimes we just waste time trying to decide the right version
Semantic-release automatically does NPM releases whenever a new commit hits the master branch, on TravisCI, and automatically increments the version correctly based on the rules of [semantic versioning](http://semver.org/)! Commits simply have to be tagged with the appropriate prefix in the commit message to state whether they're bug fixes, new features, or breaking changes. There's apparently also a way to reject merges into master that don't properly tag the commits.
## Use [LGTM](https://lgtm.co/docs/overview/) to provide simple checks for code review :+1:
We are generally really good at doing at least a quick code review before any code is merged, so why not enforce that with a Github check? I'd also love to see us start to invite some of our awesome external contributors to become maintainers of pelias repositories, and having more of our preferred workflow automated will help us as we do that.
|
process
|
automate more of our pull request release workflow at empirejs gave a great talk about that probably got several of us thinking there were many takeaways about general interaction but two specifically regarding automation of workflows i would love to see us implement immediately use to automate npm releases airplane npm releases has been something we ve had a bit of discussion about and not always done perfectly some cases where things haven t worked great just today we had a user report pelias pelias that we had fixed but we hadn t released the fix in an npm module yet i ve accidentally included over of test data in an npm release which caused no real harm but slows everything down and takes up space needlessly sometimes we just waste time trying to decide the right version semantic release automatically does npm releases whenever a new commit hits the master branch on travisci and automatically increments the version correctly based on the rules of commits simply have to be tagged with the appropriate prefix in the commit message to state whether they re bug fixes new features or breaking changes there s apparently also a way to reject merges into master that don t properly tag the commits use to provide simple checks for code review we are generally really good at doing at least a quick code review before any code is merged so why not enforce that with a github check i d also love to see us start to invite some of our awesome external contributors to become maintainers of pelias repositories and having more of our preferred workflow automated will help us as we do that
| 1
|
6,311
| 9,311,731,368
|
IssuesEvent
|
2019-03-25 22:15:38
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Revamp Python 2/3 mode selection
|
P1 team-Rules-Python type: process
|
This issue tracks discussion around, and implementation of, the proposal to change the Python mode configuration state from a tri-value to a boolean.
See also #6444 to track specific features/bugs relating to the Python mode.
[Design doc](https://github.com/bazelbuild/rules_python/blob/master/proposals/2018-10-25-selecting-between-python-2-and-3.md)
[Discussion mail thread](https://groups.google.com/forum/#!topic/bazel-sig-python/7yiys9coGuc)
|
1.0
|
Revamp Python 2/3 mode selection - This issue tracks discussion around, and implementation of, the proposal to change the Python mode configuration state from a tri-value to a boolean.
See also #6444 to track specific features/bugs relating to the Python mode.
[Design doc](https://github.com/bazelbuild/rules_python/blob/master/proposals/2018-10-25-selecting-between-python-2-and-3.md)
[Discussion mail thread](https://groups.google.com/forum/#!topic/bazel-sig-python/7yiys9coGuc)
|
process
|
revamp python mode selection this issue tracks discussion around and implementation of the proposal to change the python mode configuration state from a tri value to a boolean see also to track specific features bugs relating to the python mode
| 1
|
6,373
| 9,421,634,036
|
IssuesEvent
|
2019-04-11 07:23:15
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
Feature Request: Copy Processor
|
:Processors enhancement
|
I found the `rename` processor for filebeat, couldn't find anything related to a `copy` field processor. My usecase is I want to copy some fields from the kubernetes processor to a root field with the original fields remaining intact. I have a similar usecase in the case of journalbeat as well.
I tried using the fields, fields_under_root which instead of deriving the name of the variable use the variable name itself. For eg.
```
fields:
test: "%{[kubernetes][namespace]}"
```
makes a field test with value `%{[kubernetes][namespace]}` instead of derived value of that actual namespace. Requesting this feature. Also are you willing to accept a PR for the same?
Discussion link [here](https://discuss.elastic.co/t/filebeat-copy-field-processor/143098)
|
1.0
|
Feature Request: Copy Processor - I found the `rename` processor for filebeat, couldn't find anything related to a `copy` field processor. My usecase is I want to copy some fields from the kubernetes processor to a root field with the original fields remaining intact. I have a similar usecase in the case of journalbeat as well.
I tried using the fields, fields_under_root which instead of deriving the name of the variable use the variable name itself. For eg.
```
fields:
test: "%{[kubernetes][namespace]}"
```
makes a field test with value `%{[kubernetes][namespace]}` instead of derived value of that actual namespace. Requesting this feature. Also are you willing to accept a PR for the same?
Discussion link [here](https://discuss.elastic.co/t/filebeat-copy-field-processor/143098)
|
process
|
feature request copy processor i found the rename processor for filebeat couldn t find anything related to a copy field processor my usecase is i want to copy some fields from the kubernetes processor to a root field with the original fields remaining intact i have a similar usecase in the case of journalbeat as well i tried using the fields fields under root which instead of deriving the name of the variable use the variable name itself for eg fields test makes a field test with value instead of derived value of that actual namespace requesting this feature also are you willing to accept a pr for the same discussion link
| 1
|
8,806
| 11,908,283,624
|
IssuesEvent
|
2020-03-31 00:30:19
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Review the Subdivide algorithm help: advertised bug?
|
Bug Processing
|
Author Name: **Harrissou Santanna** (@DelazJ)
Original Redmine Issue: [18223](https://issues.qgis.org/issues/18223)
Affected QGIS version: 3.0.0
Redmine category:processing/qgis
Assignee: Victor Olaya
---
The subdivide algorithm help (in app and in docs) states that
> The returned geometry parts may not be valid and may contain self-intersections.
I'm not sure this comment adds confidence in the tool. Shouldn't creation of invalid features be considered as a bug and fixed in the tool instead?
|
1.0
|
Review the Subdivide algorithm help: advertised bug? - Author Name: **Harrissou Santanna** (@DelazJ)
Original Redmine Issue: [18223](https://issues.qgis.org/issues/18223)
Affected QGIS version: 3.0.0
Redmine category:processing/qgis
Assignee: Victor Olaya
---
The subdivide algorithm help (in app and in docs) states that
> The returned geometry parts may not be valid and may contain self-intersections.
I'm not sure this comment adds confidence in the tool. Shouldn't creation of invalid features be considered as a bug and fixed in the tool instead?
|
process
|
review the subdivide algorithm help advertised bug author name harrissou santanna delazj original redmine issue affected qgis version redmine category processing qgis assignee victor olaya the subdivide algorithm help in app and in docs states that the returned geometry parts may not be valid and may contain self intersections i m not sure this comment adds confidence in the tool shouldn t creation of invalid features be considered as a bug and fixed in the tool instead
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.