| Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 7-112) | repo_url (string, length 36-141) | action (string, 3 classes) | title (string, length 1-744) | labels (string, length 4-574) | body (string, length 9-211k) | index (string, 10 classes) | text_combine (string, length 96-211k) | label (string, 2 classes) | text (string, length 96-188k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
50,052
| 13,187,316,371
|
IssuesEvent
|
2020-08-13 03:01:37
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
cmake header-only projects. (Trac #62)
|
Migrated from Trac cmake defect
|
header-only and executable-only projects aren't supported.
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/62
, reported by troy and owned by troy_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-11-09T22:34:28",
"description": "header-only and executable-only projects aren't supported.",
"reporter": "troy",
"cc": "",
"resolution": "duplicate",
"_ts": "1194647668000000",
"component": "cmake",
"summary": "cmake header-only projects.",
"priority": "normal",
"keywords": "",
"time": "2007-06-12T17:56:22",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
cmake header-only projects. (Trac #62) - header-only and executable-only projects aren't supported.
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/62
, reported by troy and owned by troy_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-11-09T22:34:28",
"description": "header-only and executable-only projects aren't supported.",
"reporter": "troy",
"cc": "",
"resolution": "duplicate",
"_ts": "1194647668000000",
"component": "cmake",
"summary": "cmake header-only projects.",
"priority": "normal",
"keywords": "",
"time": "2007-06-12T17:56:22",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
cmake header only projects trac header only and executable only projects aren t supported migrated from reported by troy and owned by troy json status closed changetime description header only and executable only projects aren t supported reporter troy cc resolution duplicate ts component cmake summary cmake header only projects priority normal keywords time milestone owner troy type defect
| 0
|
9,609
| 12,549,885,088
|
IssuesEvent
|
2020-06-06 08:56:34
|
neuropsychology/NeuroKit
|
https://api.github.com/repos/neuropsychology/NeuroKit
|
closed
|
Heart/respiration rate - interpolated signal creates crazy values (including negative values)
|
ECG :heart: RSP :triumph: signal processing :chart_with_upwards_trend:
|
Follow-up of #236 @Mitchellb16
So I took a look at your data and it seems that indeed the cubic interpolation (used by default) extends the range. The more short and sudden the variability, the more it struggles to connect the dots with curves, resulting in positive and negative spikes between close peaks.

Here's the difference between linear, quadratic (2nd order) and cubic (3rd order) interpolation. As we can see, the cubic one extends the range drastically in some cases, resulting in implausible values (negative numbers).
We had this discussion in #180, with the suggestion of changing the default interpolation method to linear, which has the advantage of being "safe", although not realistic. My guess is that cubic interpolation should work fine in most cases **except** in regions with sudden, high and chaotic variability, which is usually indicative of artefacts.
Given that, you could either
- Switch to linear interpolation (`nk.rsp_rate(peaks, interpolation_order="linear")`)
- Remove the peak locations or intervals that are likely to be artefacted
Hope that helps
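The overshoot described above can be reproduced with a minimal sketch. This is not NeuroKit's actual implementation; the spiky series, the natural-spline variant, and the grid are illustrative assumptions:

```python
import numpy as np

def natural_cubic_spline(x, y, grid):
    """Evaluate a natural cubic spline through (x, y) at the points in `grid`.

    Assumes x is strictly increasing and evenly spaced (constant step h)."""
    n = len(x)
    h = x[1] - x[0]
    # Solve the tridiagonal system for the second derivatives M_i
    # (natural boundary conditions: M_0 = M_{n-1} = 0).
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1 : i + 2] = [h, 4 * h, h]
        b[i] = 6.0 * (y[i - 1] - 2 * y[i] + y[i + 1]) / h
    M = np.linalg.solve(A, b)
    out = np.empty_like(grid)
    for k, g in enumerate(grid):
        i = min(int((g - x[0]) // h), n - 2)
        t0, t1 = g - x[i], x[i + 1] - g
        out[k] = (M[i] * t1**3 + M[i + 1] * t0**3) / (6 * h) \
            + (y[i] - M[i] * h**2 / 6) * t1 / h \
            + (y[i + 1] - M[i + 1] * h**2 / 6) * t0 / h
    return out

# A rate series with one sudden spike between otherwise flat peaks.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.0, 10.0, 0.0, 0.0])
grid = np.linspace(0.0, 4.0, 401)

linear = np.interp(grid, x, y)
cubic = natural_cubic_spline(x, y, grid)

# Linear interpolation stays within the observed range; the cubic
# spline undershoots below the data minimum on either side of the spike.
print(linear.min(), cubic.min())
```

Running this shows the cubic curve dipping below zero even though every observed value is non-negative, which is exactly the "implausible negative rate" failure mode.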
|
1.0
|
Heart/respiration rate - interpolated signal creates crazy values (including negative values) - Follow-up of #236 @Mitchellb16
So I took a look at your data and it seems that indeed the cubic interpolation (used by default) extends the range. The more short and sudden the variability, the more it struggles to connect the dots with curves, resulting in positive and negative spikes between close peaks.

Here's the difference between linear, quadratic (2nd order) and cubic (3rd order) interpolation. As we can see, the cubic one extends the range drastically in some cases, resulting in implausible values (negative numbers).
We had this discussion in #180, with the suggestion of changing the default interpolation method to linear, which has the advantage of being "safe", although not realistic. My guess is that cubic interpolation should work fine in most cases **except** in regions with sudden, high and chaotic variability, which is usually indicative of artefacts.
Given that, you could either
- Switch to linear interpolation (`nk.rsp_rate(peaks, interpolation_order="linear")`)
- Remove the peak locations or intervals that are likely to be artefacted
Hope that helps
|
process
|
heart respiration rate interpolated signal creates crazy values including negative values follow up of so i took a look at your data and it seems that indeed the cubic interpolation used by default extends the range it seems like the more there is short and sudden variability and the more it will struggle to connect the dots with curves resulting positives and negative spikes between close peaks here s the difference between linear quadratic order and cubic order interpolation as we can see the cubic one extends the range drastically it some cases resulting in implausible values negative numbers we had this discussion in with the suggestion of changing the default interpolation method to linear which has the advantage of being safe although not realistic my guess is that cubic interpolation should work fine in most cases but in regions with sudden high and chaotic variability which is usually indicative of artefacts given that you could either switch to linear interpolation nk rsp rate peaks interpolation order linear remove the peak locations or intervals that are likely to be artefacted hope that helps
| 1
|
6,496
| 9,561,533,919
|
IssuesEvent
|
2019-05-03 23:56:29
|
bisq-network/bisq
|
https://api.github.com/repos/bisq-network/bisq
|
closed
|
Failure to open *any* support ticket from within bisq. Dialog opened with Ctrl+O just closes after clicking on “OPEN SUPPORT TICKET”.
|
an:investigation in:trade-process
|
Note that even though this issue was raised in a similar context, it _differs_ from https://github.com/bisq-network/bisq/issues/2788 in two regards:
1. It is not possible to open _any_ support ticket.
2. Evidence suggests that it _may_ not be limited to the context of stuck trades before step 1.
Description:
1. After pressing Ctrl+O while navigated to "PORTFOLIO"/"OPEN TRADES", the "Open support ticket" dialog window appears.
2. After clicking the button "OPEN SUPPORT TICKET", the dialog window disappears and the user is back at the main screen of the application.
3. No other dialog appears.
Context:
- The problem has been reported by at least three different users:
https://bisq.community/t/trade-stuck-before-step-1/7245/6
https://bisq.community/t/trade-stuck-before-step-1/7245/14
https://bisq.community/t/trade-stuck-before-step-1/7245/30
- The problem _could_ be independent of the "trade-stuck-before-step-1"-phenomenon, since one of the affected users _seems_ to report it at another step of the trade process.
- Bisq versions: 0.9.5 and 1.0.1.
- OS: Linux and OSX.
- The log file shows the following error right after unsuccessfully trying to open the support ticket:
```
May-01 16:47:31.069 [JavaFX Application Thread] INFO b.d.m.p.p.PendingTradesDataModel: Trade.depositTx is null. We try to find the tx in our wallet.
May-01 16:47:31.069 [JavaFX Application Thread] ERROR b.d.m.p.p.PendingTradesDataModel: Trade.depositTx is null and we did not find any MultiSig transaction.
```
Further thoughts:
- Is it possible that this problem only occurs, when the user did not have any other successful trades in the past?
|
1.0
|
Failure to open *any* support ticket from within bisq. Dialog opened with Ctrl+O just closes after clicking on “OPEN SUPPORT TICKET”. - Note that even though this issue was raised in a similar context, it _differs_ from https://github.com/bisq-network/bisq/issues/2788 in two regards:
1. It is not possible to open _any_ support ticket.
2. Evidence suggests that it _may_ not be limited to the context of stuck trades before step 1.
Description:
1. After pressing Ctrl+O while navigated to "PORTFOLIO"/"OPEN TRADES", the "Open support ticket" dialog window appears.
2. After clicking the button "OPEN SUPPORT TICKET", the dialog window disappears and the user is back at the main screen of the application.
3. No other dialog appears.
Context:
- The problem has been reported by at least three different users:
https://bisq.community/t/trade-stuck-before-step-1/7245/6
https://bisq.community/t/trade-stuck-before-step-1/7245/14
https://bisq.community/t/trade-stuck-before-step-1/7245/30
- The problem _could_ be independent of the "trade-stuck-before-step-1"-phenomenon, since one of the affected users _seems_ to report it at another step of the trade process.
- Bisq versions: 0.9.5 and 1.0.1.
- OS: Linux and OSX.
- The log file shows the following error right after unsuccessfully trying to open the support ticket:
```
May-01 16:47:31.069 [JavaFX Application Thread] INFO b.d.m.p.p.PendingTradesDataModel: Trade.depositTx is null. We try to find the tx in our wallet.
May-01 16:47:31.069 [JavaFX Application Thread] ERROR b.d.m.p.p.PendingTradesDataModel: Trade.depositTx is null and we did not find any MultiSig transaction.
```
Further thoughts:
- Is it possible that this problem only occurs, when the user did not have any other successful trades in the past?
|
process
|
failure to open any support ticket from within bisq dialog opened with crtl o just closes after clicking on “open support ticket” note that even though that this issue was raised in a similar context it differs from in two regards it is not possible to open any support ticket evidence suggests that it may not be limited to the context of stuck trades before step description after pressing crtl o while navigated to portfolio open trades the open support ticket dialog window appear after clicking the button open support ticket the dialog window disappears and the user is back at the main screen of the application no other dialog appears context the problem has been reported by at least three different users the problem could be independent of the trade stuck before step phenomenon since one of the affected users seems to report it at another step of the trade process bisq versions and os linux and osx the log file shows the following error just right after unsuccessfully trying to open the support ticket may info b d m p p pendingtradesdatamodel trade deposittx is null we try to find the tx in our wallet may error b d m p p pendingtradesdatamodel trade deposittx is null and we did not find any multisig transaction further thoughts is it possible that this problem only occurs when the user did not have any other successful trades in the past
| 1
|
13,539
| 8,269,243,060
|
IssuesEvent
|
2018-09-15 03:11:52
|
Microsoft/visualfsharp
|
https://api.github.com/repos/Microsoft/visualfsharp
|
closed
|
Boolean operations generated IL
|
Area-Compiler Feature Improvement Ready Tenet-Performance
|
I was doing a perf test today and noticed that F# doesn't use intrinsic boolean operators with && and ||.
C# also makes some optimizations in release, but the F# version is a bit slower than C#.
The test case uses the following function:
```
let isAlphaNum (s:string) pos =
let c = s.[pos]
(c >= '0' && c<='9') || (c>='A' && c<='Z') || (c>='a' && c <= 'z')
```
when decompiled with dotPeek it looks like:
```
public static bool isAlphaNum(string s, int pos) {
char ch = s[pos];
if ((((int) ch < 48 ? 0 : ((int) ch <= 57 ? 1 : 0)) == 0 ? ((int) ch < 65 ? 0 : ((int) ch <= 90 ? 1 : 0)) : 1) != 0)
return true;
if ((int) ch >= 97)
return (int) ch <= 122;
return false; }
```
The following is the equivalent C# implementation, which is highly similar:
```
static bool IsAlphaNum(string s, int pos) {
var c = s[pos];
return c >= '0' && c <= '9' || c >= 'A' && c <= 'Z' || c >= 'a' && c <= 'z'; }
```
Results in release in:
```
private static bool IsAlphaNum(string s, int pos) {
char ch = s[pos];
if ((int) ch >= 48 && (int) ch <= 57 || (int) ch >= 65 && (int) ch <= 90)
return true;
if ((int) ch >= 97)
return (int) ch <= 122;
return false; }
```
While the last lines look the same, F# uses a ternary construct with 1s and 0s instead of using && and ||.
A micro benchmark over 100,000,000 iterations exercising the full path (always testing a '}') gives
~417ms for the F# version
~280ms for the C# version
Tests with other chars give an advantage to C#.
The question is: is there a good reason to emit `?:`/`== 0`/`== 1` constructs instead of the native Boolean `&&` and `||` operators?
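The predicate itself translates directly to other languages. As a minimal Python sketch of the same three-range check and the full-path worst case (the function name and the timing harness are illustrative, unrelated to the F# codegen question):

```python
import timeit

def is_alpha_num(s: str, pos: int) -> bool:
    # Same three ASCII range checks as the isAlphaNum function above:
    # digits, uppercase letters, lowercase letters.
    c = s[pos]
    return "0" <= c <= "9" or "A" <= c <= "Z" or "a" <= c <= "z"

# '}' (codepoint 125) lies past 'z' (122), so every call evaluates all
# three ranges, mirroring the worst case measured in the issue.
elapsed = timeit.timeit(lambda: is_alpha_num("}", 0), number=100_000)
print(is_alpha_num("A", 0), is_alpha_num("}", 0), f"{elapsed:.3f}s")
```

Chained comparisons like `"0" <= c <= "9"` short-circuit just as `&&` does, so only the '}' case pays for all six comparisons.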
|
True
|
Boolean operations generated IL - I was doing a perf test today and noticed that F# doesn't use intrinsic boolean operators with && and ||.
C# also makes some optimizations in release, but the F# version is a bit slower than C#.
The test case uses the following function:
```
let isAlphaNum (s:string) pos =
let c = s.[pos]
(c >= '0' && c<='9') || (c>='A' && c<='Z') || (c>='a' && c <= 'z')
```
when decompiled with dotPeek it looks like:
```
public static bool isAlphaNum(string s, int pos) {
char ch = s[pos];
if ((((int) ch < 48 ? 0 : ((int) ch <= 57 ? 1 : 0)) == 0 ? ((int) ch < 65 ? 0 : ((int) ch <= 90 ? 1 : 0)) : 1) != 0)
return true;
if ((int) ch >= 97)
return (int) ch <= 122;
return false; }
```
The following is the equivalent C# implementation, which is highly similar:
```
static bool IsAlphaNum(string s, int pos) {
var c = s[pos];
return c >= '0' && c <= '9' || c >= 'A' && c <= 'Z' || c >= 'a' && c <= 'z'; }
```
Results in release in:
```
private static bool IsAlphaNum(string s, int pos) {
char ch = s[pos];
if ((int) ch >= 48 && (int) ch <= 57 || (int) ch >= 65 && (int) ch <= 90)
return true;
if ((int) ch >= 97)
return (int) ch <= 122;
return false; }
```
While the last lines look the same, F# uses a ternary construct with 1s and 0s instead of using && and ||.
A micro benchmark over 100,000,000 iterations exercising the full path (always testing a '}') gives
~417ms for the F# version
~280ms for the C# version
Tests with other chars give an advantage to C#.
The question is: is there a good reason to emit `?:`/`== 0`/`== 1` constructs instead of the native Boolean `&&` and `||` operators?
|
non_process
|
boolean operations generated il i was doing perf test today and noticed that f didn t use intrinsic boolean operator with and c also makes some optimizations in release but the f version is a bit slower that c the test case uses the following function let isalphanum s string pos let c s c c a c a c z when decompiled with dotpeek it looks like public static bool isalphanum string s int pos char ch s if int ch int ch int ch int ch return true if int ch return int ch return false the following equivalent c implementation which is highly similar static bool isalphanum string s int pos var c s return c c a c a c z results in release in private static bool isalphanum string s int pos char ch s if int ch int ch int ch return true if int ch return int ch return false if the last lines look the same f uses a ternary construct with and instead of using and a micro benchmark on iterations doing the full path always testing a gives for the f version for the c version tests with other charts give an advantage for c the question is is there a good reason to use instead of boolean and native operators
| 0
|
93,157
| 3,886,320,790
|
IssuesEvent
|
2016-04-14 00:12:13
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
closed
|
media uploading error
|
Function-Media Priority-Critical Type-Form/Function
|
We are trying to upload metadata for audio files, and are getting a 'status' error where it's trying to match something in a comment field. See attached spreadsheet, last two columns.
[Bulkload_Cicero_Pipilo_audio_2009_mp3_forArctos.xlsx](https://github.com/ArctosDB/arctos/files/216221/Bulkload_Cicero_Pipilo_audio_2009_mp3_forArctos.xlsx)

|
1.0
|
media uploading error - We are trying to upload metadata for audio files, and are getting a 'status' error where it's trying to match something in a comment field. See attached spreadsheet, last two columns.
[Bulkload_Cicero_Pipilo_audio_2009_mp3_forArctos.xlsx](https://github.com/ArctosDB/arctos/files/216221/Bulkload_Cicero_Pipilo_audio_2009_mp3_forArctos.xlsx)

|
non_process
|
media uploading error we are trying to upload metadata for audio files and are getting a status error where it s trying to match something in a comment field see attached spreadsheet last two columns
| 0
|
352,819
| 10,546,287,595
|
IssuesEvent
|
2019-10-02 21:06:14
|
mozilla/addons-frontend
|
https://api.github.com/repos/mozilla/addons-frontend
|
closed
|
Creating a collection with the same slug as a previously deleted collection will generate a 404
|
component: collections priority: p3 state: stale type: bug type: papercut
|
STR:
1. Log in to AMO
2. Go to View my Collections
3. Select one of your collections to open its details page and make a note of the custom slug
4. Delete the collection
5. Create a new collection and use the same slug as in your previously deleted collection
Actual result:
A Page not found is displayed after the collection is created
Expected result:
The collection is successfully created and the user is taken to the edit screen
Notes:
- reproduced on all AMO servers with FF61, Win10x64

|
1.0
|
Creating a collection with the same slug as a previously deleted collection will generate a 404 - STR:
1. Log in to AMO
2. Go to View my Collections
3. Select one of your collections to open its details page and make a note of the custom slug
4. Delete the collection
5. Create a new collection and use the same slug as in your previously deleted collection
Actual result:
A Page not found is displayed after the collection is created
Expected result:
The collection is successfully created and the user is taken to the edit screen
Notes:
- reproduced on all AMO servers with FF61, Win10x64

|
non_process
|
creating a collection with the same slug as a previously deleted collection will generate a str log in to amo go to view my collections select one of your collections to open its details page and make a note of the custom slug delete the collection create a new collection and use the same slug as in your previously deleted collection actual result a page not found is displayed after the collection is created expected result the collection is successfully created and the user is prompted to the edit screen notes reproduced on all amo servers with
| 0
|
637,713
| 20,676,115,860
|
IssuesEvent
|
2022-03-10 09:26:44
|
owid/etl
|
https://api.github.com/repos/owid/etl
|
closed
|
Automate: use Github actions to always track `walden` master
|
priority: 3 - nice to have
|
## Problem
By convention, we want people to add new data snapshots to `walden`, and to push them to `master` as long as they pass tests.
The step to then update the `etl` repo is a bit annoying and potentially confusing, since people are not used to submodules.
## Proposed solution
On each push to `master` on `walden`, we should auto-update the submodule version on the `etl` repo and push a new commit. This way a person never has to remember to do it.
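A hypothetical workflow sketch of that proposal (repository names, the submodule path `walden`, and the secret name are assumptions, not the actual owid setup):

```yaml
# Hypothetical: runs in the walden repo on every push to master and
# bumps the walden submodule inside the etl repo.
name: Bump walden submodule in etl
on:
  push:
    branches: [master]
jobs:
  bump:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          repository: owid/etl          # assumed repo slug
          token: ${{ secrets.ETL_PUSH_TOKEN }}  # assumed secret with push rights
          submodules: true
      - run: |
          git submodule update --remote walden   # assumed submodule path
          git config user.name "etl-bot"
          git config user.email "etl-bot@users.noreply.github.com"
          git commit -am "Auto-update walden submodule" || exit 0
          git push
```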
|
1.0
|
Automate: use Github actions to always track `walden` master - ## Problem
By convention, we want people to add new data snapshots to `walden`, and to push them to `master` as long as they pass tests.
The step to then update the `etl` repo is a bit annoying and potentially confusing, since people are not used to submodules.
## Proposed solution
On each push to `master` on `walden`, we should auto-update the submodule version on the `etl` repo and push a new commit. This way a person never has to remember to do it.
|
non_process
|
automate use github actions to always track walden master problem by convention we want people to add new data snapshots to walden and to push them to master as long as they pass tests the step to then update the etl repo is a bit annoying and potentially confusing since people are not used to submodules proposed solution on each push to master on walden we should auto update the submodule version on the etl repo and push a new commit this way a person never has to remember to do it
| 0
|
2,650
| 5,427,998,311
|
IssuesEvent
|
2017-03-03 14:53:06
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
reopened
|
bad sco cache (global write buffer) configuration
|
process_duplicate
|
On tenv186...
```
[09:54:40] Arne Redlich: the thing is that 2 sco cache mountpoints are configured:
[09:54:41] Arne Redlich: "scocache_mount_points": [
{
"path": "/mnt/ssd2/hapool_write_sco_1",
"size": "46080000KiB"
},
{
"path": "/mnt/ssd1/hapool_write_sco_1",
"size": "57344KiB"
}
]
[09:54:57] Arne Redlich: together with these settings:
"trigger_gap": "1GB",
"backoff_gap": "2GB",
[09:55:39] Arne Redlich: IOW the second mountpoint is always in a bad state as it can never reach a free space of 2GB
```
Results in
```
Feb 27 09:35:00 ftcmp02 volumedriver_fs.sh[3766]: 2017-02-27 09:35:00 441189 +0100 - ftcmp02 - 3766/0x00007fa1b2b31700 - volumedriverfs/SCOCache - 00000000000f9ade - info - trimMountPoint_: "/mnt/ssd1/hapool_write_sco_1" is choking: free 56MiB < trigger 953MiB. Throttling ingest with 68119 usec per cluster write.
```
There's a related voldrvr ticket (https://github.com/openvstorage/volumedriver/issues/228) for the voldrvr not accepting such a config, but creating a bad config shouldn't be possible either...
Global Write Buffer was set to 50 GiB via the GUI.
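The violated invariant (each SCO cache mountpoint must be large enough to ever reach `backoff_gap` of free space) could be caught by a small validation sketch. This is a hypothetical helper, not the framework's actual code; only the units appearing in the excerpt above are parsed:

```python
# Hypothetical config validator: flag SCO cache mountpoints that can
# never satisfy the backoff gap because they are smaller than it.
UNITS = {"KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "GB": 10**9}

def parse_size(text: str) -> int:
    """Parse sizes like '57344KiB' or '2GB' into bytes."""
    for suffix, factor in UNITS.items():
        if text.endswith(suffix):
            return int(text[: -len(suffix)]) * factor
    raise ValueError(f"unknown size unit in {text!r}")

def bad_mountpoints(mount_points, backoff_gap: str):
    """Return paths of mountpoints too small to reach the backoff gap."""
    gap = parse_size(backoff_gap)
    return [mp["path"] for mp in mount_points
            if parse_size(mp["size"]) <= gap]

# The configuration from the ticket above.
config = [
    {"path": "/mnt/ssd2/hapool_write_sco_1", "size": "46080000KiB"},
    {"path": "/mnt/ssd1/hapool_write_sco_1", "size": "57344KiB"},
]
print(bad_mountpoints(config, "2GB"))
```

With the values from the ticket, only the 57344KiB mountpoint is flagged: it is about 56 MiB, so it can never hold 2 GB of free space.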
|
1.0
|
bad sco cache (global write buffer) configuration - On tenv186...
```
[09:54:40] Arne Redlich: the thing is that 2 sco cache mountpoints are configured:
[09:54:41] Arne Redlich: "scocache_mount_points": [
{
"path": "/mnt/ssd2/hapool_write_sco_1",
"size": "46080000KiB"
},
{
"path": "/mnt/ssd1/hapool_write_sco_1",
"size": "57344KiB"
}
]
[09:54:57] Arne Redlich: together with these settings:
"trigger_gap": "1GB",
"backoff_gap": "2GB",
[09:55:39] Arne Redlich: IOW the second mountpoint is always in a bad state as it can never reach a free space of 2GB
```
Results in
```
Feb 27 09:35:00 ftcmp02 volumedriver_fs.sh[3766]: 2017-02-27 09:35:00 441189 +0100 - ftcmp02 - 3766/0x00007fa1b2b31700 - volumedriverfs/SCOCache - 00000000000f9ade - info - trimMountPoint_: "/mnt/ssd1/hapool_write_sco_1" is choking: free 56MiB < trigger 953MiB. Throttling ingest with 68119 usec per cluster write.
```
There's a related voldrvr ticket (https://github.com/openvstorage/volumedriver/issues/228) for the voldrvr not accepting such a config, but creating a bad config shouldn't be possible either...
Global Write Buffer was set to 50 GiB via the GUI.
|
process
|
bad sco cache global write buffer configuration on arne redlich the thing is that sco cache mountpoints are configured arne redlich scocache mount points path mnt hapool write sco size path mnt hapool write sco size arne redlich together with these settings trigger gap backoff gap arne redlich iow the second mountpoint is always in a bad state as it can never reach a free space of results in feb volumedriver fs sh volumedriverfs scocache info trimmountpoint mnt hapool write sco is choking free trigger throttling ingest with usec per cluster write there s a related voldrvr ticket for the voldrvr not accepting such a config but the creating a bad config shouldn t be possible either global write buffer was set to gib via the gui
| 1
|
284,212
| 30,913,604,581
|
IssuesEvent
|
2023-08-05 02:22:24
|
panasalap/linux-4.19.72_Fix
|
https://api.github.com/repos/panasalap/linux-4.19.72_Fix
|
reopened
|
CVE-2020-10690 (Medium) detected in multiple libraries
|
Mend: dependency security vulnerability
|
## CVE-2020-10690 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
There is a use-after-free in kernel versions before 5.5 due to a race condition between the release of ptp_clock and cdev during resource deallocation. When a (high-privileged) process allocates a ptp device file (like /dev/ptpX) and voluntarily goes to sleep, and the underlying device is removed during this time, it can cause an exploitable condition as the process wakes up to terminate and clean up all attached files. The system crashes because the cdev structure pointed to by the inode is invalid (already freed).
<p>Publish Date: 2020-05-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-10690>CVE-2020-10690</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10690">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10690</a></p>
<p>Release Date: 2020-05-08</p>
<p>Fix Resolution: v5.5-rc5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-10690 (Medium) detected in multiple libraries - ## CVE-2020-10690 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
There is a use-after-free in kernel versions before 5.5 due to a race condition between the release of ptp_clock and cdev during resource deallocation. When a (high-privileged) process allocates a ptp device file (like /dev/ptpX) and voluntarily goes to sleep, and the underlying device is removed during this time, it can cause an exploitable condition as the process wakes up to terminate and clean up all attached files. The system crashes because the cdev structure pointed to by the inode is invalid (already freed).
<p>Publish Date: 2020-05-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-10690>CVE-2020-10690</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10690">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10690</a></p>
<p>Release Date: 2020-05-08</p>
<p>Fix Resolution: v5.5-rc5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries linux linux linux vulnerability details there is a use after free in kernel versions before due to a race condition between the release of ptp clock and cdev while resource deallocation when a high privileged process allocates a ptp device file like dev ptpx and voluntarily goes to sleep during this time if the underlying device is removed it can cause an exploitable condition as the process wakes up to terminate and clean all attached files the system crashes due to the cdev structure being invalid as already freed which is pointed to by the inode publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
343,484
| 10,331,252,490
|
IssuesEvent
|
2019-09-02 17:12:44
|
diamm/diamm
|
https://api.github.com/repos/diamm/diamm
|
reopened
|
Consistently show the icon for 'has images' next to sources
|
Component: User Interface Priority: Medium Status: Waiting to be addressed Type: Bug Type: Enhancement
|
In places that show links to sources (e.g., in sets, or in a list of sources in which a composition appears) we should display an icon that shows which of these sources have images attached to them.
|
1.0
|
Consistently show the icon for 'has images' next to sources - In places that show links to sources (e.g., in sets, or in a list of sources in which a composition appears) we should display an icon that shows which of these sources have images attached to them.
|
non_process
|
consistently show the icon for has images next to sources in places that show links to sources e g in sets or in a list of sources in which a composition appears we should display an icon that shows which of these sources have images attached to them
| 0
|
5,569
| 8,407,409,961
|
IssuesEvent
|
2018-10-11 20:51:46
|
SynBioDex/SEPs
|
https://api.github.com/repos/SynBioDex/SEPs
|
reopened
|
SEP 003 -- SBOL Governance Update
|
Accepted Active Type: Process
|
# SEP 003 -- SBOL Governance Update
| SEP | 003 |
| --- | --- |
| **Title** | SBOL Governance Update |
| **Authors** | Jacob Beal (jakebeal@bbn.com) |
| **Type** | Procedure |
| **Status** | Draft |
| **Created** | 13-Oct-2015 |
| **Last modified** | 15-Oct-2015 |
## Abstract
This document proposes a thorough updating and revision of the SBOL governance documents at [1](http://sbolstandard.org/development/gov/).
The key changes in this document are:
- Create a steering committee that recognizes the de facto strategic coordination carried out by key PIs and makes it an open and inclusive process
- Clarify the role and term of the SBOL Chair --- in particular, as head of the steering committee
- Editors meetings should be open, and we should post minutes again
- Recognize a number of de facto changes that the community has made in its structure and processes since the original document was drafted
- Explicitly state core values of openness and diversity
## Table of Contents
- [1. Rationale](#rationale)
- [2. Governance](#specification)
- 2.1 SBOL Organization
- 2.2 SBOL Development Group
- 2.3 Ad-Hoc Subgroups
- 2.4 SBOL Editors
- 2.5 SBOL Chair and Steering Committee
- 2.6 Voting Procedures
- [3. Discussion](#discussion)
- [References](#references)
- [Copyright](#copyright)
## 1. Rationale <a name="rationale"></a>
The current SBOL governance documents are rather incomplete and do not reflect the consensus state of practice in how the community is governing itself. This SEP attempts to fill out important missing information (e.g., quorum for votes, purpose of the SBOL Chair), and also to recognize and legitimate current practices (e.g., editors are elected by a vote organized via the mailing list, not by consensus at a workshop).
If adopted, the “Governance” section will replace the text of the governance document at [1](http://sbolstandard.org/development/gov/).
## 2. Governance <a name="specification"></a>
SBOL is being developed by a generally collegial volunteer organization, which runs primarily by rough consensus. As an organization, the SBOL development community holds the following values:
- SBOL should be a free and open standard, developed by open and inclusive processes.
- SBOL is a community that believes in fostering, cultivating and preserving a culture of diversity and inclusion.
- Leaders of the SBOL community are expected to actively foster a safe environment where all participants feel comfortable making their voices heard.
- Leadership from new and diverse voices should be actively developed, including by participation in working groups and by public speaking at workshops.
In order to ensure effective development of SBOL that conforms with these values, the SBOL community has adopted the following regulations for its governance (last amended on DATE):
## SBOL Organization
The SBOL community has the following organization:
- SBOL Development Group: general body comprising all members of the SBOL community, and ultimate authority
- Ad-Hoc Subgroups: portions of the general development group focused on particular aspects of SBOL development.
- SBOL Editors: operational executives for the community
- SBOL Chair and Steering Committee: strategic planning and guidance for the community
## SBOL Development Group
Membership in the SBOL Development Group is open to all interested parties. Individuals interested in joining the group should contact the editors (editors@sbolstandard.org).
The SBOL Development Group as a whole is the ultimate authority and source of legitimacy for SBOL standards decisions. All elections, standards changes, and governance changes are voted on by the SBOL Development Group according to the procedures below.
The primary means of communication for the SBOL Development Group is via the sbol-dev Google Group. All significant decisions on governance and the evolution of the standard must be discussed via this mailing list.
SBOL workshops should be held twice per year. These may be stand-alone workshops or in combination with another event, but must provide ample time for attending members to hold focused discussions on the development of SBOL.
All members of the SBOL Development Group are encouraged (but not required) to:
1. Attend the SBOL Workshops and other meetings
2. Participate in discussions on SBOL mailing lists
3. Support the SBOL standard in any tools and systems they work on
4. Provide constructive feedback for improving the standard
## Ad-Hoc Subgroups
Certain activities of the SBOL community are expected to either generate noticeably higher volumes of communication (e.g., software development and support) or to address topics expected to be of interest only to certain special interest groups. When deemed appropriate, ad-hoc subgroups may be created for addressing such topics.
Subgroups must communicate via open lists that may be readily accessed and joined by any member of the SBOL Development Group. Any communication regarding standards changes, however, should be migrated back to the SBOL Development Group mailing list.
## SBOL Editors
As a matter of pragmatism, many organizational decisions and actions are delegated to an elected group of SBOL Editors, so named because their primary responsibility is ensuring the effective curation of documents for the community.
The responsibilities of the SBOL Editors are:
- Equitably representing the community in voting, documents, and guidance of discussion.
- Curation and dissemination of the SBOL standards and related documents (including writing, editing, and coordinating changes).
- Maintaining an open and structured process by which members of the SBOL Development Group can modify and improve SBOL standards (including timely implementation of tracking, processing, responding to, and organizing voting on change proposals).
- Ensuring effective development and maintenance of official SBOL software libraries and associated documentation and tutorials.
- Coordinating scholarly publications and ensuring proper attribution of contributions.
- Running elections and other community votes.
- Organization (or delegation) of SBOL Workshops and other events.
- Maintaining community infrastructure, including: the SBOL web site, source code repositories, mailing lists
The SBOL Editors shall hold weekly meetings to coordinate their execution of these responsibilities. These meetings should be open, and their minutes should be reported to the SBOL Development Group.
The SBOL Editors mailing list is editors@sbolstandard.org. Editorial communications should generally use this list to preserve records and aid organization. To preserve organizational memory, former editors are kept on the mailing list until they choose to remove themselves.
SBOL Editors are elected by a community vote, following the process below. Editors serve for a two-year term and cannot serve terms consecutively. To maximize continuity, editors’ terms should be desynchronized. There are 5 editorial positions, currently held by:
- Bryan Bartley, University of Washington
- Jacob Beal, BBN Technologies
- Kevin Clancy, Life Technologies
- Raik Grunberg, University of Montreal
- Goksel Misirli, Newcastle University
Previous SBOL Editors include:
- Michal Galdzicki, University of Washington
- Ernst Oberortner, Boston University
- Jacqueline Quinn, Google
- Matthew Pocock, Newcastle University
- Cesar Rodriguez, Autodesk
- Nicholas Roehner, University of Utah
- Mandy Wilson, Virginia Bioinformatics Institute
## SBOL Chair and Steering Committee
The positions of SBOL Chair and Steering Committee are a means for organizing strategic planning and coordination amongst PI-level members of the SBOL community.
The SBOL Chair is the head of the SBOL Steering Committee. The SBOL Chair is responsible for ensuring regular steering committee meetings are held, for effective moderation of these meetings, and for being an advertised public point of contact for outside organizations.
The SBOL Chair is elected by a community vote, following the process below. Chairs serve for a four-year term and cannot serve terms consecutively. In order to ensure a smooth working relationship between the SBOL Chair and SBOL Editors, the SBOL Editors can remove a chair at any time by a no-confidence vote amongst themselves.
The rest of the SBOL Steering Committee is appointed (and removed) by the SBOL Chair. The Steering Committee has no fixed size or fixed term, but should generally comprise the set of PI-level members who are strongly active, have a strong stake in SBOL, and are willing to assume community leadership responsibilities.
The SBOL Steering Committee shall hold monthly meetings to coordinate around strategic issues for the community (e.g., funding, setting key priorities and goals). These meetings should be open, and their minutes should be reported to the SBOL Development Group. The SBOL Steering Committee should also convene an external advisory board to help maintain strategic links with other communities and to obtain useful advice in guiding the community.
The SBOL Steering Committee mailing list is steering-committee@sbolstandard.org. Communications should generally use this list to preserve records and aid organization. To preserve organizational memory, former chairs and members are kept on the mailing list until they choose to remove themselves.
The SBOL Chair is currently:
- Herbert Sauro, University of Washington
The SBOL Steering Committee is currently:
- Nobody.
## Voting Procedures
### Elections Process:
1. Before any election, there is a nomination period of 5 working days.
2. All members of the SBOL Development Group are eligible for all offices (except as term-limited).
3. Any member can self-nominate or can nominate any other member.
4. Voting runs for 5 working days, starting at the end of the nomination period. All members of the SBOL Development Group are eligible to vote.
5. Candidates are elected by plurality.
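The SEP does not spell out how ties under the plurality rule in step 5 are resolved. As an illustrative sketch only (the function name and the tie-as-list behavior are assumptions, not part of the SEP), the tally could look like:

```python
from collections import Counter

def elect_by_plurality(ballots):
    # Tally single-choice ballots; the candidate with the most votes wins.
    # Ties return every tied candidate, since the SEP defines no
    # tie-breaking rule -- presumably a run-off would be organized.
    tally = Counter(ballots)
    if not tally:
        return []
    top = max(tally.values())
    return sorted(c for c, n in tally.items() if n == top)

# Three ballots for alice, one each for bob and carol:
print(elect_by_plurality(["alice", "bob", "alice", "carol", "alice"]))  # ['alice']
```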
### Voting process:
1. Any member of the SBOL Developers Group can initiate a motion for a vote.
2. Any other member of the SBOL Developers Group can second the motion.
3. SBOL Editors post a voting form, for discussion and amendment over a period of 2 working days.
4. Voting runs for 5 working days, starting at the end of the discussion period. All members of the SBOL Development Group are eligible to vote.
5. The SBOL Editors may extend the voting period by up to an additional 5 working days when they feel that an insufficient number of votes have been obtained.
6. SBOL Editors tally and call the vote. The first vote will be judged by a 67% majority to indicate “rough consensus”.
   1. If rough consensus is not reached, discussion of 3 working days is to follow.
   2. The reasons for decisions must be recorded with the results of the vote.
   3. Any second follow-up vote will be ruled by a 50% majority, and its result will be treated as the decision.
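The two thresholds above can be sketched as a small helper. This is a sketch under stated assumptions, not part of the SEP: the 67% bar is taken as inclusive, abstentions are assumed excluded from the tally, and the function name is invented for illustration.

```python
def call_vote(yes, no, second_round=False):
    # First vote: a 67% majority of yes/no votes indicates rough consensus.
    # Second follow-up vote: a simple 50% majority decides.
    total = yes + no
    if total == 0:
        return False
    share = yes / total
    return share > 0.5 if second_round else share >= 0.67
```

For example, 67 yes to 33 no passes the first round, while 51 to 49 passes only a second round.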
### Voting form must:
1. State clearly the item being voted on.
2. State the version and document to change (e.g. SBOL 1.1 specification) and which version target it works towards.
3. State the eligibility criteria for voting, “All members of the SBOL Developers Group are eligible to vote.”
4. Provide the options for the vote:
   1. Include all opinions made clear in the discussion as options.
   2. Include an “Abstain” and “Table for Further Discussion” option.
   3. Include a “No Change” and a “No Opinion” option when appropriate.
   4. Include a field for entering the email address of a voter.
   5. Include a comments field.
## 3. Discussion <a name='discussion'></a>
A draft proposal was discussed in a session at COMBINE 2015, resulting in the initial version of the document. Significant changes from the original draft were:
- The most important guiding principle is the first statement, that the community runs in a collegial manner by rough consensus. Lots of concerns and possible contingencies were avoided by referring back to that statement, saying that if those become necessary (e.g., a takeover attempt by some subgroup), then the collegial community is already gone, and governance has to be fundamentally restructured in any case.
- A proposal for quorum was removed in favor of adding “Table for Further Discussion” as an option and giving the editors the ability to extend the voting time.
- Elections are settled by plurality, rather than 67%, since people thought it would be silly to have lots of votes when there are multiple candidates. If we start having problems, then we may revisit this.
- The relationship between Chair and Editors was considered very important, and thus the Editors were given the power to remove the chair. Alternate proposals included having the Editors elect or nominate the chair, which were discarded because they felt contrary to open community.
- The external advisory board was not part of the initial draft, but has been widely felt necessary for some time, and so was added. People did not feel it was necessary to define precisely, but that it would be OK to basically leave it up to the steering committee to sort out details of what would work best for them.
In response to mailing list discussion:
- Added point on editors duty to equitably represent.
## References <a name='references'></a>
## Copyright <a name='copyright'></a>
<p xmlns:dct="http://purl.org/dc/terms/" xmlns:vcard="http://www.w3.org/2001/vcard-rdf/3.0#">
<a rel="license"
href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="http://i.creativecommons.org/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" />
</a>
<br />
To the extent possible under law,
<a rel="dct:publisher"
href="sbolstandard.org">
<span property="dct:title">SBOL developers</span></a>
has waived all copyright and related or neighboring rights to
<span property="dct:title">SEP 003</span>.
This work is published from:
<span property="vcard:Country" datatype="dct:ISO3166"
content="US" about="sbolstandard.org">
United States</span>.
</p>
|
1.0
|
process
|
voting procedures elections process before any election there is a nomination period of working days all members of the sbol development group are eligible for all offices except as term limited any member can self nominate or can nominate any other member voting runs for working days starting at the end of the nomination period all members of the sbol development group are eligible to vote candidates are elected by plurality voting process any member of the sbol developers group can initiate a motion for a vote any other member of the sbol developers group can second the motion sbol editors post a voting form for discussion and amendment over a period of working days voting runs for working days starting at the end of the discussion period all members of the sbol development group are eligible to vote the sbol editors may extend the voting period by up to an additional working days when they feel that an insufficient number of votes have been obtained sbol editors tally and call the vote first vote will be judged by a majority to indicate “rough consensus” if rough consensus is not reached discussion of working days is to follow the reasons for decisions must be recorded with the results of the vote any second followup vote will be ruled majority will be treated as the decision voting form must state clearly the item being voted on state the version and document to change e g sbol specification and which version target it works towards state the eligibility criteria for voting “all members of the sbol developers group are eligible to vote ” provide the options for the vote include all opinions made clear in the discussion as options include an “abstain” and “table for further discussion” option include a “no change” and a “no opinion” option when appropriate include a field for entering the email address of a voter include a comments field discussion a draft proposal was discussed in a session at combine resulting in the initial version of the document significant 
changes from the original draft were the most important guiding principle is the first statement that the community runs in a collegial manner by rough consensus lots of concerns and possible contingencies were avoided by referring back to that statement saying that if those become necessary e g a takeover attempt by some subgroup then the the collegial community is already gone and governance has to be fundamentally restructured in any case a proposal for quorum was removed in favor of adding “table for further discussion” as an option and giving the editors the ability to extend the voting time elections are settled by plurality rather than since people thought it would be silly to have lots of votes when there are multiple candidates if we start having problems then we may revisit this the relationship between chair and editors was considered very important and thus the editors were given the power to remove the chair alternate proposals included having the editors elect or nominate the chair which were discarded because they felt contrary to open community the external advisory board was not part of the initial draft but has been widely felt necessary for some time and so was added people did not feel it was necessary to define precisely but that it would be ok to basically leave it up to the steering committee to sort out details of what would work best for them in response to mailing list discussion added point on editors duty to equitably represent references copyright p xmlns dct xmlns vcard a rel license href to the extent possible under law a rel dct publisher href sbolstandard org sbol developers has waived all copyright and related or neighboring rights to sep this work is published from span property vcard country datatype dct content us about sbolstandard org united states
| 1
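The governance text above settles elections by plurality voting among the nominated candidates. A minimal sketch of a plurality tally follows; the candidate names and ballot format are invented for illustration and are not part of any SBOL tooling:

```python
from collections import Counter

def plurality_winner(ballots):
    """Return the candidate with the most votes (plurality, not majority).

    `ballots` is a list of candidate names, one entry per voter. Ties
    are broken arbitrarily by Counter ordering; a real election process
    would need an explicit tie-breaking rule.
    """
    if not ballots:
        raise ValueError("no ballots cast")
    tally = Counter(ballots)
    winner, _ = tally.most_common(1)[0]
    return winner, dict(tally)

winner, tally = plurality_winner(["ada", "ada", "grace", "alan", "grace", "ada"])
print(winner, tally)  # "ada" wins with 3 of 6 votes, short of nothing but a plurality
```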
|
5,691
| 8,560,195,361
|
IssuesEvent
|
2018-11-09 00:02:52
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
Error: cURL error: SSL connect error - builder:vmware-iso, post-processor:vsphere
|
bug post-processor/vsphere
|
Packer fails at the step of exporting the built image into a new VM on ESXi.
The template uses the option **`"insecure": true`**.
The ovftool invocation also has the parameters **`--noSSLVerify --noSSLVerify=true`**.
```
==> vmware-iso: Compacting all attached virtual disks...
vmware-iso: Compacting virtual disk 1
==> vmware-iso: Cleaning VMX prior to finishing up...
vmware-iso: Unmounting floppy from VMX...
vmware-iso: Detaching ISO from CD-ROM device...
vmware-iso: Disabling VNC server...
==> vmware-iso: Exporting virtual machine...
vmware-iso: Executing: ovftool.exe --noSSLVerify --noSSLVerify=true --skipManifestCheck -tt=vmx vi://root:****@10.29.99.81/win-2016-datacenter-test-001 output-vmware-iso
==> vmware-iso: Error exporting virtual machine: exit status 1
==> vmware-iso: Error: cURL error: SSL connect error
==> vmware-iso: Completed with errors
==> vmware-iso:
==> vmware-iso:
==> vmware-iso: Destroying virtual machine...
Build 'vmware-iso' errored: Error exporting virtual machine: exit status 1
Error: cURL error: SSL connect error
Completed with errors
```
- Packer v1.2.4
- VMware OVF Tool 4.0.0
- Windows 10, build on ESXi 6.5
- Template: https://gist.github.com/zetneteork/67627ef60177b98ab669c6acf050dd4f
- Log-output: https://gist.github.com/zetneteork/84946e30bf1210b2050b3578eecfcc92
|
1.0
|
Error: cURL error: SSL connect error - builder:vmware-iso, post-processor:vsphere - Packer fails at the step of exporting the built image into a new VM on ESXi.
The template uses the option **`"insecure": true`**.
The ovftool invocation also has the parameters **`--noSSLVerify --noSSLVerify=true`**.
```
==> vmware-iso: Compacting all attached virtual disks...
vmware-iso: Compacting virtual disk 1
==> vmware-iso: Cleaning VMX prior to finishing up...
vmware-iso: Unmounting floppy from VMX...
vmware-iso: Detaching ISO from CD-ROM device...
vmware-iso: Disabling VNC server...
==> vmware-iso: Exporting virtual machine...
vmware-iso: Executing: ovftool.exe --noSSLVerify --noSSLVerify=true --skipManifestCheck -tt=vmx vi://root:****@10.29.99.81/win-2016-datacenter-test-001 output-vmware-iso
==> vmware-iso: Error exporting virtual machine: exit status 1
==> vmware-iso: Error: cURL error: SSL connect error
==> vmware-iso: Completed with errors
==> vmware-iso:
==> vmware-iso:
==> vmware-iso: Destroying virtual machine...
Build 'vmware-iso' errored: Error exporting virtual machine: exit status 1
Error: cURL error: SSL connect error
Completed with errors
```
- Packer v1.2.4
- VMware OVF Tool 4.0.0
- Windows 10, build on ESXi 6.5
- Template: https://gist.github.com/zetneteork/67627ef60177b98ab669c6acf050dd4f
- Log-output: https://gist.github.com/zetneteork/84946e30bf1210b2050b3578eecfcc92
|
process
|
error curl error ssl connect error builder vmware iso post processor vsphere packer fails on step during exporting build image into new vm on esxi in template there is as option insecure true used also ovftool have parameters nosslverify nosslverify true vmware iso compacting all attached virtual disks vmware iso compacting virtual disk vmware iso cleaning vmx prior to finishing up vmware iso unmounting floppy from vmx vmware iso detaching iso from cd rom device vmware iso disabling vnc server vmware iso exporting virtual machine vmware iso executing ovftool exe nosslverify nosslverify true skipmanifestcheck tt vmx vi root win datacenter test output vmware iso vmware iso error exporting virtual machine exit status vmware iso error curl error ssl connect error vmware iso completed with errors vmware iso vmware iso vmware iso destroying virtual machine build vmware iso errored error exporting virtual machine exit status error curl error ssl connect error completed with errors packer vmware ovf tool windows build on esxi template log output
| 1
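The failing command is visible in the log above. The sketch below simply reassembles that argument list to highlight the flags involved; the host, VM name, and output directory are the (partially redacted) values from the log, and this is an illustration, not Packer's implementation. Note that `--noSSLVerify` only skips certificate verification; a cURL "SSL connect error" happens earlier, during the handshake itself, and with OVF Tool 4.0 against ESXi 6.5 it is reportedly often resolved by upgrading to a newer OVF Tool release.

```python
import shlex

def build_ovftool_cmd(user, host, vm_name, output_dir):
    """Assemble the ovftool export command seen in the Packer log.

    --noSSLVerify mirrors Packer's "insecure": true option (the log
    passes it twice, once bare and once as --noSSLVerify=true; one
    occurrence is enough here).
    """
    return [
        "ovftool",
        "--noSSLVerify",
        "--skipManifestCheck",
        "-tt=vmx",
        f"vi://{user}@{host}/{vm_name}",
        output_dir,
    ]

cmd = build_ovftool_cmd("root", "10.29.99.81",
                        "win-2016-datacenter-test-001", "output-vmware-iso")
print(shlex.join(cmd))
```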
|
132,148
| 18,526,948,616
|
IssuesEvent
|
2021-10-20 21:52:58
|
Azure/autorest
|
https://api.github.com/repos/Azure/autorest
|
closed
|
Constant ObjectSchema
|
Modeler design-discussion P1 - Required triage
|
If every property in an ObjectSchema is constant, I think it should have a ConstantSchema. See ConstantProduct in Validation: https://github.com/Azure/autorest.testserver/blob/e0d8dcad0f06f45ad6ec4416e45b326a139a63ff/swagger/validation.json#L239
|
1.0
|
Constant ObjectSchema - If every property in an ObjectSchema is constant, I think it should have a ConstantSchema. See ConstantProduct in Validation: https://github.com/Azure/autorest.testserver/blob/e0d8dcad0f06f45ad6ec4416e45b326a139a63ff/swagger/validation.json#L239
|
non_process
|
constant objectschema if every property in an objectschema is constant i think it should have a constantschema see constantproduct in validation
| 0
|
223,503
| 7,457,742,941
|
IssuesEvent
|
2018-03-30 06:44:38
|
Kunstmaan/KunstmaanBundlesCMS
|
https://api.github.com/repos/Kunstmaan/KunstmaanBundlesCMS
|
closed
|
Image search in media
|
Priority: Normal Profile: Backend Profile: Frontend Target audience: Administrators Type: Feature Type: Roadmap
|
Add an image search in the media section.
The search can include searching on filename, filesize, filetype, ...
|
1.0
|
Image search in media - Add an image search in the media section.
The search can include searching on filename, filesize, filetype, ...
|
non_process
|
image search in media add a image search in the media the search can include searching on filename filesize filetype
| 0
|
7,475
| 2,610,388,177
|
IssuesEvent
|
2015-02-26 20:05:35
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Gentoo user crashing on startup. Wrong SDL version detected
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Start the hwengine from game(in that case, when start the game, engine
simply doesn't starts).
2. If the engine is invoked from command line, with the string like this:
"hwengine .hedgewars/ /usr/share/games/hedgewars/Data/
Downloads/Repair_Border.37.hwd" the output will looks like that:
"Hedgewars 0.9.17 engine (network protocol: 41)
Init SDL... ok
Init SDL_ttf... ok
Init SDL_image... ok
Loading .hedgewars//Data/Graphics/hwengine.png [flags: 8] An unhandled
exception occurred at $00007F1B5B27C8C9 :
EAccessViolation : Access violation
$00007F1B5B27C8C9"
Maybe problem is with libpng or SDL, but other applications which uses it works
fine.
My operating system is Gentoo, processor architecture is amd64, version of SDL
is 1.2.14-r6, version of sdl-image is 1.2.10-r1, versions of libpng, which
installed on my system is 1.2.46,1.4.8-r2,1.5.7
```
-----
Original issue reported on code.google.com by `gruz...@gmail.com` on 6 Jan 2012 at 8:01
|
1.0
|
Gentoo user crashing on startup. Wrong SDL version detected - ```
What steps will reproduce the problem?
1. Start the hwengine from game(in that case, when start the game, engine
simply doesn't starts).
2. If the engine is invoked from command line, with the string like this:
"hwengine .hedgewars/ /usr/share/games/hedgewars/Data/
Downloads/Repair_Border.37.hwd" the output will looks like that:
"Hedgewars 0.9.17 engine (network protocol: 41)
Init SDL... ok
Init SDL_ttf... ok
Init SDL_image... ok
Loading .hedgewars//Data/Graphics/hwengine.png [flags: 8] An unhandled
exception occurred at $00007F1B5B27C8C9 :
EAccessViolation : Access violation
$00007F1B5B27C8C9"
Maybe problem is with libpng or SDL, but other applications which uses it works
fine.
My operating system is Gentoo, processor architecture is amd64, version of SDL
is 1.2.14-r6, version of sdl-image is 1.2.10-r1, versions of libpng, which
installed on my system is 1.2.46,1.4.8-r2,1.5.7
```
-----
Original issue reported on code.google.com by `gruz...@gmail.com` on 6 Jan 2012 at 8:01
|
non_process
|
gentoo user crashing on startup wrong sdl version detected what steps will reproduce the problem start the hwengine from game in that case when start the game engine simply doesn t starts if the engine is invoked from command line with the string like this hwengine hedgewars usr share games hedgewars data downloads repair border hwd the output will looks like that hedgewars engine network protocol init sdl ok init sdl ttf ok init sdl image ok loading hedgewars data graphics hwengine png an unhandled exception occurred at eaccessviolation access violation maybe problem is with libpng or sdl but other applications which uses it works fine my operating system is gentoo processor architeсture is version of sdl is version of sdl image is versions of libpng which installed on my system is original issue reported on code google com by gruz gmail com on jan at
| 0
|
21,892
| 30,342,064,426
|
IssuesEvent
|
2023-07-11 13:20:18
|
USGS-WiM/StreamStats
|
https://api.github.com/repos/USGS-WiM/StreamStats
|
closed
|
Order Stream Grids table by alphabetical State / Region name
|
Batch Processor
|
Should be a quick fix. Currently they are not sorted and Delaware ended up at the bottom because I updated the FileName in the database:

|
1.0
|
Order Stream Grids table by alphabetical State / Region name - Should be a quick fix. Currently they are not sorted and Delaware ended up at the bottom because I updated the FileName in the database:

|
process
|
order stream grids table by alphabetical state region name should be a quick fix currently they are not sorted and delaware ended up at the bottom because i updated the filename in the database
| 1
|
5,132
| 7,918,331,296
|
IssuesEvent
|
2018-07-04 12:58:05
|
mono/mono
|
https://api.github.com/repos/mono/mono
|
closed
|
UnixServiceController implementation
|
area-BCL: System.ServiceProcess
|
[ ] macOS
[X] Linux
[ ] Windows
**Version Used**: LAST
Any chance this library will be implemented?
|
1.0
|
UnixServiceController implementation - [ ] macOS
[X] Linux
[ ] Windows
**Version Used**: LAST
Any chance this library will be implemented?
|
process
|
unixservicecontroller implementation macos linux windows version used last any chance this library will be implemented
| 1
|
13,727
| 16,488,003,991
|
IssuesEvent
|
2021-05-24 21:09:40
|
CodeForPhilly/paws-data-pipeline
|
https://api.github.com/repos/CodeForPhilly/paws-data-pipeline
|
opened
|
Handle even longer Execute runs, give better UX
|
API Async processes UX
|
When we did #227 , the Execute Match run time was < 60 minutes. As we've added more features, it's now taking just under three hours (on a pretty fast machine). This hits two timeouts:
- 30 minute login refresh
- 60 minute nginx request timer
If the user were to keep the tab in the foreground and hit the refresh button every 30 min, after 60 min nginx will send a 502 (which the JS code does not catch) and the spinner will continue forever as the JS will never see a 200 for the execute request.
If the user then reloads the page (or logs back in after being timed out) she's presented with an Admin page showing uploaded files but no indication of the running job. The normal reaction will be to hit the **Run Analysis** button again, launching a second execute process. This will generally cause an error, killing one or both processes causing a 500 or 502 to be returned to the client.
**Proposal** ________________________________________
Modify **/api/get_execution_status** so as not to require a job id. Ensure there's no more than one active execute running.
When server starts up, check for remnants of an incomplete execution (_i.e._, non-completed job record in DB). Assuming we can know it's dead[1], delete the in-progress record to allow a new run to be started.
Modify client to check **get_execution_status** every X seconds. If there's a run executing, disable the **Run Analysis** button and show the execution progress. If no run in progress, enable the button. Ensure client handles non-200 responses.
Modify execute code to update status every 100 (?) records so we get more frequent updates. Have client check for progress.[2,3]
Investigate check-pointing the match execution. Could we dump to DB every 1000 records and then restart from there later?
<hr>
[1] As we're having uwsgi pre-fork two processes, we _shouldn't_ have new server processes except at startup.
[2] What to do if it appears there's no progress?
[3] Later blocks take much longer than earlier blocks.
|
1.0
|
Handle even longer Execute runs, give better UX - When we did #227 , the Execute Match run time was < 60 minutes. As we've added more features, it's now taking just under three hours (on a pretty fast machine). This hits two timeouts:
- 30 minute login refresh
- 60 minute nginx request timer
If the user were to keep the tab in the foreground and hit the refresh button every 30 min, after 60 min nginx will send a 502 (which the JS code does not catch) and the spinner will continue forever as the JS will never see a 200 for the execute request.
If the user then reloads the page (or logs back in after being timed out) she's presented with an Admin page showing uploaded files but no indication of the running job. The normal reaction will be to hit the **Run Analysis** button again, launching a second execute process. This will generally cause an error, killing one or both processes causing a 500 or 502 to be returned to the client.
**Proposal** ________________________________________
Modify **/api/get_execution_status** so as not to require a job id. Ensure there's no more than one active execute running.
When server starts up, check for remnants of an incomplete execution (_i.e._, non-completed job record in DB). Assuming we can know it's dead[1], delete the in-progress record to allow a new run to be started.
Modify client to check **get_execution_status** every X seconds. If there's a run executing, disable the **Run Analysis** button and show the execution progress. If no run in progress, enable the button. Ensure client handles non-200 responses.
Modify execute code to update status every 100 (?) records so we get more frequent updates. Have client check for progress.[2,3]
Investigate check-pointing the match execution. Could we dump to DB every 1000 records and then restart from there later?
<hr>
[1] As we're having uwsgi pre-fork two processes, we _shouldn't_ have new server processes except at startup.
[2] What to do if it appears there's no progress?
[3] Later blocks take much longer than earlier blocks.
|
process
|
handle even longer execute runs give better ux when we did the execute match run time was minutes as we ve added more features it s now taking just under three hours on a pretty fast machine this hits two timeouts minute login refresh minute nginx request timer if the user were to keep the tab in the foreground and hit the refresh button every min after min nginx will send a which the js code does not catch and the spinner will continue forever as the js will never see a for the execute request if the user then reloads the page or logs back in after being timed out she s presented with an admin page showing uploaded files but no indication of the running job the normal reaction will be to hit the run analysis button again launching a second execute process this will generally cause an error killing one or both processes causing a or to be returned to the client proposal modify api get execution status so as not to require a job id ensure there s no more than one active execute running when server starts up check for remnants of an incomplete execution i e non completed job record in db assuming we can know it s dead delete the in progress record to allow a new run to be started modify client to check get execution status every x seconds if there s a run executing disable the run analysis button and show the execution progress if no run in progress enable the button ensure client handles non responses modify execute code to update status every records so we get more frequent updates have client check for progress investigate check pointing the match execution could we dump to db every records and then restart from there later as we re having uwsgi pre fork two processes we shouldn t have new server processes except at startup what to do if it appears there s no progress later blocks take much longer than earlier blocks
| 1
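The proposal above boils down to a generic client-side poll loop against **/api/get_execution_status**. A minimal sketch of such a loop, assuming a status shape like `{"running": bool, "progress": int}`; the endpoint name comes from the issue, everything else here is a stand-in:

```python
import time

def poll_until_complete(fetch_status, interval=5.0, timeout=600.0, sleep=time.sleep):
    """Poll fetch_status() until it reports completion or the timeout expires.

    fetch_status is any callable returning a dict like
    {"running": bool, "progress": int}; in the proposal it would wrap
    an HTTP GET to /api/get_execution_status. Injecting `sleep` keeps
    the loop testable without real waiting.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if not status.get("running"):
            return status
        sleep(interval)
    raise TimeoutError("execution did not finish within timeout")
```

On the UI side this is where the **Run Analysis** button would be disabled while `status["running"]` is true, matching the proposal's behavior.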
|
22,127
| 30,672,909,960
|
IssuesEvent
|
2023-07-26 01:03:48
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
closed
|
Warning: a recent release failed
|
type: process
|
The following release PRs may have failed:
* #22565 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #22566 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
|
1.0
|
Warning: a recent release failed - The following release PRs may have failed:
* #22565 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
* #22566 - The release job is 'autorelease: pending', but expected 'autorelease: published'.
|
process
|
warning a recent release failed the following release prs may have failed the release job is autorelease pending but expected autorelease published the release job is autorelease pending but expected autorelease published
| 1
|
4,035
| 6,971,711,244
|
IssuesEvent
|
2017-12-11 14:53:38
|
w3c/html
|
https://api.github.com/repos/w3c/html
|
closed
|
CFC: Merge Web Workers into HTML
|
process
|
This is a Call For Consensus (CFC) to merge the [Web Workers](https://w3c.github.io/workers/) specification into the [HTML](https://w3c.github.io/html) specification.
The reason for merging the two specifications is that it would make it easier to maintain Web Workers, and therefore more likely that the Web Workers parts of the HTML specification will be kept up to date (and issues addressed more responsively).
The proposal was raised as issue #1075, and Sangwhan Moon (the current Web Workers editor) has [agreed to do the work](https://github.com/w3c/html/issues/1075#issuecomment-347460911).
Please respond to this CFC by the end of day on Thursday 7th December. To support the proposal, add a "thumbs up" to this comment. If you don't support the proposal, add a "thumbs down" and post your reasons in a comment.
If you choose not to respond it will be taken as silent support for the proposal. Actual responses are preferred however.
|
1.0
|
CFC: Merge Web Workers into HTML - This is a Call For Consensus (CFC) to merge the [Web Workers](https://w3c.github.io/workers/) specification into the [HTML](https://w3c.github.io/html) specification.
The reason for merging the two specifications is that it would make it easier to maintain Web Workers, and therefore more likely that the Web Workers parts of the HTML specification will be kept up to date (and issues addressed more responsively).
The proposal was raised as issue #1075, and Sangwhan Moon (the current Web Workers editor) has [agreed to do the work](https://github.com/w3c/html/issues/1075#issuecomment-347460911).
Please respond to this CFC by the end of day on Thursday 7th December. To support the proposal, add a "thumbs up" to this comment. If you don't support the proposal, add a "thumbs down" and post your reasons in a comment.
If you choose not to respond it will be taken as silent support for the proposal. Actual responses are preferred however.
|
process
|
cfc merge web workers into html this is a call for consensus cfc to merge the specification into the specification the reason for merging the two specifications is that it would make it easier to maintain web workers and therefore more likely that the web workers parts of the html specification will be kept up to date and issues addressed more responsively the proposal was raised as issue and sangwhan moon the current web workers editor has please respond to this cfc by the end of day on thursday december to support the proposal add a thumbs up to this comment if you don t support the proposal add a thumbs down and post your reasons in a comment if you choose not to respond it will be taken as silent support for the proposal actual responses are preferred however
| 1
|
6,897
| 10,039,752,311
|
IssuesEvent
|
2019-07-18 18:13:05
|
dCentralizedSystems/customer-support
|
https://api.github.com/repos/dCentralizedSystems/customer-support
|
opened
|
7 day timespan report takes too long and is inconsistent
|
data sample client fleet management stream processing
|
Known issue; an internal issue was created to track it. We currently use unicast (single-node query) to retrieve data, since merging across nodes takes too long. But since the load balancer randomly picks a node, if each node has gaps in its index, the results will vary.
We have enabled full replication since the middle of this week, so in the future even unicast should return complete data sets, regardless of node.
I will resolve this issue once the performance is improved.
internal issue:
https://github.com/dCentralizedSystems/cap-ui/issues/61
|
1.0
|
7 day timespan report takes too long and is inconsistent - Known issue; an internal issue was created to track it. We currently use unicast (single-node query) to retrieve data, since merging across nodes takes too long. But since the load balancer randomly picks a node, if each node has gaps in its index, the results will vary.
We have enabled full replication since the middle of this week, so in the future even unicast should return complete data sets, regardless of node.
I will resolve this issue once the performance is improved.
internal issue:
https://github.com/dCentralizedSystems/cap-ui/issues/61
|
process
|
day timespan report takes too long and is inconsistent known issue internal issue created to track we currently use unicast single node query to retrieve data since merging across nodes takes too long but since the load balancer randomly picks a node if each node has gaps in its index the results will vary we have enabled full replication since middle of this week so in the future even unicast should return complete data sets regardless of node i will resolve this issue once the performance is improved internal issue
| 1
|
91,028
| 11,457,915,874
|
IssuesEvent
|
2020-02-07 01:24:01
|
draveness/blog-comments
|
https://api.github.com/repos/draveness/blog-comments
|
opened
|
Why Can Bitcoin Resist Tampering · Why's THE Design?
|
/whys-the-design-bitcoin-database Gitalk
|
https://draveness.me/whys-the-design-bitcoin-database
If you know even a little about blockchain technologies such as Bitcoin, you will find that Bitcoin is an ingeniously designed distributed database. As a distributed database running on the public internet, Bitcoin and other blockchain networks must face attacks from malicious nodes in the network. Because Bitcoin has to cope with a complex network environment and unreliable nodes, its design and implementation respond to this; we can look at how it combines existing techniques to prevent malicious nodes from tampering with transaction and account data.
|
1.0
|
Why Can Bitcoin Resist Tampering · Why's THE Design? - https://draveness.me/whys-the-design-bitcoin-database
If you know even a little about blockchain technologies such as Bitcoin, you will find that Bitcoin is an ingeniously designed distributed database. As a distributed database running on the public internet, Bitcoin and other blockchain networks must face attacks from malicious nodes in the network. Because Bitcoin has to cope with a complex network environment and unreliable nodes, its design and implementation respond to this; we can look at how it combines existing techniques to prevent malicious nodes from tampering with transaction and account data.
|
non_process
|
为什么比特币可以防篡改 · why s the design 如果我们对比特币等区块链技术稍有了解,就会发现它是一个设计巧妙的分布式数据库。作为在公网运行的分布式数据库,比特币和其它区块链网络都会面对网络中的恶意节点的攻击。因为比特币需要面对复杂的网络环境以及不可靠的节点,所以它在设计和实现上也做出了应对,我们可以看看它是如何组合现有的技术防止恶意节点对交易和账户数据进行篡改的。
| 0
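The linked post above attributes Bitcoin's tamper resistance to blocks being chained by hash, so that altering any historical record invalidates the chain from that point on. A toy sketch of that idea; nothing here matches Bitcoin's real block format or proof-of-work:

```python
import hashlib

GENESIS = "0" * 64  # placeholder "previous hash" for the first block

def block_hash(prev_hash, payload):
    """Each block's hash commits to both its payload and its predecessor."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], GENESIS
    for payload in payloads:
        h = block_hash(prev, payload)
        chain.append({"prev": prev, "payload": payload, "hash": h})
        prev = h
    return chain

def verify(chain):
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["payload"]):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice->bob:5", "bob->carol:2"])
assert verify(chain)
chain[0]["payload"] = "alice->bob:500"  # tamper with an early transaction
assert not verify(chain)                # verification now fails
```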
|
2,946
| 5,924,115,160
|
IssuesEvent
|
2017-05-23 09:39:12
|
Hurence/logisland
|
https://api.github.com/repos/Hurence/logisland
|
opened
|
Add DeepLearning learning demo processor
|
cyber-security feature processor
|
The aim of this processor is to show how to deploy a deep-learning processor in logisland using deeplearning4j. It implements a neural network model built and trained beforehand on the famous MNIST hand-digit dataset.
# Specifications like the version of the project, operating system, or hardware.
|
1.0
|
Add DeepLearning learning demo processor - The aim of this processor is to show how to deploy a deep-learning processor in logisland using deeplearning4j. It implements a neural network model built and trained beforehand on the famous MNIST hand-digit dataset.
# Specifications like the version of the project, operating system, or hardware.
|
process
|
add deeplearning learning demo processor the aim of this processor is to show how to deploy a deeplearning processor in logisland using it implement a preliminary built and trained with mnist dataset neural network model trained on the famous mnist hand digit dataset specifications like the version of the project operating system or hardware
| 1
|
13,375
| 15,835,715,597
|
IssuesEvent
|
2021-04-06 18:22:19
|
EKGF/ekg-mm
|
https://api.github.com/repos/EKGF/ekg-mm
|
closed
|
Add collaboration process as appendix
|
ekg-mm-process
|
See also issue #17.
The version of the Google Docs doc (saved as PDF) that will now be added as LaTeX to the EKG/MM repo:
[ekgf - collaboration process.pdf](https://github.com/EKGF/ekg-mm/files/5787152/ekgf.-.collaboration.process.pdf)
|
1.0
|
Add collaboration process as appendix - See also issue #17.
The version of the Google Docs doc (saved as PDF) that will now be added as LaTeX to the EKG/MM repo:
[ekgf - collaboration process.pdf](https://github.com/EKGF/ekg-mm/files/5787152/ekgf.-.collaboration.process.pdf)
|
process
|
add collaboration process as appendix see also issue the version of the google docs doc saved as pdf that will now be added as latex to the ekg mm repo
| 1
|
13,001
| 15,361,038,177
|
IssuesEvent
|
2021-03-01 17:38:28
|
googleapis/python-org-policy
|
https://api.github.com/repos/googleapis/python-org-policy
|
opened
|
Use bazel to generate _pb2.py files
|
type: process
|
Follow up on https://github.com/googleapis/python-org-policy/blob/d5073bc0fa69af37948137b4e7e8777c5ce83faf/synth.py#L81-L83
Use bazel rules to generate the `_pb2.py` files for orgpolicy v1.
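A hedged sketch of what such a bazel rule invocation might look like (target names are hypothetical; this assumes the `py_proto_library` rule from gRPC's `python_rules.bzl`, not the actual rule set chosen for this repo):

```python
# BUILD.bazel sketch -- hypothetical target names
load("@com_github_grpc_grpc//bazel:python_rules.bzl", "py_proto_library")

py_proto_library(
    name = "orgpolicy_py_proto",
    # proto_library target for the orgpolicy v1 protos (hypothetical)
    deps = [":orgpolicy_proto"],
)
```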
|
1.0
|
|
process
|
use bazel to generate py files follow up on use bazel rules to generate the py files for orgpolicy
| 1
|
13,715
| 23,606,691,205
|
IssuesEvent
|
2022-08-24 08:52:02
|
renovatebot/renovate
|
https://api.github.com/repos/renovatebot/renovate
|
opened
|
rollbackPr breaks updates for poetry project
|
type:bug status:requirements priority-5-triage
|
### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
32.173.1
### Please select which platform you are using if self-hosting.
gitlab.com
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
It used to work, and then stopped
### Describe the bug
With `rollbackPRs` enabled, no PRs for updates get created. Instead it fails with the error message: "Can't find version matching 5.2.3 for celery".
The log claims the version is missing, although this version of celery is still available: https://pypi.org/project/celery/5.2.3/
```
Missing version has nothing to roll back to (repository=dsch1/renovate-rollback-pr, packageFile=pyproject.toml)
"depName": "celery",
"currentValue": "5.2.3"
```
minimal reproduction repository: https://gitlab.com/dsch1/renovate-rollback-pr
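The triggering configuration can be reconstructed from the debug output below: `renovate.json` contains exactly `{"rollbackPrs": true}` (see the "Repository config" log entry), and the poetry manifest pins celery to the version the lookup then fails on. A sketch of the relevant `pyproject.toml` fragment, with values inferred from the logs rather than copied from the reproduction repo:

```toml
# pyproject.toml fragment -- dependency values inferred from the debug log
[tool.poetry.dependencies]
python = "^3.10"
celery = "5.2.3"
```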
### Relevant debug logs
<details><summary>Logs</summary>
```
DEBUG: Using RE2 as regex engine
DEBUG: Parsing configs
DEBUG: Checking for config file in /usr/src/app/config.js
DEBUG: No config file found on disk - skipping
WARN: cli config dryRun property has been changed to full
DEBUG: Converting GITHUB_COM_TOKEN into a global host rule
DEBUG: File config
"config": {}
DEBUG: CLI config
"config": {
"repositories": ["dsch1/renovate-rollback-pr"],
"dryRun": "full",
"platform": "gitlab"
}
DEBUG: Env config
"config": {
"hostRules": [
{"hostType": "github", "matchHost": "github.com", "token": "***********"}
],
"token": "***********"
}
DEBUG: Combined config
"config": {
"hostRules": [
{"hostType": "github", "matchHost": "github.com", "token": "***********"}
],
"token": "***********",
"repositories": ["dsch1/renovate-rollback-pr"],
"dryRun": "full",
"platform": "gitlab"
}
DEBUG: Found valid git version: 2.37.2
DEBUG: Using default GitLab endpoint: https://gitlab.com/api/v4/
DEBUG: GitLab version is: 15.4.0-pre
DEBUG: Using platform gitAuthor: David Schneider <schneidav81@gmail.com>
DEBUG: Adding token authentication for gitlab.com to hostRules
DEBUG: Using baseDir: /tmp/renovate
DEBUG: Using cacheDir: /tmp/renovate/cache
DEBUG: Initializing Renovate internal cache into /tmp/renovate/cache/renovate/renovate-cache-v1
DEBUG: Commits limit = null
DEBUG: Setting global hostRules
DEBUG: Adding token authentication for github.com to hostRules
DEBUG: Adding token authentication for gitlab.com to hostRules
DEBUG: validatePresets()
DEBUG: Reinitializing hostRules for repo
DEBUG: Clearing hostRules
DEBUG: Adding token authentication for github.com to hostRules
DEBUG: Adding token authentication for gitlab.com to hostRules
INFO: Repository started (repository=dsch1/renovate-rollback-pr)
"renovateVersion": "32.173.1"
DEBUG: Using localDir: /tmp/renovate/repos/gitlab/dsch1/renovate-rollback-pr (repository=dsch1/renovate-rollback-pr)
DEBUG: PackageFiles.clear() - Package files deleted (repository=dsch1/renovate-rollback-pr)
"baseBranches": []
DEBUG: resetMemCache() (repository=dsch1/renovate-rollback-pr)
DEBUG: dsch1/renovate-rollback-pr default branch = main (repository=dsch1/renovate-rollback-pr)
DEBUG: Enabling Git FS (repository=dsch1/renovate-rollback-pr)
DEBUG: using http URL (repository=dsch1/renovate-rollback-pr)
"url": "https://gitlab.com/dsch1/renovate-rollback-pr.git"
DEBUG: Resetting npmrc (repository=dsch1/renovate-rollback-pr)
DEBUG: detectSemanticCommits() (repository=dsch1/renovate-rollback-pr)
DEBUG: Initializing git repository into /tmp/renovate/repos/gitlab/dsch1/renovate-rollback-pr (repository=dsch1/renovate-rollback-pr)
DEBUG: Performing blobless clone (repository=dsch1/renovate-rollback-pr)
DEBUG: git clone completed (repository=dsch1/renovate-rollback-pr)
"durationMs": 2357
DEBUG: latest repository commit (repository=dsch1/renovate-rollback-pr)
"latestCommit": {
"hash": "7fb90c6e071e0695d79b88b5aff2e058d7bee554",
"date": "2022-08-24T10:21:54+02:00",
"message": "onboarding",
"refs": "HEAD -> main, origin/main, origin/HEAD",
"body": "",
"author_name": "David Schneider",
"author_email": "david.schneider@chargepoint.com"
}
DEBUG: getCommitMessages (repository=dsch1/renovate-rollback-pr)
DEBUG: Semantic commits detection: unknown (repository=dsch1/renovate-rollback-pr)
DEBUG: No semantic commits detected (repository=dsch1/renovate-rollback-pr)
DEBUG: checkOnboarding() (repository=dsch1/renovate-rollback-pr)
DEBUG: isOnboarded() (repository=dsch1/renovate-rollback-pr)
DEBUG: findFile(renovate.json) (repository=dsch1/renovate-rollback-pr)
DEBUG: Config file exists (repository=dsch1/renovate-rollback-pr)
"fileName": "renovate.json"
DEBUG: ensureIssueClosing() (repository=dsch1/renovate-rollback-pr)
DEBUG: Repo is onboarded (repository=dsch1/renovate-rollback-pr)
DEBUG: Found renovate.json config file (repository=dsch1/renovate-rollback-pr)
DEBUG: Repository config (repository=dsch1/renovate-rollback-pr)
"fileName": "renovate.json",
"config": {"rollbackPrs": true}
DEBUG: migrateAndValidate() (repository=dsch1/renovate-rollback-pr)
DEBUG: No config migration necessary (repository=dsch1/renovate-rollback-pr)
DEBUG: massaged config (repository=dsch1/renovate-rollback-pr)
"config": {"rollbackPrs": true}
DEBUG: migrated config (repository=dsch1/renovate-rollback-pr)
"config": {"rollbackPrs": true}
DEBUG: Found repo ignorePaths (repository=dsch1/renovate-rollback-pr)
"ignorePaths": ["**/node_modules/**", "**/bower_components/**"]
DEBUG: No vulnerability alerts found (repository=dsch1/renovate-rollback-pr)
DEBUG: No baseBranches (repository=dsch1/renovate-rollback-pr)
DEBUG: extract() (repository=dsch1/renovate-rollback-pr)
DEBUG: Setting current branch to main (repository=dsch1/renovate-rollback-pr)
DEBUG: latest commit (repository=dsch1/renovate-rollback-pr)
"branchName": "main",
"latestCommitDate": "2022-08-24T10:21:54+02:00"
DEBUG: Using file match: (^|/)tasks/[^/]+\.ya?ml$ for manager ansible (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)requirements\.ya?ml$ for manager ansible-galaxy (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)galaxy\.ya?ml$ for manager ansible-galaxy (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: azure.*pipelines?.*\.ya?ml$ for manager azure-pipelines (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)batect(-bundle)?\.yml$ for manager batect (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)batect$ for manager batect-wrapper (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)WORKSPACE(|\.bazel)$ for manager bazel (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: \.bzl$ for manager bazel (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|\/)\.bazelversion$ for manager bazelisk (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)\.?bitbucket-pipelines\.ya?ml$ for manager bitbucket-pipelines (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: buildkite\.ya?ml for manager buildkite (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: \.buildkite/.+\.ya?ml$ for manager buildkite (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)Gemfile$ for manager bundler (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: \.cake$ for manager cake (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)Cargo\.toml$ for manager cargo (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)\.circleci/config\.yml$ for manager circleci (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)cloudbuild\.ya?ml for manager cloudbuild (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)Podfile$ for manager cocoapods (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)([\w-]*)composer\.json$ for manager composer (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)conanfile\.(txt|py)$ for manager conan (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)(?:deps|bb)\.edn$ for manager deps-edn (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)(?:docker-)?compose[^/]*\.ya?ml$ for manager docker-compose (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/|\.)Dockerfile$ for manager dockerfile (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)Dockerfile[^/]*$ for manager dockerfile (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)\.drone\.yml$ for manager droneci (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)fleet\.ya?ml for manager fleet (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)flux-system/gotk-components\.yaml$ for manager flux (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|\/)\.fvm\/fvm_config\.json$ for manager fvm (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)\.gitmodules$ for manager git-submodules (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: ^(workflow-templates|\.github\/workflows)\/[^/]+\.ya?ml$ for manager github-actions (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|\/)action\.ya?ml$ for manager github-actions (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: \.gitlab-ci\.yml$ for manager gitlabci (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: \.gitlab-ci\.yml$ for manager gitlabci-include (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)go\.mod$ for manager gomod (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: \.gradle(\.kts)?$ for manager gradle (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|\/)gradle\.properties$ for manager gradle (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|\/)gradle\/.+\.toml$ for manager gradle (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: \.versions\.toml$ for manager gradle (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)gradle/wrapper/gradle-wrapper\.properties$ for manager gradle-wrapper (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)requirements\.yaml$ for manager helm-requirements (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)values\.yaml$ for manager helm-values (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)helmfile\.yaml$ for manager helmfile (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)Chart\.yaml$ for manager helmv3 (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)bin/hermit$ for manager hermit (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: ^Formula/[^/]+[.]rb$ for manager homebrew (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: \.html?$ for manager html (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)plugins\.(txt|ya?ml)$ for manager jenkins (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)jsonnetfile\.json$ for manager jsonnet-bundler (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: ^.+\.main\.kts$ for manager kotlin-script (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)kustomization\.ya?ml$ for manager kustomize (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)project\.clj$ for manager leiningen (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/|\.)pom\.xml$ for manager maven (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: ^(((\.mvn)|(\.m2))/)?settings\.xml$ for manager maven (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)package\.js$ for manager meteor (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)mix\.exs$ for manager mix (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)\.node-version$ for manager nodenv (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)package\.json$ for manager npm (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: \.(?:cs|fs|vb)proj$ for manager nuget (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: \.(?:props|targets)$ for manager nuget (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|\/)dotnet-tools\.json$ for manager nuget (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|\/)global\.json$ for manager nuget (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)\.nvmrc$ for manager nvm (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)([\w-]*)requirements\.(txt|pip)$ for manager pip_requirements (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)setup\.py$ for manager pip_setup (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)Pipfile$ for manager pipenv (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)pyproject\.toml$ for manager poetry (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)\.pre-commit-config\.yaml$ for manager pre-commit (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)pubspec\.ya?ml$ for manager pub (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|\/)Puppetfile$ for manager puppet (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)\.python-version$ for manager pyenv (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)\.ruby-version$ for manager ruby-version (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: \.sbt$ for manager sbt (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: project/[^/]*.scala$ for manager sbt (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)setup\.cfg$ for manager setup-cfg (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)Package\.swift for manager swift (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: \.tf$ for manager terraform (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)\.terraform-version$ for manager terraform-version (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)terragrunt\.hcl$ for manager terragrunt (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)\.terragrunt-version$ for manager terragrunt-version (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: ^\.travis\.yml$ for manager travis (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)\.vela\.ya?ml$ for manager velaci (repository=dsch1/renovate-rollback-pr)
DEBUG: Matched 1 file(s) for manager poetry: pyproject.toml (repository=dsch1/renovate-rollback-pr)
DEBUG: Found poetry package files (repository=dsch1/renovate-rollback-pr)
DEBUG: Found 1 package file(s) (repository=dsch1/renovate-rollback-pr)
INFO: Dependency extraction complete (repository=dsch1/renovate-rollback-pr, baseBranch=main)
"stats": {
"managers": {"poetry": {"fileCount": 1, "depCount": 1}},
"total": {"fileCount": 1, "depCount": 1}
}
DEBUG: Missing version has nothing to roll back to (repository=dsch1/renovate-rollback-pr, packageFile=pyproject.toml)
"depName": "celery",
"currentValue": "5.2.3"
DEBUG: PackageFiles.add() - Package file saved for branch (repository=dsch1/renovate-rollback-pr, baseBranch=main)
DEBUG: Package releases lookups complete (repository=dsch1/renovate-rollback-pr, baseBranch=main)
DEBUG: branchifyUpgrades (repository=dsch1/renovate-rollback-pr)
DEBUG: 0 flattened updates found: (repository=dsch1/renovate-rollback-pr)
DEBUG: Returning 0 branch(es) (repository=dsch1/renovate-rollback-pr)
DEBUG: config.repoIsOnboarded=true (repository=dsch1/renovate-rollback-pr)
DEBUG: packageFiles with updates (repository=dsch1/renovate-rollback-pr, baseBranch=main)
"config": {
"poetry": [
{
"packageFile": "pyproject.toml",
"deps": [
{
"depName": "celery",
"depType": "dependencies",
"currentValue": "5.2.3",
"managerData": {"nestedVersion": false},
"datasource": "pypi",
"lockedVersion": "5.2.3",
"versioning": "pep440",
"depIndex": 0,
"updates": [],
"warnings": [
{
"topic": "celery",
"message": "Can't find version matching 5.2.3 for celery"
}
],
"sourceUrl": "https://github.com/celery/celery",
"homepage": "http://celeryproject.org",
"changelogUrl": "https://docs.celeryproject.org/en/stable/changelog.html"
}
],
"extractedConstraints": {"python": "^3.10"},
"lockFiles": ["poetry.lock"]
}
]
}
DEBUG: processRepo() (repository=dsch1/renovate-rollback-pr)
DEBUG: Processing 0 branches: (repository=dsch1/renovate-rollback-pr)
DEBUG: Calculated maximum PRs remaining this run (repository=dsch1/renovate-rollback-pr)
"prsRemaining": 99
DEBUG: PullRequests limit = 99 (repository=dsch1/renovate-rollback-pr)
DEBUG: Calculated maximum branches remaining this run (repository=dsch1/renovate-rollback-pr)
"branchesRemaining": 99
DEBUG: Branches limit = 99 (repository=dsch1/renovate-rollback-pr)
INFO: DRY-RUN: Would close Dependency Dashboard (repository=dsch1/renovate-rollback-pr)
"title": "Dependency Dashboard"
DEBUG: Removing any stale branches (repository=dsch1/renovate-rollback-pr)
DEBUG: config.repoIsOnboarded=true (repository=dsch1/renovate-rollback-pr)
DEBUG: No renovate branches found (repository=dsch1/renovate-rollback-pr)
DEBUG: ensureIssueClosing() (repository=dsch1/renovate-rollback-pr)
DEBUG: PackageFiles.clear() - Package files deleted (repository=dsch1/renovate-rollback-pr)
"baseBranches": ["main"]
DEBUG: Renovate repository PR statistics (repository=dsch1/renovate-rollback-pr)
"stats": {"total": 0, "open": 0, "closed": 0, "merged": 0}
DEBUG: Repository result: done, status: onboarded, enabled: true, onboarded: true (repository=dsch1/renovate-rollback-pr)
DEBUG: Repository timing splits (milliseconds) (repository=dsch1/renovate-rollback-pr)
"splits": {"init": 4458, "extract": 176, "lookup": 382, "onboarding": 1, "update": 1},
"total": 5493
DEBUG: http statistics (repository=dsch1/renovate-rollback-pr)
"urls": {
"https://gitlab.com/api/v4/projects/dsch1%2Frenovate-rollback-pr/issues (GET,200)": 1,
"https://gitlab.com/api/v4/projects/dsch1%2Frenovate-rollback-pr/merge_requests (GET,200)": 1,
"https://pypi.org/pypi/celery/json (GET,200)": 1
},
"hostStats": {
"gitlab.com": {"requestCount": 2, "requestAvgMs": 448, "queueAvgMs": 0},
"pypi.org": {"requestCount": 1, "requestAvgMs": 313, "queueAvgMs": 0}
},
"totalRequests": 3
INFO: Repository finished (repository=dsch1/renovate-rollback-pr)
"durationMs": 5493
DEBUG: Renovate exiting
```
</details>
### Have you created a minimal reproduction repository?
I have linked to a minimal reproduction repository in the bug description
|
1.0
|
DEBUG: Using file match: ^\.travis\.yml$ for manager travis (repository=dsch1/renovate-rollback-pr)
DEBUG: Using file match: (^|/)\.vela\.ya?ml$ for manager velaci (repository=dsch1/renovate-rollback-pr)
DEBUG: Matched 1 file(s) for manager poetry: pyproject.toml (repository=dsch1/renovate-rollback-pr)
DEBUG: Found poetry package files (repository=dsch1/renovate-rollback-pr)
DEBUG: Found 1 package file(s) (repository=dsch1/renovate-rollback-pr)
INFO: Dependency extraction complete (repository=dsch1/renovate-rollback-pr, baseBranch=main)
"stats": {
"managers": {"poetry": {"fileCount": 1, "depCount": 1}},
"total": {"fileCount": 1, "depCount": 1}
}
DEBUG: Missing version has nothing to roll back to (repository=dsch1/renovate-rollback-pr, packageFile=pyproject.toml)
"depName": "celery",
"currentValue": "5.2.3"
DEBUG: PackageFiles.add() - Package file saved for branch (repository=dsch1/renovate-rollback-pr, baseBranch=main)
DEBUG: Package releases lookups complete (repository=dsch1/renovate-rollback-pr, baseBranch=main)
DEBUG: branchifyUpgrades (repository=dsch1/renovate-rollback-pr)
DEBUG: 0 flattened updates found: (repository=dsch1/renovate-rollback-pr)
DEBUG: Returning 0 branch(es) (repository=dsch1/renovate-rollback-pr)
DEBUG: config.repoIsOnboarded=true (repository=dsch1/renovate-rollback-pr)
DEBUG: packageFiles with updates (repository=dsch1/renovate-rollback-pr, baseBranch=main)
"config": {
"poetry": [
{
"packageFile": "pyproject.toml",
"deps": [
{
"depName": "celery",
"depType": "dependencies",
"currentValue": "5.2.3",
"managerData": {"nestedVersion": false},
"datasource": "pypi",
"lockedVersion": "5.2.3",
"versioning": "pep440",
"depIndex": 0,
"updates": [],
"warnings": [
{
"topic": "celery",
"message": "Can't find version matching 5.2.3 for celery"
}
],
"sourceUrl": "https://github.com/celery/celery",
"homepage": "http://celeryproject.org",
"changelogUrl": "https://docs.celeryproject.org/en/stable/changelog.html"
}
],
"extractedConstraints": {"python": "^3.10"},
"lockFiles": ["poetry.lock"]
}
]
}
DEBUG: processRepo() (repository=dsch1/renovate-rollback-pr)
DEBUG: Processing 0 branches: (repository=dsch1/renovate-rollback-pr)
DEBUG: Calculated maximum PRs remaining this run (repository=dsch1/renovate-rollback-pr)
"prsRemaining": 99
DEBUG: PullRequests limit = 99 (repository=dsch1/renovate-rollback-pr)
DEBUG: Calculated maximum branches remaining this run (repository=dsch1/renovate-rollback-pr)
"branchesRemaining": 99
DEBUG: Branches limit = 99 (repository=dsch1/renovate-rollback-pr)
INFO: DRY-RUN: Would close Dependency Dashboard (repository=dsch1/renovate-rollback-pr)
"title": "Dependency Dashboard"
DEBUG: Removing any stale branches (repository=dsch1/renovate-rollback-pr)
DEBUG: config.repoIsOnboarded=true (repository=dsch1/renovate-rollback-pr)
DEBUG: No renovate branches found (repository=dsch1/renovate-rollback-pr)
DEBUG: ensureIssueClosing() (repository=dsch1/renovate-rollback-pr)
DEBUG: PackageFiles.clear() - Package files deleted (repository=dsch1/renovate-rollback-pr)
"baseBranches": ["main"]
DEBUG: Renovate repository PR statistics (repository=dsch1/renovate-rollback-pr)
"stats": {"total": 0, "open": 0, "closed": 0, "merged": 0}
DEBUG: Repository result: done, status: onboarded, enabled: true, onboarded: true (repository=dsch1/renovate-rollback-pr)
DEBUG: Repository timing splits (milliseconds) (repository=dsch1/renovate-rollback-pr)
"splits": {"init": 4458, "extract": 176, "lookup": 382, "onboarding": 1, "update": 1},
"total": 5493
DEBUG: http statistics (repository=dsch1/renovate-rollback-pr)
"urls": {
"https://gitlab.com/api/v4/projects/dsch1%2Frenovate-rollback-pr/issues (GET,200)": 1,
"https://gitlab.com/api/v4/projects/dsch1%2Frenovate-rollback-pr/merge_requests (GET,200)": 1,
"https://pypi.org/pypi/celery/json (GET,200)": 1
},
"hostStats": {
"gitlab.com": {"requestCount": 2, "requestAvgMs": 448, "queueAvgMs": 0},
"pypi.org": {"requestCount": 1, "requestAvgMs": 313, "queueAvgMs": 0}
},
"totalRequests": 3
INFO: Repository finished (repository=dsch1/renovate-rollback-pr)
"durationMs": 5493
DEBUG: Renovate exiting
```
</details>
### Have you created a minimal reproduction repository?
I have linked to a minimal reproduction repository in the bug description
|
non_process
|
rollbackpr breaks updates for poetry project how are you running renovate self hosted if you re self hosting renovate tell us what version of renovate you run please select which platform you are using if self hosting gitlab com if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped it used to work and then stopped describe the bug with rollbackprs enabled no prs for updates get created instead it fails with an error message can t find version matching for celery the log claims the version is missing although this version of celery is still available missing version has nothing to roll back to repository renovate rollback pr packagefile pyproject toml depname celery currentvalue minimal reproduction repository relevant debug logs logs debug using as regex engine debug parsing configs debug checking for config file in usr src app config js debug no config file found on disk skipping warn cli config dryrun property has been changed to full debug converting github com token into a global host rule debug file config config debug cli config config repositories dryrun full platform gitlab debug env config config hostrules hosttype github matchhost github com token token debug combined config config hostrules hosttype github matchhost github com token token repositories dryrun full platform gitlab debug found valid git version debug using default gitlab endpoint debug gitlab version is pre debug using platform gitauthor david schneider debug adding token authentication for gitlab com to hostrules debug using basedir tmp renovate debug using cachedir tmp renovate cache debug initializing renovate internal cache into tmp renovate cache renovate renovate cache debug commits limit null debug setting global hostrules debug adding token authentication for github com to hostrules debug adding token authentication for gitlab com to hostrules debug validatepresets debug reinitializing 
hostrules for repo debug clearing hostrules debug adding token authentication for github com to hostrules debug adding token authentication for gitlab com to hostrules info repository started repository renovate rollback pr renovateversion debug using localdir tmp renovate repos gitlab renovate rollback pr repository renovate rollback pr debug packagefiles clear package files deleted repository renovate rollback pr basebranches debug resetmemcache repository renovate rollback pr debug renovate rollback pr default branch main repository renovate rollback pr debug enabling git fs repository renovate rollback pr debug using http url repository renovate rollback pr url debug resetting npmrc repository renovate rollback pr debug detectsemanticcommits repository renovate rollback pr debug initializing git repository into tmp renovate repos gitlab renovate rollback pr repository renovate rollback pr debug performing blobless clone repository renovate rollback pr debug git clone completed repository renovate rollback pr durationms debug latest repository commit repository renovate rollback pr latestcommit hash date message onboarding refs head main origin main origin head body author name david schneider author email david schneider chargepoint com debug getcommitmessages repository renovate rollback pr debug semantic commits detection unknown repository renovate rollback pr debug no semantic commits detected repository renovate rollback pr debug checkonboarding repository renovate rollback pr debug isonboarded repository renovate rollback pr debug findfile renovate json repository renovate rollback pr debug config file exists repository renovate rollback pr filename renovate json debug ensureissueclosing repository renovate rollback pr debug repo is onboarded repository renovate rollback pr debug found renovate json config file repository renovate rollback pr debug repository config repository renovate rollback pr filename renovate json config rollbackprs true debug 
migrateandvalidate repository renovate rollback pr debug no config migration necessary repository renovate rollback pr debug massaged config repository renovate rollback pr config rollbackprs true debug migrated config repository renovate rollback pr config rollbackprs true debug found repo ignorepaths repository renovate rollback pr ignorepaths debug no vulnerability alerts found repository renovate rollback pr debug no basebranches repository renovate rollback pr debug extract repository renovate rollback pr debug setting current branch to main repository renovate rollback pr debug latest commit repository renovate rollback pr branchname main latestcommitdate debug using file match tasks ya ml for manager ansible repository renovate rollback pr debug using file match requirements ya ml for manager ansible galaxy repository renovate rollback pr debug using file match galaxy ya ml for manager ansible galaxy repository renovate rollback pr debug using file match azure pipelines ya ml for manager azure pipelines repository renovate rollback pr debug using file match batect bundle yml for manager batect repository renovate rollback pr debug using file match batect for manager batect wrapper repository renovate rollback pr debug using file match workspace bazel for manager bazel repository renovate rollback pr debug using file match bzl for manager bazel repository renovate rollback pr debug using file match bazelversion for manager bazelisk repository renovate rollback pr debug using file match bitbucket pipelines ya ml for manager bitbucket pipelines repository renovate rollback pr debug using file match buildkite ya ml for manager buildkite repository renovate rollback pr debug using file match buildkite ya ml for manager buildkite repository renovate rollback pr debug using file match gemfile for manager bundler repository renovate rollback pr debug using file match cake for manager cake repository renovate rollback pr debug using file match cargo toml for manager 
cargo repository renovate rollback pr debug using file match circleci config yml for manager circleci repository renovate rollback pr debug using file match cloudbuild ya ml for manager cloudbuild repository renovate rollback pr debug using file match podfile for manager cocoapods repository renovate rollback pr debug using file match composer json for manager composer repository renovate rollback pr debug using file match conanfile txt py for manager conan repository renovate rollback pr debug using file match deps bb edn for manager deps edn repository renovate rollback pr debug using file match docker compose ya ml for manager docker compose repository renovate rollback pr debug using file match dockerfile for manager dockerfile repository renovate rollback pr debug using file match dockerfile for manager dockerfile repository renovate rollback pr debug using file match drone yml for manager droneci repository renovate rollback pr debug using file match fleet ya ml for manager fleet repository renovate rollback pr debug using file match flux system gotk components yaml for manager flux repository renovate rollback pr debug using file match fvm fvm config json for manager fvm repository renovate rollback pr debug using file match gitmodules for manager git submodules repository renovate rollback pr debug using file match workflow templates github workflows ya ml for manager github actions repository renovate rollback pr debug using file match action ya ml for manager github actions repository renovate rollback pr debug using file match gitlab ci yml for manager gitlabci repository renovate rollback pr debug using file match gitlab ci yml for manager gitlabci include repository renovate rollback pr debug using file match go mod for manager gomod repository renovate rollback pr debug using file match gradle kts for manager gradle repository renovate rollback pr debug using file match gradle properties for manager gradle repository renovate rollback pr debug using 
file match gradle toml for manager gradle repository renovate rollback pr debug using file match versions toml for manager gradle repository renovate rollback pr debug using file match gradle wrapper gradle wrapper properties for manager gradle wrapper repository renovate rollback pr debug using file match requirements yaml for manager helm requirements repository renovate rollback pr debug using file match values yaml for manager helm values repository renovate rollback pr debug using file match helmfile yaml for manager helmfile repository renovate rollback pr debug using file match chart yaml for manager repository renovate rollback pr debug using file match bin hermit for manager hermit repository renovate rollback pr debug using file match formula rb for manager homebrew repository renovate rollback pr debug using file match html for manager html repository renovate rollback pr debug using file match plugins txt ya ml for manager jenkins repository renovate rollback pr debug using file match jsonnetfile json for manager jsonnet bundler repository renovate rollback pr debug using file match main kts for manager kotlin script repository renovate rollback pr debug using file match kustomization ya ml for manager kustomize repository renovate rollback pr debug using file match project clj for manager leiningen repository renovate rollback pr debug using file match pom xml for manager maven repository renovate rollback pr debug using file match mvn settings xml for manager maven repository renovate rollback pr debug using file match package js for manager meteor repository renovate rollback pr debug using file match mix exs for manager mix repository renovate rollback pr debug using file match node version for manager nodenv repository renovate rollback pr debug using file match package json for manager npm repository renovate rollback pr debug using file match cs fs vb proj for manager nuget repository renovate rollback pr debug using file match props targets for 
manager nuget repository renovate rollback pr debug using file match dotnet tools json for manager nuget repository renovate rollback pr debug using file match global json for manager nuget repository renovate rollback pr debug using file match nvmrc for manager nvm repository renovate rollback pr debug using file match requirements txt pip for manager pip requirements repository renovate rollback pr debug using file match setup py for manager pip setup repository renovate rollback pr debug using file match pipfile for manager pipenv repository renovate rollback pr debug using file match pyproject toml for manager poetry repository renovate rollback pr debug using file match pre commit config yaml for manager pre commit repository renovate rollback pr debug using file match pubspec ya ml for manager pub repository renovate rollback pr debug using file match puppetfile for manager puppet repository renovate rollback pr debug using file match python version for manager pyenv repository renovate rollback pr debug using file match ruby version for manager ruby version repository renovate rollback pr debug using file match sbt for manager sbt repository renovate rollback pr debug using file match project scala for manager sbt repository renovate rollback pr debug using file match setup cfg for manager setup cfg repository renovate rollback pr debug using file match package swift for manager swift repository renovate rollback pr debug using file match tf for manager terraform repository renovate rollback pr debug using file match terraform version for manager terraform version repository renovate rollback pr debug using file match terragrunt hcl for manager terragrunt repository renovate rollback pr debug using file match terragrunt version for manager terragrunt version repository renovate rollback pr debug using file match travis yml for manager travis repository renovate rollback pr debug using file match vela ya ml for manager velaci repository renovate rollback pr 
debug matched file s for manager poetry pyproject toml repository renovate rollback pr debug found poetry package files repository renovate rollback pr debug found package file s repository renovate rollback pr info dependency extraction complete repository renovate rollback pr basebranch main stats managers poetry filecount depcount total filecount depcount debug missing version has nothing to roll back to repository renovate rollback pr packagefile pyproject toml depname celery currentvalue debug packagefiles add package file saved for branch repository renovate rollback pr basebranch main debug package releases lookups complete repository renovate rollback pr basebranch main debug branchifyupgrades repository renovate rollback pr debug flattened updates found repository renovate rollback pr debug returning branch es repository renovate rollback pr debug config repoisonboarded true repository renovate rollback pr debug packagefiles with updates repository renovate rollback pr basebranch main config poetry packagefile pyproject toml deps depname celery deptype dependencies currentvalue managerdata nestedversion false datasource pypi lockedversion versioning depindex updates warnings topic celery message can t find version matching for celery sourceurl homepage changelogurl extractedconstraints python lockfiles debug processrepo repository renovate rollback pr debug processing branches repository renovate rollback pr debug calculated maximum prs remaining this run repository renovate rollback pr prsremaining debug pullrequests limit repository renovate rollback pr debug calculated maximum branches remaining this run repository renovate rollback pr branchesremaining debug branches limit repository renovate rollback pr info dry run would close dependency dashboard repository renovate rollback pr title dependency dashboard debug removing any stale branches repository renovate rollback pr debug config repoisonboarded true repository renovate rollback pr debug no 
renovate branches found repository renovate rollback pr debug ensureissueclosing repository renovate rollback pr debug packagefiles clear package files deleted repository renovate rollback pr basebranches debug renovate repository pr statistics repository renovate rollback pr stats total open closed merged debug repository result done status onboarded enabled true onboarded true repository renovate rollback pr debug repository timing splits milliseconds repository renovate rollback pr splits init extract lookup onboarding update total debug http statistics repository renovate rollback pr urls get get get hoststats gitlab com requestcount requestavgms queueavgms pypi org requestcount requestavgms queueavgms totalrequests info repository finished repository renovate rollback pr durationms debug renovate exiting have you created a minimal reproduction repository i have linked to a minimal reproduction repository in the bug description
| 0
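The Renovate record above reports "Missing version has nothing to roll back to" for celery 5.2.3 even though that release exists on PyPI. A small, hypothetical Python sketch (not Renovate's actual implementation) illustrates the rollback idea the log refers to: a rollback candidate is the highest released version strictly below the current one, so if the lookup fails to match the current version against the release list, there is "nothing to roll back to" despite older releases being available. Version parsing here is a deliberately naive dotted-integer split, not full PEP 440 handling.

```python
from typing import List, Optional


def parse(version: str) -> tuple:
    """Naive version parser: '5.2.3' -> (5, 2, 3). Not PEP 440-aware."""
    return tuple(int(part) for part in version.split("."))


def find_rollback(current: str, releases: List[str]) -> Optional[str]:
    """Return the highest release strictly below `current`, or None.

    Illustrative only: mirrors the logged behavior where an empty or
    unmatched release list yields no rollback candidate.
    """
    current_parsed = parse(current)
    older = [r for r in releases if parse(r) < current_parsed]
    return max(older, key=parse) if older else None


# Hypothetical neighbouring celery releases for illustration.
releases = ["5.2.1", "5.2.2", "5.2.3", "5.2.4"]
print(find_rollback("5.2.3", releases))  # -> 5.2.2
print(find_rollback("5.2.3", []))        # -> None ("nothing to roll back to")
```

In the bug report, the release lookup apparently returned no usable matches for the pinned `5.2.3`, which is the empty-list branch above: rollback is skipped and the warning "Can't find version matching 5.2.3 for celery" is emitted instead of a rollback PR.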
|
15,569
| 19,703,505,020
|
IssuesEvent
|
2022-01-12 19:08:06
|
googleapis/java-workflow-executions
|
https://api.github.com/repos/googleapis/java-workflow-executions
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'workflow-executions' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'workflow-executions' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname workflow executions invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
15,138
| 18,891,988,806
|
IssuesEvent
|
2021-11-15 14:11:38
|
pystatgen/sgkit
|
https://api.github.com/repos/pystatgen/sgkit
|
closed
|
NumFOCUS onboarding
|
process + tools
|
- [ ] Add the “Powered by NumFOCUS” badge for your website and/or GitHub
- [ ] Add the Fiscal Sponsor Readme Attribution to your project’s GitHub
- [ ] Add the NumFOCUS sponsored project logo (with link to numfocus.org) to your project’s website (under About) and language indicating that you are a sponsored project of NumFOCUS.
- [ ] Link to the donation page.
|
1.0
|
NumFOCUS onboarding - - [ ] Add the “Powered by NumFOCUS” badge for your website and/or GitHub
- [ ] Add the Fiscal Sponsor Readme Attribution to your project’s GitHub
- [ ] Add the NumFOCUS sponsored project logo (with link to numfocus.org) to your project’s website (under About) and language indicating that you are a sponsored project of NumFOCUS.
- [ ] Link to the donation page.
|
process
|
numfocus onboarding add the “powered by numfocus” badge for your website and or github add the fiscal sponsor readme attribution to your project’s github add the numfocus sponsored project logo with link to numfocus org to your project’s website under about and language indicating that you are a sponsored project of numfocus link to the donation page
| 1
|
18,588
| 24,568,189,206
|
IssuesEvent
|
2022-10-13 06:15:31
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Unable to add new maintainer to go-maintainers GitHub team
|
type: support / not a bug (process) untriaged team-Bazel
|
### Description of the bug:
Members of the `bazelbuild/go-maintainers` team are unable to add new maintainers.
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
1. Go to https://github.com/orgs/bazelbuild/teams/go-maintainers/members as a member of the go-maintainers team
2. Click "Add a member"
3. Type in the name of the maintainer you would like to add
4. Receive a message like this
<img width="805" alt="image" src="https://user-images.githubusercontent.com/37206/172758440-0f9449fb-b7ac-447d-8971-65ba2fca8e41.png">
I chose to put `@ghost` as the user for the screenshot for the party's privacy. If this requires someone from the Bazel team to click a button, I'm happy to out of band share their username.
### Which operating system are you running Bazel on?
_No response_
### What is the output of `bazel info release`?
_No response_
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
_No response_
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_
|
1.0
|
Unable to add new maintainer to go-maintainers GitHub team - ### Description of the bug:
Members of the `bazelbuild/go-maintainers` team are unable to add new maintainers.
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
1. Go to https://github.com/orgs/bazelbuild/teams/go-maintainers/members as a member of the go-maintainers team
2. Click "Add a member"
3. Type in the name of the maintainer you would like to add
4. Receive a message like this
<img width="805" alt="image" src="https://user-images.githubusercontent.com/37206/172758440-0f9449fb-b7ac-447d-8971-65ba2fca8e41.png">
I chose to put `@ghost` as the user for the screenshot for the party's privacy. If this requires someone from the Bazel team to click a button, I'm happy to out of band share their username.
### Which operating system are you running Bazel on?
_No response_
### What is the output of `bazel info release`?
_No response_
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
_No response_
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_
|
process
|
unable to add new maintainer to go maintainers github team description of the bug members of the bazelbuild go maintainers team are unable to add new maintainers what s the simplest easiest way to reproduce this bug please provide a minimal example if possible go to as a member of the go maintainers team click add a member type in the name of the maintainer you would like to add receive a message like this img width alt image src i chose to put ghost as the user for the screenshot for the party s privacy if this requires someone from the bazel team to click a button i m happy to out of band share their username which operating system are you running bazel on no response what is the output of bazel info release no response if bazel info release returns development version or non git tell us how you built bazel no response what s the output of git remote get url origin git rev parse master git rev parse head no response have you found anything relevant by searching the web no response any other information logs or outputs that you want to share no response
| 1
|
518,958
| 15,037,997,298
|
IssuesEvent
|
2021-02-02 16:57:20
|
mredwin1/DivisionManagementSystem
|
https://api.github.com/repos/mredwin1/DivisionManagementSystem
|
closed
|
Assignee to be stored as employee_id as opposed to "first_name last_name"
|
Area: Backend Priority: Low Type: Enhancement
|
This will ensure we never run into an issue when employees share the same first and last name, since employee ID is unique.
|
1.0
|
Assignee to be stored as employee_id as opposed to "first_name last_name" - This will ensure we never run into an issue when employees share the same first and last name, since employee ID is unique.
|
non_process
|
assignee to be stored as employee id as opposed to first name last name this will ensure if we never run into an issue when employees share the same first and last name since employee id is unique
| 0
|
128,074
| 18,025,704,210
|
IssuesEvent
|
2021-09-17 04:03:15
|
scriptex/at-the-wall
|
https://api.github.com/repos/scriptex/at-the-wall
|
closed
|
CVE-2021-3795 (Medium) detected in semver-regex-2.0.0.tgz
|
security vulnerability
|
## CVE-2021-3795 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>semver-regex-2.0.0.tgz</b></p></summary>
<p>Regular expression for matching semver versions</p>
<p>Library home page: <a href="https://registry.npmjs.org/semver-regex/-/semver-regex-2.0.0.tgz">https://registry.npmjs.org/semver-regex/-/semver-regex-2.0.0.tgz</a></p>
<p>Path to dependency file: at-the-wall/package.json</p>
<p>Path to vulnerable library: at-the-wall/node_modules/semver-regex/package.json</p>
<p>
Dependency Hierarchy:
- optisize-1.3.0.tgz (Root Library)
- imagemin-pngquant-9.0.1.tgz
- pngquant-bin-6.0.0.tgz
- bin-wrapper-4.1.0.tgz
- bin-version-check-4.0.0.tgz
- bin-version-3.1.0.tgz
- find-versions-3.2.0.tgz
- :x: **semver-regex-2.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
semver-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3795>CVE-2021-3795</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1">https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1</a></p>
<p>Release Date: 2021-09-15</p>
<p>Fix Resolution: semver-regex - 3.1.3,4.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-3795 (Medium) detected in semver-regex-2.0.0.tgz - ## CVE-2021-3795 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>semver-regex-2.0.0.tgz</b></p></summary>
<p>Regular expression for matching semver versions</p>
<p>Library home page: <a href="https://registry.npmjs.org/semver-regex/-/semver-regex-2.0.0.tgz">https://registry.npmjs.org/semver-regex/-/semver-regex-2.0.0.tgz</a></p>
<p>Path to dependency file: at-the-wall/package.json</p>
<p>Path to vulnerable library: at-the-wall/node_modules/semver-regex/package.json</p>
<p>
Dependency Hierarchy:
- optisize-1.3.0.tgz (Root Library)
- imagemin-pngquant-9.0.1.tgz
- pngquant-bin-6.0.0.tgz
- bin-wrapper-4.1.0.tgz
- bin-version-check-4.0.0.tgz
- bin-version-3.1.0.tgz
- find-versions-3.2.0.tgz
- :x: **semver-regex-2.0.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
semver-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3795>CVE-2021-3795</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1">https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1</a></p>
<p>Release Date: 2021-09-15</p>
<p>Fix Resolution: semver-regex - 3.1.3,4.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in semver regex tgz cve medium severity vulnerability vulnerable library semver regex tgz regular expression for matching semver versions library home page a href path to dependency file at the wall package json path to vulnerable library at the wall node modules semver regex package json dependency hierarchy optisize tgz root library imagemin pngquant tgz pngquant bin tgz bin wrapper tgz bin version check tgz bin version tgz find versions tgz x semver regex tgz vulnerable library found in base branch master vulnerability details semver regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution semver regex step up your open source security game with whitesource
| 0
|
29,536
| 5,641,975,557
|
IssuesEvent
|
2017-04-06 20:01:11
|
ExactTarget/fuelux
|
https://api.github.com/repos/ExactTarget/fuelux
|
opened
|
Out of date theme documentation page
|
Documentation
|
Looks like the theme heroku app has been taken down. If it's going to stay down, either the page or the iframe should be removed.

http://getfuelux.com/mctheme/index.html#mctheme
|
1.0
|
Out of date theme documentation page - Looks like the theme heroku app has been taken down. If it's going to stay down, either the page or the iframe should be removed.

http://getfuelux.com/mctheme/index.html#mctheme
|
non_process
|
out of date theme documentation page looks like the theme heroku app has been taken down if it s going to stay down either the page or the iframe should be removed
| 0
|
19,144
| 25,206,229,324
|
IssuesEvent
|
2022-11-13 18:10:51
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Filmic defaults for dt 4.2
|
feature: enhancement scope: UI scope: image processing
|
As of lately (since dt 4.0) there has been quite a bunch of confusion caused by filmic among the userbase. I think these are mainly due to two reasons:
1. Introduction of v6 color science which always preserves the chromaticity angle, even in the "no" preservation mode. It also removed the desaturation curve which might have hidden some issues with color shift to red in e.g. sunset images.
2. Default preservation mode was changed to max RGB. It magnifies issues with clipped highlights and causes color casts where things should be white. I've personally never quite liked the flattening effect on local contrast it causes, and I think others have found it two.
Number 1 is a tougher one to address. There are some possible tweaks:
* Allowing the user to relax the preservation of chromaticity angle, like the slider in sigmoid
* Changing the gamut mapping in the highlights a bit to preserve the original saturation a bit better.
However this would require introduction of a new color science mode. I have been experimenting with some things that would benefit filmic's output, but it would be more likely to introduce those early in the 4.3 dev cycle. I personally think it's a bit late to have them for 4.2.
Number 2 can be easily addressed by changing the default preservation mode to e.g. RGB power which has worked previously pretty well for people as far as I've seen. Defaults are kind of important as they give the first impression of the module, its capabilities and "look".
Also there's https://github.com/darktable-org/darktable/issues/12442 which would be pretty easy to address for now by changing the filmic highlights reconstruction parameters a bit (still need to test a bit if it holds up).
So my proposals for changes of defaults would be:
1. Preservation mode to RGB power
2. Highlight reconstruction threshold to 6 EV
3. Highlight reconstruction transition to 0.25 EV
For 2. and 3. I'd possibly also consider changing the minimum of the transition to 0 and changing the code such that 0 would entirely disable the reconstruction. Currently it is not possible to disable it entirely.
@TurboGit and others what do you think of this?
|
1.0
|
Filmic defaults for dt 4.2 - As of lately (since dt 4.0) there has been quite a bunch of confusion caused by filmic among the userbase. I think these are mainly due to two reasons:
1. Introduction of v6 color science which always preserves the chromaticity angle, even in the "no" preservation mode. It also removed the desaturation curve which might have hidden some issues with color shift to red in e.g. sunset images.
2. Default preservation mode was changed to max RGB. It magnifies issues with clipped highlights and causes color casts where things should be white. I've personally never quite liked the flattening effect on local contrast it causes, and I think others have found it two.
Number 1 is a tougher one to address. There are some possible tweaks:
* Allowing the user to relax the preservation of chromaticity angle, like the slider in sigmoid
* Changing the gamut mapping in the highlights a bit to preserve the original saturation a bit better.
However this would require introduction of a new color science mode. I have been experimenting with some things that would benefit filmic's output, but it would be more likely to introduce those early in the 4.3 dev cycle. I personally think it's a bit late to have them for 4.2.
Number 2 can be easily addressed by changing the default preservation mode to e.g. RGB power which has worked previously pretty well for people as far as I've seen. Defaults are kind of important as they give the first impression of the module, its capabilities and "look".
Also there's https://github.com/darktable-org/darktable/issues/12442 which would be pretty easy to address for now by changing the filmic highlights reconstruction parameters a bit (still need to test a bit if it holds up).
So my proposals for changes of defaults would be:
1. Preservation mode to RGB power
2. Highlight reconstruction threshold to 6 EV
3. Highlight reconstruction transition to 0.25 EV
For 2. and 3. I'd possibly also consider changing the minimum of the transition to 0 and changing the code such that 0 would entirely disable the reconstruction. Currently it is not possible to disable it entirely.
@TurboGit and others what do you think of this?
|
process
|
filmic defaults for dt as of lately since dt there has been quite a bunch of confusion caused by filmic among the userbase i think these are mainly due to two reasons introduction of color science which always preserves the chromaticity angle even in the no preservation mode it also removed the desaturation curve which might have hidden some issues with color shift to red in e g sunset images default preservation mode was changed to max rgb it magnifies issues with clipped highlights and causes color casts where things should be white i ve personally never quite liked the flattening effect on local contrast it causes and i think others have found it two number is a tougher one to address there are some possible tweaks allowing the user to relax the preservation of chromaticity angle like the slider in sigmoid changing the gamut mapping in the highlights a bit to preserve the original saturation a bit better however this would require introduction of a new color science mode i have been experimenting with some things that would benefit filmic s output but it would be more likely to introduce those early in the dev cycle i personally think it s a bit late to have them for number can be easily addressed by changing the default preservation mode to e g rgb power which has worked previously pretty well for people as far as i ve seen defaults are kind of important as they give the first impression of the module its capabilities and look also there s which would be pretty easy to address for now by changing the filmic highlights reconstruction parameters a bit still need to test a bit if it holds up so my proposals for changes of defaults would be preservation mode to rgb power highlight reconstruction threshold to ev highlight reconstruction transition to ev for and i d possibly also consider changing the minimum of the transition to and changing the code such that would entirely disable the reconstruction currently it is not possible to disable it entirely turbogit and others what do you think of this
| 1
|
15,168
| 18,925,353,209
|
IssuesEvent
|
2021-11-17 08:58:07
|
mozilla-mobile/focus-android
|
https://api.github.com/repos/mozilla-mobile/focus-android
|
opened
|
Automation: Uplift translations from main to release branches
|
l10n eng:automation L10N process (FADP-17)
|
We have an automated process for landing translations in `main`. Whenever we branch for an upcoming release the current state of translations will be on that branch and no further updates will reach the release branches. This can be problematic for translations that come in after branching, but also for fixing bugs in translations (see https://github.com/mozilla-mobile/focus-android/issues/5813).
In the Fenix repository we automatically uplift translations from `main` to release branches. This is done by the following GitHub workflow:
https://github.com/mozilla-mobile/fenix/blob/main/.github/workflows/sync-strings.yml
We need a similar process here too. The workflow in Fenix uses two custom actions (`mozilla-mobile/fenix-beta-version`, `mozilla-mobile/sync-strings-action`). We need to check if they work out of the box with Focus too or if we need to adapt them or create Focus specific versions.
|
1.0
|
Automation: Uplift translations from main to release branches - We have an automated process for landing translations in `main`. Whenever we branch for an upcoming release the current state of translations will be on that branch and no further updates will reach the release branches. This can be problematic for translations that come in after branching, but also for fixing bugs in translations (see https://github.com/mozilla-mobile/focus-android/issues/5813).
In the Fenix repository we automatically uplift translations from `main` to release branches. This is done by the following GitHub workflow:
https://github.com/mozilla-mobile/fenix/blob/main/.github/workflows/sync-strings.yml
We need a similar process here too. The workflow in Fenix uses two custom actions (`mozilla-mobile/fenix-beta-version`, `mozilla-mobile/sync-strings-action`). We need to check if they work out of the box with Focus too or if we need to adapt them or create Focus specific versions.
|
process
|
automation uplift translations from main to release branches we have an automated process for landing translations in main whenever we branch for an upcoming release the current state of translations will be on that branch and no further updates will reach the release branches this can be problematic for translations that come in after branching but also for fixing bugs in translations see in the fenix repository we automatically uplift translations from main to release branches this is done by the following github workflow we need a similar process here too the workflow in fenix uses two custom actions mozilla mobile fenix beta version mozilla mobile sync strings action we need to check if they work out of the box with focus too or if we need to adapt them or create focus specific versions
| 1
|
15,716
| 19,849,205,623
|
IssuesEvent
|
2022-01-21 10:21:17
|
ooi-data/RS03AXPS-PC03A-4A-CTDPFA303-streamed-ctdpf_optode_sample
|
https://api.github.com/repos/ooi-data/RS03AXPS-PC03A-4A-CTDPFA303-streamed-ctdpf_optode_sample
|
opened
|
🛑 Processing failed: ValueError
|
process
|
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T10:21:17.038179.
## Details
Flow name: `RS03AXPS-PC03A-4A-CTDPFA303-streamed-ctdpf_optode_sample`
Task name: `processing_task`
Error type: `ValueError`
Error message: cannot reshape array of size 1209600 into shape (2777778,)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2305, in append
return self._write_op(self._append_nosync, data, axis=axis)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2211, in _write_op
return self._synchronized_op(f, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2201, in _synchronized_op
result = f(*args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2341, in _append_nosync
self[append_selection] = data
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1224, in __setitem__
self.set_basic_selection(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1319, in set_basic_selection
return self._set_basic_selection_nd(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1610, in _set_basic_selection_nd
self._set_selection(indexer, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1682, in _set_selection
self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in _chunk_setitems
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in <listcomp>
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1950, in _process_for_setitem
chunk = self._decode_chunk(cdata)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2003, in _decode_chunk
chunk = chunk.reshape(expected_shape or self._chunks, order=self._order)
ValueError: cannot reshape array of size 1209600 into shape (2777778,)
```
</details>
|
1.0
|
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T10:21:17.038179.
## Details
Flow name: `RS03AXPS-PC03A-4A-CTDPFA303-streamed-ctdpf_optode_sample`
Task name: `processing_task`
Error type: `ValueError`
Error message: cannot reshape array of size 1209600 into shape (2777778,)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2305, in append
return self._write_op(self._append_nosync, data, axis=axis)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2211, in _write_op
return self._synchronized_op(f, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2201, in _synchronized_op
result = f(*args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2341, in _append_nosync
self[append_selection] = data
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1224, in __setitem__
self.set_basic_selection(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1319, in set_basic_selection
return self._set_basic_selection_nd(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1610, in _set_basic_selection_nd
self._set_selection(indexer, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1682, in _set_selection
self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in _chunk_setitems
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in <listcomp>
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1950, in _process_for_setitem
chunk = self._decode_chunk(cdata)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2003, in _decode_chunk
chunk = chunk.reshape(expected_shape or self._chunks, order=self._order)
ValueError: cannot reshape array of size 1209600 into shape (2777778,)
```
</details>
|
process
|
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name streamed ctdpf optode sample task name processing task error type valueerror error message cannot reshape array of size into shape traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages zarr core py line in append return self write op self append nosync data axis axis file srv conda envs notebook lib site packages zarr core py line in write op return self synchronized op f args kwargs file srv conda envs notebook lib site packages zarr core py line in synchronized op result f args kwargs file srv conda envs notebook lib site packages zarr core py line in append nosync self data file srv conda envs notebook lib site packages zarr core py line in setitem self set basic selection selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection return self set basic selection nd selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection nd self set selection indexer value fields fields file srv conda envs notebook lib site packages zarr core py line in set selection self chunk setitems lchunk coords lchunk selection chunk values file srv conda envs notebook lib site packages zarr core py line in chunk setitems cdatas self process for setitem key sel val fields fields file srv conda envs notebook lib site packages zarr core py line in cdatas self process for setitem key sel val fields fields file srv conda envs notebook lib site packages zarr core py line in process for setitem chunk self decode chunk cdata file srv conda envs notebook lib site packages zarr core py line in decode chunk chunk chunk reshape expected shape or self chunks order self order valueerror cannot reshape array of size into shape
| 1
|
87,464
| 17,270,839,363
|
IssuesEvent
|
2021-07-22 19:36:29
|
009-Personal-Alexa-like-Speech-Service/009---Personal-Alexa-like-Speech-Service
|
https://api.github.com/repos/009-Personal-Alexa-like-Speech-Service/009---Personal-Alexa-like-Speech-Service
|
closed
|
Define wrong input
|
code
|
e. g. the backround noise it too loud or Hal doesnt understand the spoken command -> implement a sentences like "I didnt understand, please be clear mate"
|
1.0
|
Define wrong input - e. g. the backround noise it too loud or Hal doesnt understand the spoken command -> implement a sentences like "I didnt understand, please be clear mate"
|
non_process
|
define wrong input e g the backround noise it too loud or hal doesnt understand the spoken command implement a sentences like i didnt understand please be clear mate
| 0
|
1,443
| 4,009,116,837
|
IssuesEvent
|
2016-05-13 01:23:21
|
BlesseNtumble/GalaxySpace
|
https://api.github.com/repos/BlesseNtumble/GalaxySpace
|
closed
|
Crash
|
bug in the process of correcting
|
---- Minecraft Crash Report ----
// Oops.
Time: 12.05.16 10:43
Description: Exception in server tick loop
cpw.mods.fml.common.LoaderException: java.lang.NoSuchFieldError: textures
at cpw.mods.fml.common.LoadController.transition(LoadController.java:163)
at cpw.mods.fml.common.Loader.preinitializeMods(Loader.java:559)
at cpw.mods.fml.server.FMLServerHandler.beginServerLoading(FMLServerHandler.java:88)
at cpw.mods.fml.common.FMLCommonHandler.onServerStart(FMLCommonHandler.java:319)
at net.minecraft.server.dedicated.DedicatedServer.func_71197_b(DedicatedServer.java:176)
at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:631)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NoSuchFieldError: textures
at galaxyspace.SolarSystem.planets.mercury.blocks.MercuryBlocks.<init>(MercuryBlocks.java:32)
at galaxyspace.SolarSystem.core.registers.blocks.GSBlocks.initialize(GSBlocks.java:151)
at galaxyspace.GalaxySpace.preInit(GalaxySpace.java:164)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at cpw.mods.fml.common.FMLModContainer.handleModStateEvent(FMLModContainer.java:532)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at com.google.common.eventbus.EventSubscriber.handleEvent(EventSubscriber.java:74)
at com.google.common.eventbus.SynchronizedEventSubscriber.handleEvent(SynchronizedEventSubscriber.java:47)
at com.google.common.eventbus.EventBus.dispatch(EventBus.java:322)
at com.google.common.eventbus.EventBus.dispatchQueuedEvents(EventBus.java:304)
at com.google.common.eventbus.EventBus.post(EventBus.java:275)
at cpw.mods.fml.common.LoadController.sendEventToModContainer(LoadController.java:212)
at cpw.mods.fml.common.LoadController.propogateStateMessage(LoadController.java:190)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at com.google.common.eventbus.EventSubscriber.handleEvent(EventSubscriber.java:74)
at com.google.common.eventbus.SynchronizedEventSubscriber.handleEvent(SynchronizedEventSubscriber.java:47)
at com.google.common.eventbus.EventBus.dispatch(EventBus.java:322)
at com.google.common.eventbus.EventBus.dispatchQueuedEvents(EventBus.java:304)
at com.google.common.eventbus.EventBus.post(EventBus.java:275)
at cpw.mods.fml.common.LoadController.distributeStateMessage(LoadController.java:119)
at cpw.mods.fml.common.Loader.preinitializeMods(Loader.java:556)
... 5 more
A detailed walkthrough of the error, its code path and all known details is as follows:
---------------------------------------------------------------------------------------
-- System Details --
Details:
Minecraft Version: 1.7.10
KCauldron Version: pw.prok:KCauldron:1.7.10-1492.152
Operating System: Windows 10 (amd64) version 10.0
Java Version: 1.8.0_77, Oracle Corporation
Java VM Version: Java HotSpot(TM) 64-Bit Server VM (mixed mode), Oracle Corporation
Memory: 92107232 bytes (87 MB) / 259522560 bytes (247 MB) up to 4260102144 bytes (4062 MB)
JVM Flags: 2 total; -Xincgc -Xmx4G
AABB Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
IntCache: cache: 0, tcache: 0, allocated: 0, tallocated: 0
FML: MCP v9.05 FML v7.10.99.99 Minecraft Forge 10.13.4.1492 37 mods loaded, 37 mods active
States: 'U' = Unloaded 'L' = Loaded 'C' = Constructed 'H' = Pre-initialized 'I' = Initialized 'J' = Post-initialized 'A' = Available 'D' = Disabled 'E' = Errored
UCH mcp{9.05} [Minecraft Coder Pack] (minecraft.jar)
UCH FML{7.10.99.99} [Forge Mod Loader] (spigot.jar)
UCH Forge{10.13.4.1492} [Minecraft Forge] (spigot.jar)
UCH kimagine{0.1} [KImagine] (minecraft.jar)
UCH appliedenergistics2-core{rv2-stable-10} [AppliedEnergistics2 Core] (minecraft.jar)
UCH CodeChickenCore{1.0.6.39} [CodeChicken Core] (minecraft.jar)
UCH Micdoodlecore{} [Micdoodle8 Core] (minecraft.jar)
UCH MobiusCore{1.2.5} [MobiusCore] (minecraft.jar)
UCH NotEnoughItems{1.0.4.100} [Not Enough Items] (NotEnoughItems-1.7.10-1.0.4.100-universal.jar)
UCH appliedenergistics2{rv2-stable-10} [Applied Energistics 2] (appliedenergistics2-rv2-stable-10.jar)
UCH mod_AsyncWorldEdit_Injector{2.1.0} [AsyncWorldEdit Injector] (AsyncWorldEditInjector.jar)
UCH BiblioCraft{1.10.4} [BiblioCraft] (BiblioCraft[v1.10.4][MC1.7.10].jar)
UCH BuildCraft|Core{7.1.16} [BuildCraft] (buildcraft-7.1.16.jar)
UCH BuildCraft|Builders{7.1.16} [BC Builders] (buildcraft-7.1.16.jar)
UCH BuildCraft|Transport{7.1.16} [BC Transport] (buildcraft-7.1.16.jar)
UCH BuildCraft|Energy{7.1.16} [BC Energy] (buildcraft-7.1.16.jar)
UCH BuildCraft|Silicon{7.1.16} [BC Silicon] (buildcraft-7.1.16.jar)
UCH BuildCraft|Robotics{7.1.16} [BC Robotics] (buildcraft-7.1.16.jar)
UCH BuildCraft|Factory{7.1.16} [BC Factory] (buildcraft-7.1.16.jar)
UCH chisel{2.5.1.a790281} [Chisel 2] (Chisel2-2.5.1.a790281.jar)
UCH ChunkPurge{2.1} [Chunk Purge] (ChunkPurge-1.7.10-2.1.jar)
UCH customnpcs{1.7.10d} [CustomNpcs] (CustomNPCs_1.7.10d.jar)
UCH DragonsRadioMod{1.6.3} [Dragon's Radio Mod] (DragonsRadioMod-MC1.7.10-1.6.3.jar)
UCH EventHelper{1.6} [EventHelper] (EventHelper-1.6.jar)
UCH IC2{2.2.820-experimental} [IndustrialCraft 2] (industrialcraft-2-2.2.820-experimental.jar)
UCH GalacticraftCore{3.0.12} [Galacticraft Core] (GalacticraftCore-1.7-3.0.12.460.jar)
UCH GalacticraftMars{3.0.12} [Galacticraft Planets] (Galacticraft-Planets-1.7-3.0.12.460.jar)
UCE GalaxySpace{1.0.9} [GalaxySpace] (GalaxySpace-1.0.9 STABLE.jar)
UCH GraviSuite{1.7.10-2.0.3} [Graviation Suite] (GraviSuite-1.7.10-2.0.3.jar)
UCH IC2LaserFix{2.0} [IC2 Laser Fix] (IC2LaserFix-1.7.10.jar)
UCH IC2NuclearControl{2.3.3a-Exist} [Nuclear Control 2] (IC2NuclearControl-2.3.3a-Exist.jar)
UCH IronChest{6.0.62.742} [Iron Chest] (ironchest-1.7.10-6.0.62.742-universal.jar)
UCH MarketCompanion{2.0.0} [MarketCompanion] (MarketCompanion-1.7.10-2.0.1.jar)
UCH MineTweaker3{3.0.10} [MineTweaker 3] (MineTweaker3-1.7.10-3.0.10B.jar)
UCH MapWriter{2.1.2} [MapWriter] (Opis-1.2.5_1.7.10.jar)
UCH Opis{1.2.5} [Opis] (Opis-1.2.5_1.7.10.jar)
UCH worldedit{6.0-beta-01} [WorldEdit] (worldedit-forge-mc1.7.10-6.0-beta-01.jar)
AE2 Version: stable rv2-stable-10 for Forge 10.13.2.1291
Profiler Position: N/A (disabled)
Is Modded: Definitely; Server brand changed to 'kcauldron,cauldron,craftbukkit,mcpc,fml,forge'
Type: Dedicated Server (map_server.txt)
|
1.0
|
Crash - ---- Minecraft Crash Report ----
// Oops.
Time: 12.05.16 10:43
Description: Exception in server tick loop
cpw.mods.fml.common.LoaderException: java.lang.NoSuchFieldError: textures
at cpw.mods.fml.common.LoadController.transition(LoadController.java:163)
at cpw.mods.fml.common.Loader.preinitializeMods(Loader.java:559)
at cpw.mods.fml.server.FMLServerHandler.beginServerLoading(FMLServerHandler.java:88)
at cpw.mods.fml.common.FMLCommonHandler.onServerStart(FMLCommonHandler.java:319)
at net.minecraft.server.dedicated.DedicatedServer.func_71197_b(DedicatedServer.java:176)
at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:631)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NoSuchFieldError: textures
at galaxyspace.SolarSystem.planets.mercury.blocks.MercuryBlocks.<init>(MercuryBlocks.java:32)
at galaxyspace.SolarSystem.core.registers.blocks.GSBlocks.initialize(GSBlocks.java:151)
at galaxyspace.GalaxySpace.preInit(GalaxySpace.java:164)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at cpw.mods.fml.common.FMLModContainer.handleModStateEvent(FMLModContainer.java:532)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at com.google.common.eventbus.EventSubscriber.handleEvent(EventSubscriber.java:74)
at com.google.common.eventbus.SynchronizedEventSubscriber.handleEvent(SynchronizedEventSubscriber.java:47)
at com.google.common.eventbus.EventBus.dispatch(EventBus.java:322)
at com.google.common.eventbus.EventBus.dispatchQueuedEvents(EventBus.java:304)
at com.google.common.eventbus.EventBus.post(EventBus.java:275)
at cpw.mods.fml.common.LoadController.sendEventToModContainer(LoadController.java:212)
at cpw.mods.fml.common.LoadController.propogateStateMessage(LoadController.java:190)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at com.google.common.eventbus.EventSubscriber.handleEvent(EventSubscriber.java:74)
at com.google.common.eventbus.SynchronizedEventSubscriber.handleEvent(SynchronizedEventSubscriber.java:47)
at com.google.common.eventbus.EventBus.dispatch(EventBus.java:322)
at com.google.common.eventbus.EventBus.dispatchQueuedEvents(EventBus.java:304)
at com.google.common.eventbus.EventBus.post(EventBus.java:275)
at cpw.mods.fml.common.LoadController.distributeStateMessage(LoadController.java:119)
at cpw.mods.fml.common.Loader.preinitializeMods(Loader.java:556)
... 5 more
A detailed walkthrough of the error, its code path and all known details is as follows:
---------------------------------------------------------------------------------------
-- System Details --
Details:
Minecraft Version: 1.7.10
KCauldron Version: pw.prok:KCauldron:1.7.10-1492.152
Operating System: Windows 10 (amd64) version 10.0
Java Version: 1.8.0_77, Oracle Corporation
Java VM Version: Java HotSpot(TM) 64-Bit Server VM (mixed mode), Oracle Corporation
Memory: 92107232 bytes (87 MB) / 259522560 bytes (247 MB) up to 4260102144 bytes (4062 MB)
JVM Flags: 2 total; -Xincgc -Xmx4G
AABB Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
IntCache: cache: 0, tcache: 0, allocated: 0, tallocated: 0
FML: MCP v9.05 FML v7.10.99.99 Minecraft Forge 10.13.4.1492 37 mods loaded, 37 mods active
States: 'U' = Unloaded 'L' = Loaded 'C' = Constructed 'H' = Pre-initialized 'I' = Initialized 'J' = Post-initialized 'A' = Available 'D' = Disabled 'E' = Errored
UCH mcp{9.05} [Minecraft Coder Pack] (minecraft.jar)
UCH FML{7.10.99.99} [Forge Mod Loader] (spigot.jar)
UCH Forge{10.13.4.1492} [Minecraft Forge] (spigot.jar)
UCH kimagine{0.1} [KImagine] (minecraft.jar)
UCH appliedenergistics2-core{rv2-stable-10} [AppliedEnergistics2 Core] (minecraft.jar)
UCH CodeChickenCore{1.0.6.39} [CodeChicken Core] (minecraft.jar)
UCH Micdoodlecore{} [Micdoodle8 Core] (minecraft.jar)
UCH MobiusCore{1.2.5} [MobiusCore] (minecraft.jar)
UCH NotEnoughItems{1.0.4.100} [Not Enough Items] (NotEnoughItems-1.7.10-1.0.4.100-universal.jar)
UCH appliedenergistics2{rv2-stable-10} [Applied Energistics 2] (appliedenergistics2-rv2-stable-10.jar)
UCH mod_AsyncWorldEdit_Injector{2.1.0} [AsyncWorldEdit Injector] (AsyncWorldEditInjector.jar)
UCH BiblioCraft{1.10.4} [BiblioCraft] (BiblioCraft[v1.10.4][MC1.7.10].jar)
UCH BuildCraft|Core{7.1.16} [BuildCraft] (buildcraft-7.1.16.jar)
UCH BuildCraft|Builders{7.1.16} [BC Builders] (buildcraft-7.1.16.jar)
UCH BuildCraft|Transport{7.1.16} [BC Transport] (buildcraft-7.1.16.jar)
UCH BuildCraft|Energy{7.1.16} [BC Energy] (buildcraft-7.1.16.jar)
UCH BuildCraft|Silicon{7.1.16} [BC Silicon] (buildcraft-7.1.16.jar)
UCH BuildCraft|Robotics{7.1.16} [BC Robotics] (buildcraft-7.1.16.jar)
UCH BuildCraft|Factory{7.1.16} [BC Factory] (buildcraft-7.1.16.jar)
UCH chisel{2.5.1.a790281} [Chisel 2] (Chisel2-2.5.1.a790281.jar)
UCH ChunkPurge{2.1} [Chunk Purge] (ChunkPurge-1.7.10-2.1.jar)
UCH customnpcs{1.7.10d} [CustomNpcs] (CustomNPCs_1.7.10d.jar)
UCH DragonsRadioMod{1.6.3} [Dragon's Radio Mod] (DragonsRadioMod-MC1.7.10-1.6.3.jar)
UCH EventHelper{1.6} [EventHelper] (EventHelper-1.6.jar)
UCH IC2{2.2.820-experimental} [IndustrialCraft 2] (industrialcraft-2-2.2.820-experimental.jar)
UCH GalacticraftCore{3.0.12} [Galacticraft Core] (GalacticraftCore-1.7-3.0.12.460.jar)
UCH GalacticraftMars{3.0.12} [Galacticraft Planets] (Galacticraft-Planets-1.7-3.0.12.460.jar)
UCE GalaxySpace{1.0.9} [GalaxySpace] (GalaxySpace-1.0.9 STABLE.jar)
UCH GraviSuite{1.7.10-2.0.3} [Graviation Suite] (GraviSuite-1.7.10-2.0.3.jar)
UCH IC2LaserFix{2.0} [IC2 Laser Fix] (IC2LaserFix-1.7.10.jar)
UCH IC2NuclearControl{2.3.3a-Exist} [Nuclear Control 2] (IC2NuclearControl-2.3.3a-Exist.jar)
UCH IronChest{6.0.62.742} [Iron Chest] (ironchest-1.7.10-6.0.62.742-universal.jar)
UCH MarketCompanion{2.0.0} [MarketCompanion] (MarketCompanion-1.7.10-2.0.1.jar)
UCH MineTweaker3{3.0.10} [MineTweaker 3] (MineTweaker3-1.7.10-3.0.10B.jar)
UCH MapWriter{2.1.2} [MapWriter] (Opis-1.2.5_1.7.10.jar)
UCH Opis{1.2.5} [Opis] (Opis-1.2.5_1.7.10.jar)
UCH worldedit{6.0-beta-01} [WorldEdit] (worldedit-forge-mc1.7.10-6.0-beta-01.jar)
AE2 Version: stable rv2-stable-10 for Forge 10.13.2.1291
Profiler Position: N/A (disabled)
Is Modded: Definitely; Server brand changed to 'kcauldron,cauldron,craftbukkit,mcpc,fml,forge'
Type: Dedicated Server (map_server.txt)
|
process
|
crash minecraft crash report oops time description exception in server tick loop cpw mods fml common loaderexception java lang nosuchfielderror textures at cpw mods fml common loadcontroller transition loadcontroller java at cpw mods fml common loader preinitializemods loader java at cpw mods fml server fmlserverhandler beginserverloading fmlserverhandler java at cpw mods fml common fmlcommonhandler onserverstart fmlcommonhandler java at net minecraft server dedicated dedicatedserver func b dedicatedserver java at net minecraft server minecraftserver run minecraftserver java at java lang thread run unknown source caused by java lang nosuchfielderror textures at galaxyspace solarsystem planets mercury blocks mercuryblocks mercuryblocks java at galaxyspace solarsystem core registers blocks gsblocks initialize gsblocks java at galaxyspace galaxyspace preinit galaxyspace java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke unknown source at java lang reflect method invoke unknown source at cpw mods fml common fmlmodcontainer handlemodstateevent fmlmodcontainer java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke unknown source at java lang reflect method invoke unknown source at com google common eventbus eventsubscriber handleevent eventsubscriber java at com google common eventbus synchronizedeventsubscriber handleevent synchronizedeventsubscriber java at com google common eventbus eventbus dispatch eventbus java at com google common eventbus eventbus dispatchqueuedevents eventbus java at com google common eventbus eventbus post eventbus java at cpw mods fml common loadcontroller sendeventtomodcontainer loadcontroller java at cpw mods fml common loadcontroller propogatestatemessage loadcontroller java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke unknown 
source at sun reflect delegatingmethodaccessorimpl invoke unknown source at java lang reflect method invoke unknown source at com google common eventbus eventsubscriber handleevent eventsubscriber java at com google common eventbus synchronizedeventsubscriber handleevent synchronizedeventsubscriber java at com google common eventbus eventbus dispatch eventbus java at com google common eventbus eventbus dispatchqueuedevents eventbus java at com google common eventbus eventbus post eventbus java at cpw mods fml common loadcontroller distributestatemessage loadcontroller java at cpw mods fml common loader preinitializemods loader java more a detailed walkthrough of the error its code path and all known details is as follows system details details minecraft version kcauldron version pw prok kcauldron operating system windows version java version oracle corporation java vm version java hotspot tm bit server vm mixed mode oracle corporation memory bytes mb bytes mb up to bytes mb jvm flags total xincgc aabb pool size bytes mb allocated bytes mb used intcache cache tcache allocated tallocated fml mcp fml minecraft forge mods loaded mods active states u unloaded l loaded c constructed h pre initialized i initialized j post initialized a available d disabled e errored uch mcp minecraft jar uch fml spigot jar uch forge spigot jar uch kimagine minecraft jar uch core stable minecraft jar uch codechickencore minecraft jar uch micdoodlecore minecraft jar uch mobiuscore minecraft jar uch notenoughitems notenoughitems universal jar uch stable stable jar uch mod asyncworldedit injector asyncworldeditinjector jar uch bibliocraft bibliocraft jar uch buildcraft core buildcraft jar uch buildcraft builders buildcraft jar uch buildcraft transport buildcraft jar uch buildcraft energy buildcraft jar uch buildcraft silicon buildcraft jar uch buildcraft robotics buildcraft jar uch buildcraft factory buildcraft jar uch chisel jar uch chunkpurge chunkpurge jar uch customnpcs customnpcs jar uch 
dragonsradiomod dragonsradiomod jar uch eventhelper eventhelper jar uch experimental industrialcraft experimental jar uch galacticraftcore galacticraftcore jar uch galacticraftmars galacticraft planets jar uce galaxyspace galaxyspace stable jar uch gravisuite gravisuite jar uch jar uch exist exist jar uch ironchest ironchest universal jar uch marketcompanion marketcompanion jar uch jar uch mapwriter opis jar uch opis opis jar uch worldedit beta worldedit forge beta jar version stable stable for forge profiler position n a disabled is modded definitely server brand changed to kcauldron cauldron craftbukkit mcpc fml forge type dedicated server map server txt
| 1
|
15,927
| 20,144,845,078
|
IssuesEvent
|
2022-02-09 05:52:44
|
CMPT756-A5-Org-Patel-Dhruv/MYC756PROJECT
|
https://api.github.com/repos/CMPT756-A5-Org-Patel-Dhruv/MYC756PROJECT
|
opened
|
Update VISA_Balance column with average values
|
preprocessing
|
Write a suitable python code in a jupyter notebook to impute the 0 VISA_Balance column values and with Avg values.

|
1.0
|
Update VISA_Balance column with average values - Write a suitable python code in a jupyter notebook to impute the 0 VISA_Balance column values and with Avg values.

|
process
|
update visa balance column with average values write a suitable python code in a jupyter notebook to impute the visa balance column values and with avg values
| 1
|
41,528
| 21,726,569,758
|
IssuesEvent
|
2022-05-11 08:13:34
|
agda/agda-stdlib
|
https://api.github.com/repos/agda/agda-stdlib
|
closed
|
Disastrous expansion of operations in `Data.Rational.Base`
|
bug performance
|
The operations such as addition and multiplication in `Data.Rational.Base` are defined in terms of the numerator and denominator projection operations, e.g.
https://github.com/agda/agda-stdlib/blob/650e05f3a5ef06cb938c3b91a4fc8b9b8d7c5ef2/src/Data/Rational/Base.agda#L206-L207
This is beautiful and elegant, and arguably the right way to define them. However, it does mean that Agda unfolds these definitions automatically. By the time you have an expression `x + y + z + a + b + c` the unfolded expression is over 200 lines long.
This makes the rational numbers almost impossible to work with in large proofs. In [recent proofs](https://github.com/vehicle-lang/vehicle/blob/dev/examples/windController/agdaProof/AbstractRationals.agda) I've resorted to redefining everything that's needed under the `abstract` keyword.
|
True
|
Disastrous expansion of operations in `Data.Rational.Base` - The operations such as addition and multiplication in `Data.Rational.Base` are defined in terms of the numerator and denominator projection operations, e.g.
https://github.com/agda/agda-stdlib/blob/650e05f3a5ef06cb938c3b91a4fc8b9b8d7c5ef2/src/Data/Rational/Base.agda#L206-L207
This is beautiful and elegant, and arguably the right way to define them. However, it does mean that Agda unfolds these definitions automatically. By the time you have an expression `x + y + z + a + b + c` the unfolded expression is over 200 lines long.
This makes the rational numbers almost impossible to work with in large proofs. In [recent proofs](https://github.com/vehicle-lang/vehicle/blob/dev/examples/windController/agdaProof/AbstractRationals.agda) I've resorted to redefining everything that's needed under the `abstract` keyword.
|
non_process
|
disastrous expansion of operations in data rational base the operations such as addition and multiplication in data rational base are defined in terms of the numerator and denominator projection operations e g this is beautiful and elegant and arguably the right way to define them however it does mean that agda unfolds these definitions automatically by the time you have an expression x y z a b c the unfolded expression is over lines long this makes the rational numbers almost impossible to work with in large proofs in i ve resorted to redefining everything that s needed under the abstract keyword
| 0
|
166,209
| 26,323,689,724
|
IssuesEvent
|
2023-01-10 03:27:21
|
wso2/ballerina-plugin-vscode
|
https://api.github.com/repos/wso2/ballerina-plugin-vscode
|
closed
|
Place services in columns based on depth of interactions in Service Interaction view
|
Type/Improvement Area/ProjectDesignTool
|
Please see the attached image of ballerina gcp demo project in https://github.com/ballerina-guides/gcp-microservices-demo

1. Can we place the service in the column corresponding to depth of the service interaction tree.
2. Can we render the tree from top to down
|
1.0
|
Place services in columns based on depth of interactions in Service Interaction view - Please see the attached image of ballerina gcp demo project in https://github.com/ballerina-guides/gcp-microservices-demo

1. Can we place the service in the column corresponding to depth of the service interaction tree.
2. Can we render the tree from top to down
|
non_process
|
place services in columns based on depth of interactions in service interaction view please see the attached image of ballerina gcp demo project in can we place the service in the column corresponding to depth of the service interaction tree can we render the tree from top to down
| 0
|
163,365
| 12,719,685,659
|
IssuesEvent
|
2020-06-24 09:41:56
|
prestosql/presto
|
https://api.github.com/repos/prestosql/presto
|
opened
|
Automatically cancel old PR builds on pushes
|
enhancement test
|
During the migration from Travis to GitHub Actions we lost functionality
to automatically cancel old PR builds on pushes to the PR's source branch.
Provide equivalent functionality in GHA environment.
|
1.0
|
Automatically cancel old PR builds on pushes - During the migration from Travis to GitHub Actions we lost functionality
to automatically cancel old PR builds on pushes to the PR's source branch.
Provide equivalent functionality in GHA environment.
|
non_process
|
automatically cancel old pr builds on pushes during the migration from travis to github actions we lost functionality to automatically cancel old pr builds on pushes to the pr s source branch provide equivalent functionality in gha environment
| 0
|
326,892
| 24,106,614,738
|
IssuesEvent
|
2022-09-20 08:00:30
|
nnaisense/evotorch
|
https://api.github.com/repos/nnaisense/evotorch
|
closed
|
Simple example scripts
|
documentation
|
Simpler example scripts that demonstrate basic usage without extra dependencies (such as Sacred) will help new users get started.
|
1.0
|
Simple example scripts - Simpler example scripts that demonstrate basic usage without extra dependencies (such as Sacred) will help new users get started.
|
non_process
|
simple example scripts simpler example scripts that demonstrate basic usage without extra dependencies such as sacred will help new users get started
| 0
|
14,081
| 16,961,466,952
|
IssuesEvent
|
2021-06-29 04:55:18
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
Android Build failure
|
Bug Process: Fixed
|
**Describe the bug**
The github repo that provides the `'com.uphyca:stetho_realm:2.3.0'` package, https://github.com/WickeDev/stetho-realm/raw/master/maven-repo, no longer exists. Therefore, the build is failing with this message:
```
"Install Android SDK Platform 29 (revision: 5)" finished.
registerResGeneratingTask is deprecated, use registerGeneratedResFolders(FileCollection)
registerResGeneratingTask is deprecated, use registerGeneratedResFolders(FileCollection)
> Task :app:preBuild UP-TO-DATE
> Task :app:preFdaDebugBuild
> Task :app:preFdaDebugBuild FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:preFdaDebugBuild'.
> Could not resolve all files for configuration ':app:fdaDebugCompileClasspath'.
> Could not find com.uphyca:stetho_realm:2.3.0.
Required by:
project :app
```
Instead, replacing the github repo with https://github.com/uPhyca/stetho-realm and downgrading the version to 2.1.0 resulted in successful compilation
**To Reproduce**
Run `./gradlew test` for example
**Expected behavior**
Successful compilation and passing tests
**Desktop (please complete the following information):**
- Docker image `thyrlian/android-sdk:4.0`
- Codebase tag v2.0.3
**Labels**
Android
|
1.0
|
Android Build failure - **Describe the bug**
The github repo that provides the `'com.uphyca:stetho_realm:2.3.0'` package, https://github.com/WickeDev/stetho-realm/raw/master/maven-repo, no longer exists. Therefore, the build is failing with this message:
```
"Install Android SDK Platform 29 (revision: 5)" finished.
registerResGeneratingTask is deprecated, use registerGeneratedResFolders(FileCollection)
registerResGeneratingTask is deprecated, use registerGeneratedResFolders(FileCollection)
> Task :app:preBuild UP-TO-DATE
> Task :app:preFdaDebugBuild
> Task :app:preFdaDebugBuild FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:preFdaDebugBuild'.
> Could not resolve all files for configuration ':app:fdaDebugCompileClasspath'.
> Could not find com.uphyca:stetho_realm:2.3.0.
Required by:
project :app
```
Instead, replacing the github repo with https://github.com/uPhyca/stetho-realm and downgrading the version to 2.1.0 resulted in successful compilation
**To Reproduce**
Run `./gradlew test` for example
**Expected behavior**
Successful compilation and passing tests
**Desktop (please complete the following information):**
- Docker image `thyrlian/android-sdk:4.0`
- Codebase tag v2.0.3
**Labels**
Android
|
process
|
android build failure describe the bug the github repo that provides the com uphyca stetho realm package no longer exists therefore the build is failing with this message install android sdk platform revision finished registerresgeneratingtask is deprecated use registergeneratedresfolders filecollection registerresgeneratingtask is deprecated use registergeneratedresfolders filecollection task app prebuild up to date task app prefdadebugbuild task app prefdadebugbuild failed failure build failed with an exception what went wrong execution failed for task app prefdadebugbuild could not resolve all files for configuration app fdadebugcompileclasspath could not find com uphyca stetho realm required by project app instead replacing the github repo with and downgrading the version to resulted in successful compilation to reproduce run gradlew test for example expected behavior successful compilation and passing tests desktop please complete the following information docker image thyrlian android sdk codebase tag labels android
| 1
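The fix described in the Android build-failure record above — pointing the Maven repo at `uPhyca/stetho-realm` and downgrading `com.uphyca:stetho_realm` from 2.3.0 to 2.1.0 — amounts to a two-line edit of `build.gradle`. A minimal sketch of that swap; the file path and file contents here are illustrative stand-ins, not taken from the project:

```shell
# Simulate the dependency swap on a throwaway build.gradle (illustrative contents).
tmpfile=$(mktemp)
cat > "$tmpfile" <<'EOF'
maven { url "https://github.com/WickeDev/stetho-realm/raw/master/maven-repo" }
implementation 'com.uphyca:stetho_realm:2.3.0'
EOF

# Swap the dead WickeDev repo for the working uPhyca one, and 2.3.0 -> 2.1.0.
sed -e 's#WickeDev/stetho-realm#uPhyca/stetho-realm#' \
    -e 's#stetho_realm:2.3.0#stetho_realm:2.1.0#' \
    "$tmpfile" > "${tmpfile}.patched"

cat "${tmpfile}.patched"
```

The `#` delimiter in the `sed` expressions avoids escaping the slashes in the repository URL.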
|
614,617
| 19,187,067,173
|
IssuesEvent
|
2021-12-05 11:37:19
|
ElPumpo/TinyNvidiaUpdateChecker
|
https://api.github.com/repos/ElPumpo/TinyNvidiaUpdateChecker
|
opened
|
Redo fetching latest graphics driver from NVIDIA
|
enhancement priority
|
The current code for fetching the latest drivers is not any good because it reports to NVIDIA a fixed GPU ID that doesn't match everyones GPU. This causes issues when there's different drivers available for different hardware.
|
1.0
|
Redo fetching latest graphics driver from NVIDIA - The current code for fetching the latest drivers is not any good because it reports to NVIDIA a fixed GPU ID that doesn't match everyones GPU. This causes issues when there's different drivers available for different hardware.
|
non_process
|
redo fetching latest graphics driver from nvidia the current code for fetching the latest drivers is not any good because it reports to nvidia a fixed gpu id that doesn t match everyones gpu this causes issues when there s different drivers available for different hardware
| 0
|
207,889
| 16,096,868,855
|
IssuesEvent
|
2021-04-27 02:00:43
|
SarayGal/Unac-Virtual-Tracking
|
https://api.github.com/repos/SarayGal/Unac-Virtual-Tracking
|
closed
|
Key stakeholders
|
ESTRATEGIA DE GESTIÓN DE LAS PARTES INTERESADAS documentation
|
This identifies the subset of stakeholders that have been identified as key stakeholders and the reasoning for determining that they are key stakeholders. Key stakeholders are often those who potentially have the most influence over a project or those who may be most affected by it. They may also be stakeholders who resist the change the project represents. These key stakeholders may require more communication and management throughout the project life cycle, and it is important to identify them so that their feedback is sought on their desired level of involvement and communication.
|
1.0
|
Key stakeholders - This identifies the subset of stakeholders that have been identified as key stakeholders and the reasoning for determining that they are key stakeholders. Key stakeholders are often those who potentially have the most influence over a project or those who may be most affected by it. They may also be stakeholders who resist the change the project represents. These key stakeholders may require more communication and management throughout the project life cycle, and it is important to identify them so that their feedback is sought on their desired level of involvement and communication.
|
non_process
|
key stakeholders this identifies the subset of stakeholders that have been identified as key stakeholders and the reasoning for determining that they are key stakeholders key stakeholders are often those who potentially have the most influence over a project or those who may be most affected by the project they may also be stakeholders who resist the change represented by the project these key stakeholders may require more communication and management throughout the project life cycle and it is important to identify them so that their feedback is sought on their desired level of involvement and communication
| 0
|
7,956
| 11,137,565,746
|
IssuesEvent
|
2019-12-20 19:42:51
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
DoS apply: Allow for sorting of work experience
|
Apply Process Requirements Ready State Dept.
|
Who: Applicants
What: Ability to sort work experience on an open opps student application
Why: in order to allow students to highlight which work experience should display first
Acceptance Criteria:
- Add the ability to sort work experience in Open Opps by using arrows
- Work experience will be pulled over from USAJOBS with a USAJOBS assigned sort order
- work experience can be reordered in Open Opps and when saved, the sort in Open Opps should also be saved.
Screen shot of sorting on Work Experience in USAJOBS:

F
|
1.0
|
DoS apply: Allow for sorting of work experience - Who: Applicants
What: Ability to sort work experience on an open opps student application
Why: in order to allow students to highlight which work experience should display first
Acceptance Criteria:
- Add the ability to sort work experience in Open Opps by using arrows
- Work experience will be pulled over from USAJOBS with a USAJOBS assigned sort order
- work experience can be reordered in Open Opps and when saved, the sort in Open Opps should also be saved.
Screen shot of sorting on Work Experience in USAJOBS:

F
|
process
|
dos apply allow for sorting of work experience who applicants what ability to sort work experience on an open opps student application why in order to allow students to highlight which work experience should display first acceptance criteria add the ability to sort work experience in open opps by using arrows work experience will be pulled over from usajobs with a usajobs assigned sort order work experience can be reordered in open opps and when saved the sort in open opps should also be saved screen shot of sorting on work experience in usajobs f
| 1
|
231,567
| 17,693,780,146
|
IssuesEvent
|
2021-08-24 13:15:11
|
ANCPLabOldenburg/ancp-bids
|
https://api.github.com/repos/ANCPLabOldenburg/ancp-bids
|
opened
|
Support browsing of schema graph
|
documentation
|
Pls add functionality that makes it possible for users to browse the BIDS graph defined in the schema
|
1.0
|
Support browsing of schema graph - Pls add functionality that makes it possible for users to browse the BIDS graph defined in the schema
|
non_process
|
support browsing of schema graph pls add functionality that makes it possible for users to browse the bids graph defined in the schema
| 0
|
17,671
| 23,494,973,650
|
IssuesEvent
|
2022-08-17 23:34:00
|
brucemiller/LaTeXML
|
https://api.github.com/repos/brucemiller/LaTeXML
|
closed
|
\tableofcontents within a section only generates TOC for that section if it's split out
|
bug postprocessing minor
|
If you run `latexmlc --splitat=section` on the following file, the desired table of contents contains only the entries for section B, rather than for the entire document. Not a huge bug, but slightly annoying.
```latex
\documentclass{amsart}
\begin{document}
\tableofcontents
\section{A}
\subsection{AA}
\subsection{AB}
\subsection{AC}
\section{B}
\subsection{BA}
\tableofcontents
\end{document}
```
|
1.0
|
\tableofcontents within a section only generates TOC for that section if it's split out - If you run `latexmlc --splitat=section` on the following file, the desired table of contents contains only the entries for section B, rather than for the entire document. Not a huge bug, but slightly annoying.
```latex
\documentclass{amsart}
\begin{document}
\tableofcontents
\section{A}
\subsection{AA}
\subsection{AB}
\subsection{AC}
\section{B}
\subsection{BA}
\tableofcontents
\end{document}
```
|
process
|
tableofcontents within a section only generates toc for that section if it s split out if you run latexmlc splitat section on the following file the desired table of contents contains only the entries for section b rather than for the entire document not a huge bug but slightly annoying latex documentclass amsart begin document tableofcontents section a subsection aa subsection ab subsection ac section b subsection ba tableofcontents end document
| 1
|
130,656
| 18,104,367,035
|
IssuesEvent
|
2021-09-22 17:29:12
|
elementary/wingpanel-indicator-keyboard
|
https://api.github.com/repos/elementary/wingpanel-indicator-keyboard
|
closed
|
Ibus daemon should be started if necessary when ibus input methods are in use
|
Priority: Wishlist Needs Design
|
<!--
* Please read and follow these tips: https://elementary.io/docs/code/reference#proposing-design-changes
* Be sure to search open and closed issues for duplicates
-->
## Problem
<!--Describe the problem that this new feature or idea is meant to address-->
When setting up ibus input methods with the switchboard plug it is necessary to turn the ibus daemon on (and the means to do so is provided). However after rebooting the ibus daemon is not restarted and the configured input method is not available. At the moment it is necessary to use a custom startup command to start ibus.
## Proposal
<!--Describe the new feature or idea that you would like to propose-->
The wingpanel indicator automatically starts the ibus daemon if there are ibus sources configured.
## Prior Art
<!--List any supporting examples of how others have implemented this feature-->
<!--Please be sure to preview your issue before saving. Thanks!-->
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/100081894-ibus-daemon-should-be-started-if-necessary-when-ibus-input-methods-are-in-use?utm_campaign=plugin&utm_content=tracker%2F60236121&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F60236121&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
1.0
|
Ibus daemon should be started if necessary when ibus input methods are in use - <!--
* Please read and follow these tips: https://elementary.io/docs/code/reference#proposing-design-changes
* Be sure to search open and closed issues for duplicates
-->
## Problem
<!--Describe the problem that this new feature or idea is meant to address-->
When setting up ibus input methods with the switchboard plug it is necessary to turn the ibus daemon on (and the means to do so is provided). However after rebooting the ibus daemon is not restarted and the configured input method is not available. At the moment it is necessary to use a custom startup command to start ibus.
## Proposal
<!--Describe the new feature or idea that you would like to propose-->
The wingpanel indicator automatically starts the ibus daemon if there are ibus sources configured.
## Prior Art
<!--List any supporting examples of how others have implemented this feature-->
<!--Please be sure to preview your issue before saving. Thanks!-->
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/100081894-ibus-daemon-should-be-started-if-necessary-when-ibus-input-methods-are-in-use?utm_campaign=plugin&utm_content=tracker%2F60236121&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F60236121&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
non_process
|
ibus daemon should be started if necessary when ibus input methods are in use please read and follow these tips be sure to search open and closed issues for duplicates problem when setting up ibus input methods with the switchboard plug it is necessary to turn the ibus daemon on and the means to do so is provided however after rebooting the ibus daemon is not restarted and the configured input method is not available at the moment it is necessary to use a custom startup command to start ibus proposal the wingpanel indicator automatically starts the ibus daemon if there are ibus sources configured prior art want to back this issue we accept bounties via
| 0
|
6,387
| 9,462,416,236
|
IssuesEvent
|
2019-04-17 15:23:34
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
Consider fixing warnings emitted by ShellCheck
|
type: process
|
[ShellCheck](https://github.com/koalaman/shellcheck) is a static analysis tool for shell scripts. I ran it on this codebase and here's the list of warnings emitted, run through `uniq`. We can decide individually which warnings we care about and which we don't.
[SC2007](https://github.com/koalaman/shellcheck/wiki/SC2007): Use $((..)) instead of deprecated $[..]
[SC2016](https://github.com/koalaman/shellcheck/wiki/SC2016): Expressions don't expand in single quotes, use double quotes for that.
[SC2034](https://github.com/koalaman/shellcheck/wiki/SC2034): attempt appears unused. Verify use (or export if used externally).
[SC2044](https://github.com/koalaman/shellcheck/wiki/SC2044): For loops over find output are fragile. Use find -exec or a while read loop.
[SC2046](https://github.com/koalaman/shellcheck/wiki/SC2046): Quote this to prevent word splitting.
[SC2048](https://github.com/koalaman/shellcheck/wiki/SC2048): Use "$@" (with quotes) to prevent whitespace problems.
[SC2078](https://github.com/koalaman/shellcheck/wiki/SC2078): This expression is constant. Did you forget a $ somewhere?
[SC2086](https://github.com/koalaman/shellcheck/wiki/SC2086): Double quote to prevent globbing and word splitting.
[SC2146](https://github.com/koalaman/shellcheck/wiki/SC2146): This action ignores everything before the -o. Use \( \) to group.
[SC2153](https://github.com/koalaman/shellcheck/wiki/SC2153): Possible misspelling: BUCKET_NAME may not be assigned, but bucket_name is.
[SC2155](https://github.com/koalaman/shellcheck/wiki/SC2155): Declare and assign separately to avoid masking return values.
[SC2156](https://github.com/koalaman/shellcheck/wiki/SC2156): Injecting filenames is fragile and insecure. Use parameters.
[SC2164](https://github.com/koalaman/shellcheck/wiki/SC2164): Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
[SC2166](https://github.com/koalaman/shellcheck/wiki/SC2166): Prefer [ p ] && [ q ] as [ p -a q ] is not well defined.
[SC2181](https://github.com/koalaman/shellcheck/wiki/SC2181): Check exit code directly with e.g. 'if mycmd;', not indirectly with $?.
[SC2196](https://github.com/koalaman/shellcheck/wiki/SC2196): egrep is non-standard and deprecated. Use grep -E instead.
[SC2230](https://github.com/koalaman/shellcheck/wiki/SC2230): which is non-standard. Use builtin 'command -v' instead.
|
1.0
|
Consider fixing warnings emitted by ShellCheck - [ShellCheck](https://github.com/koalaman/shellcheck) is a static analysis tool for shell scripts. I ran it on this codebase and here's the list of warnings emitted run through `uniq`. We can decide individually what warnings we care about and don't.
[SC2007](https://github.com/koalaman/shellcheck/wiki/SC2007): Use $((..)) instead of deprecated $[..]
[SC2016](https://github.com/koalaman/shellcheck/wiki/SC2016): Expressions don't expand in single quotes, use double quotes for that.
[SC2034](https://github.com/koalaman/shellcheck/wiki/SC2034): attempt appears unused. Verify use (or export if used externally).
[SC2044](https://github.com/koalaman/shellcheck/wiki/SC2044): For loops over find output are fragile. Use find -exec or a while read loop.
[SC2046](https://github.com/koalaman/shellcheck/wiki/SC2046): Quote this to prevent word splitting.
[SC2048](https://github.com/koalaman/shellcheck/wiki/SC2048): Use "$@" (with quotes) to prevent whitespace problems.
[SC2078](https://github.com/koalaman/shellcheck/wiki/SC2078): This expression is constant. Did you forget a $ somewhere?
[SC2086](https://github.com/koalaman/shellcheck/wiki/SC2086): Double quote to prevent globbing and word splitting.
[SC2146](https://github.com/koalaman/shellcheck/wiki/SC2146): This action ignores everything before the -o. Use \( \) to group.
[SC2153](https://github.com/koalaman/shellcheck/wiki/SC2153): Possible misspelling: BUCKET_NAME may not be assigned, but bucket_name is.
[SC2155](https://github.com/koalaman/shellcheck/wiki/SC2155): Declare and assign separately to avoid masking return values.
[SC2156](https://github.com/koalaman/shellcheck/wiki/SC2156): Injecting filenames is fragile and insecure. Use parameters.
[SC2164](https://github.com/koalaman/shellcheck/wiki/SC2164): Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
[SC2166](https://github.com/koalaman/shellcheck/wiki/SC2166): Prefer [ p ] && [ q ] as [ p -a q ] is not well defined.
[SC2181](https://github.com/koalaman/shellcheck/wiki/SC2181): Check exit code directly with e.g. 'if mycmd;', not indirectly with $?.
[SC2196](https://github.com/koalaman/shellcheck/wiki/SC2196): egrep is non-standard and deprecated. Use grep -E instead.
[SC2230](https://github.com/koalaman/shellcheck/wiki/SC2230): which is non-standard. Use builtin 'command -v' instead.
|
process
|
consider fixing warnings emitted by shellcheck is a static analysis tool for shell scripts i ran it on this codebase and here s the list of warnings emitted run through uniq we can decide individually what warnings we care about and don t use instead of deprecated expressions don t expand in single quotes use double quotes for that attempt appears unused verify use or export if used externally for loops over find output are fragile use find exec or a while read loop quote this to prevent word splitting use with quotes to prevent whitespace problems this expression is constant did you forget a somewhere double quote to prevent globbing and word splitting this action ignores everything before the o use to group possible misspelling bucket name may not be assigned but bucket name is declare and assign separately to avoid masking return values injecting filenames is fragile and insecure use parameters use cd exit or cd return in case cd fails prefer as is not well defined check exit code directly with e g if mycmd not indirectly with egrep is non standard and deprecated use grep e instead which is non standard use builtin command v instead
| 1
|
15,017
| 18,727,921,663
|
IssuesEvent
|
2021-11-03 18:12:16
|
SAP/spartacus
|
https://api.github.com/repos/SAP/spartacus
|
closed
|
[MASTER] release process optimizations
|
release-activities improvement release-process m2j
|
Optimizations for our release process
- [x] Spartacus installation (via script) on CI
- [x] #12109
- [x] #14146
|
1.0
|
[MASTER] release process optimizations - Optimizations for our release process
- [x] Spartacus installation (via script) on CI
- [x] #12109
- [x] #14146
|
process
|
release process optimizations optimizations for our release process spartacus installation via script on ci
| 1
|
21,182
| 10,580,475,050
|
IssuesEvent
|
2019-10-08 06:51:31
|
ignatandrei/Presentations
|
https://api.github.com/repos/ignatandrei/Presentations
|
opened
|
WS-2019-0291 (High) detected in handlebars-4.1.0.tgz
|
security vulnerability
|
## WS-2019-0291 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.0.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.0.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/Presentations/2019/shorts/AngLibrary_NPMComponent/myTestApp/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/Presentations/2019/shorts/AngLibrary_NPMComponent/myTestApp/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.11.4.tgz (Root Library)
- istanbul-0.4.5.tgz
- :x: **handlebars-4.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ignatandrei/Presentations/commit/fcccd2f53e4df361a7e3de9c076c4cef96471b16">fcccd2f53e4df361a7e3de9c076c4cef96471b16</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
handlebars before 4.3.0 is vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Objects' __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.
<p>Publish Date: 2019-10-06
<p>URL: <a href=https://github.com/wycats/handlebars.js/issues/1558>WS-2019-0291</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p>
<p>Release Date: 2019-10-06</p>
<p>Fix Resolution: 4.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2019-0291 (High) detected in handlebars-4.1.0.tgz - ## WS-2019-0291 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.0.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.0.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/Presentations/2019/shorts/AngLibrary_NPMComponent/myTestApp/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/Presentations/2019/shorts/AngLibrary_NPMComponent/myTestApp/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.11.4.tgz (Root Library)
- istanbul-0.4.5.tgz
- :x: **handlebars-4.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ignatandrei/Presentations/commit/fcccd2f53e4df361a7e3de9c076c4cef96471b16">fcccd2f53e4df361a7e3de9c076c4cef96471b16</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
handlebars before 4.3.0 is vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Objects' __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.
<p>Publish Date: 2019-10-06
<p>URL: <a href=https://github.com/wycats/handlebars.js/issues/1558>WS-2019-0291</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p>
<p>Release Date: 2019-10-06</p>
<p>Fix Resolution: 4.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws high detected in handlebars tgz ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file tmp ws scm presentations shorts anglibrary npmcomponent mytestapp package json path to vulnerable library tmp ws scm presentations shorts anglibrary npmcomponent mytestapp node modules handlebars package json dependency hierarchy build angular tgz root library istanbul tgz x handlebars tgz vulnerable library found in head commit a href vulnerability details handlebars before is vulnerable to prototype pollution leading to remote code execution templates may alter an objects proto and definegetter properties which may allow an attacker to execute arbitrary code through crafted payloads publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
273,227
| 29,820,207,185
|
IssuesEvent
|
2023-06-17 01:08:53
|
Nivaskumark/CVE-2020-0114-frameworks_base11
|
https://api.github.com/repos/Nivaskumark/CVE-2020-0114-frameworks_base11
|
opened
|
CVE-2023-20908 (Medium) detected in baseandroid-10.0.0_r14
|
Mend: dependency security vulnerability
|
## CVE-2023-20908 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>baseandroid-10.0.0_r14</b></p></summary>
<p>
<p>Android framework classes and services</p>
<p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/base>https://android.googlesource.com/platform/frameworks/base</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/CVE-2020-0114-frameworks_base/commit/9400e4a699c996e3a4a42d3d2d718b5d054142fd">9400e4a699c996e3a4a42d3d2d718b5d054142fd</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/SettingsProvider/src/com/android/providers/settings/SettingsState.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In several functions of SettingsState.java, there is a possible system crash loop due to resource exhaustion. This could lead to local denial of service with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-10 Android-11 Android-12 Android-12L Android-13Android ID: A-239415861
<p>Publish Date: 2023-01-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-20908>CVE-2023-20908</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://android.googlesource.com/platform/frameworks/base/+/e144b7802e04841a22afcb5100ac46be5e595d82">https://android.googlesource.com/platform/frameworks/base/+/e144b7802e04841a22afcb5100ac46be5e595d82</a></p>
<p>Release Date: 2022-11-04</p>
<p>Fix Resolution: android-13.0.0_r19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2023-20908 (Medium) detected in baseandroid-10.0.0_r14 - ## CVE-2023-20908 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>baseandroid-10.0.0_r14</b></p></summary>
<p>
<p>Android framework classes and services</p>
<p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/base>https://android.googlesource.com/platform/frameworks/base</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/CVE-2020-0114-frameworks_base/commit/9400e4a699c996e3a4a42d3d2d718b5d054142fd">9400e4a699c996e3a4a42d3d2d718b5d054142fd</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/SettingsProvider/src/com/android/providers/settings/SettingsState.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In several functions of SettingsState.java, there is a possible system crash loop due to resource exhaustion. This could lead to local denial of service with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-10 Android-11 Android-12 Android-12L Android-13Android ID: A-239415861
<p>Publish Date: 2023-01-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-20908>CVE-2023-20908</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://android.googlesource.com/platform/frameworks/base/+/e144b7802e04841a22afcb5100ac46be5e595d82">https://android.googlesource.com/platform/frameworks/base/+/e144b7802e04841a22afcb5100ac46be5e595d82</a></p>
<p>Release Date: 2022-11-04</p>
<p>Fix Resolution: android-13.0.0_r19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in baseandroid cve medium severity vulnerability vulnerable library baseandroid android framework classes and services library home page a href found in head commit a href found in base branch master vulnerable source files settingsprovider src com android providers settings settingsstate java vulnerability details in several functions of settingsstate java there is a possible system crash loop due to resource exhaustion this could lead to local denial of service with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android android android android android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution android step up your open source security game with mend
| 0
|
16,375
| 21,093,280,073
|
IssuesEvent
|
2022-04-04 07:56:56
|
opensafely-core/job-server
|
https://api.github.com/repos/opensafely-core/job-server
|
closed
|
Set up permissions for viewing and editing application forms
|
application-process
|
As an admin user, I want more granular permission level for other admin user, so that we can control who has access to add comments or view application form submissions.
## Permissions
- View and add comments
- View-only
## To do
- [ ] Admin can apply permissions to other admins
- [ ] View and add comments permission
- [ ] View-only permission
- [ ] Apply permissions to existing staff users
- [ ] Only allow "admin" users to view form submissions (currently all staff can view form submissions)
|
1.0
|
Set up permissions for viewing and editing application forms - As an admin user, I want more granular permission level for other admin user, so that we can control who has access to add comments or view application form submissions.
## Permissions
- View and add comments
- View-only
## To do
- [ ] Admin can apply permissions to other admins
- [ ] View and add comments permission
- [ ] View-only permission
- [ ] Apply permissions to existing staff users
- [ ] Only allow "admin" users to view form submissions (currently all staff can view form submissions)
|
process
|
set up permissions for viewing and editing application forms as an admin user i want more granular permission level for other admin user so that we can control who has access to add comments or view application form submissions permissions view and add comments view only to do admin can apply permissions to other admins view and add comments permission view only permission apply permissions to existing staff users only allow admin users to view form submissions currently all staff can view form submissions
| 1
|
10,725
| 13,526,847,467
|
IssuesEvent
|
2020-09-15 14:43:22
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Improve spec serving
|
internal-priority process: tests stage: needs review
|
We can improve how specs are served, utilizing express's built-in etag handling.
|
1.0
|
Improve spec serving - We can improve how specs are served, utilizing express's built-in etag handling.
|
process
|
improve spec serving we can improve how specs are served utilizing express s built in etag handling
| 1
|
92,120
| 11,610,290,728
|
IssuesEvent
|
2020-02-26 02:32:25
|
SHPEUCF/shpeucfapp
|
https://api.github.com/repos/SHPEUCF/shpeucfapp
|
opened
|
Reset Password screen looks terrible on pixel 3a
|
Design bug
|
Text does not read well on background.
height of textbox is not tall enough for the text.
<img width="296" alt="Screen Shot 2020-02-25 at 9 31 21 PM" src="https://user-images.githubusercontent.com/22435531/75306173-3cc09400-5816-11ea-921f-dffed6ad1194.png">
|
1.0
|
Reset Password screen looks terrible on pixel 3a - Text does not read well on background.
height of textbox is not tall enough for the text.
<img width="296" alt="Screen Shot 2020-02-25 at 9 31 21 PM" src="https://user-images.githubusercontent.com/22435531/75306173-3cc09400-5816-11ea-921f-dffed6ad1194.png">
|
non_process
|
reset password screen looks terrible on pixel text does not read well on background height of textbox is not tall enough for the text img width alt screen shot at pm src
| 0
|
12,985
| 2,732,621,639
|
IssuesEvent
|
2015-04-17 08:00:58
|
creativo/softmodii
|
https://api.github.com/repos/creativo/softmodii
|
opened
|
La ruta para descargar el SD Formatter en el documento Softmodii PDF rev10.pdf es incorrecta
|
Prioridad-Media Tipo-Defecto Version-Ejecutable
|
**¿Qué pasos reproducen el problema?**
1\. Ir a la página 6 del manual.
2\. La URL para descargar el formateador oficial para las SD es [http://www.sdcard.org/consumers/formatter_3/SDFormatterv3.0.zip](http://www.sdcard.org/consumers/formatter_3/SDFormatterv3.0.zip)
3\. Al abrir la ruta se obtiene el error Resource not found. La ruta correcta ahora es: [https://www.sdcard.org/downloads/formatter_3/sdfmt3_1.zip](https://www.sdcard.org/downloads/formatter_3/sdfmt3_1.zip)
|
1.0
|
La ruta para descargar el SD Formatter en el documento Softmodii PDF rev10.pdf es incorrecta - **¿Qué pasos reproducen el problema?**
1\. Ir a la página 6 del manual.
2\. La URL para descargar el formateador oficial para las SD es [http://www.sdcard.org/consumers/formatter_3/SDFormatterv3.0.zip](http://www.sdcard.org/consumers/formatter_3/SDFormatterv3.0.zip)
3\. Al abrir la ruta se obtiene el error Resource not found. La ruta correcta ahora es: [https://www.sdcard.org/downloads/formatter_3/sdfmt3_1.zip](https://www.sdcard.org/downloads/formatter_3/sdfmt3_1.zip)
|
non_process
|
la ruta para descargar el sd formatter en el documento softmodii pdf pdf es incorrecta ¿qué pasos reproducen el problema ir a la página del manual la url para descargar el formateador oficial para las sd es al abrir la ruta se obtiene el error resource not found la ruta correcta ahora es
| 0
|
11,575
| 14,442,447,942
|
IssuesEvent
|
2020-12-07 18:10:08
|
googleapis/google-cloud-go
|
https://api.github.com/repos/googleapis/google-cloud-go
|
closed
|
all: investigate using release-please for releases
|
type: process
|
release-please can help automate the release process.
Can/should we use it for Go?
release-please can generate changelogs if the repo follows https://www.conventionalcommits.org/en/v1.0.0/. But, we only include the scope (the affected package), not the type. See https://github.com/golang/go/wiki/CommitMessage.
cc @codyoss @broady @bcoe
|
1.0
|
all: investigate using release-please for releases - release-please can help automate the release process.
Can/should we use it for Go?
release-please can generate changelogs if the repo follows https://www.conventionalcommits.org/en/v1.0.0/. But, we only include the scope (the affected package), not the type. See https://github.com/golang/go/wiki/CommitMessage.
cc @codyoss @broady @bcoe
|
process
|
all investigate using release please for releases release please can help automate the release process can should we use it for go release please can generate changelogs if the repo follows but we only include the scope the affected package not the type see cc codyoss broady bcoe
| 1
|
10,992
| 13,785,994,772
|
IssuesEvent
|
2020-10-09 00:27:18
|
cbrennanpoole/Qualitative-Self
|
https://api.github.com/repos/cbrennanpoole/Qualitative-Self
|
closed
|
Review Assertions | System for Award Management
|
Competitive Advantage Creative Strategy Git God Git Gud Google Cloud Next 2020 Growth-Hack-Attack Growth-Hacking IT Gates Machines Learning Microsoft The State Way abc.xyz good first issue help wanted institutional stigmatization process implementation
|

## Change Wind Will See In this World
1. Register Entity
2. Navigate Abyss
3. Establish EIN
4. Establish DUNS
5. Register at ___- dot Gov
6. Systems Upgrade move to ____ dot Gov
7. Determine NAICS
8. Switch to dot dot dot dot Gov
9. Redundant
10. Non-cross-functional
11. Barriers and Groupthink Diseased
12. Definitively Drain US Swamp
13. Put US Under God Once More
14. Change the novel coronavirus trend
15. Find happier, healthier, whole tax-paying citizens
16. Improve the exponentially detrimental fertility rates
17. Stop printing Currency to purchase Equities
18. Find Sustainability.
19. End recidivistic horse shit
20. Eradicate earth of Big Pharma and Chemo-non-therapy politicking.
21. Ban Lobbyist and Federal Investments in Alcohol and Tobacco until such time as all Substances are not just decriminalized, legalized, and all non-violent offenders receive pardons, and stipends from victimization of a 50+ year failed war that is literally killing US from the inside out.
22. Eradicate the two-party system and replace it with a dynamic, true democracy that alll US citizens can take place in as the time for constituents, legacies, latin gates, and sophisticated social status caste systems is long since dead.
23. End epidemic
24. Stop censorship
25. Bend curves
26. Bridge divides
27. Change color theory
28. Flip Scripts
29. Move Consciousness
30. Wipe Slates
31. Prevent one fell swoop - whoosh - from above...
32. Eradicate IT Gates ...
33. When everyone else is in ... are you in?
Best,
x.________
[with Wind LLC](https://www.pinterest.com/withWindllc "Popular on the categorical associative foldering space aka the 'Pinterest'")
---
**Source URL**:
[https://sam.gov/SAM/pages/secured/entity/assertionReview.jsf](https://sam.gov/SAM/pages/secured/entity/assertionReview.jsf)
<table><tr><td><strong>Browser</strong></td><td>Chrome 85.0.4183.15</td></tr><tr><td><strong>OS</strong></td><td>Windows 10 64-bit</td></tr><tr><td><strong>Screen Size</strong></td><td>2560x1080</td></tr><tr><td><strong>Viewport Size</strong></td><td>2560x937</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@1x</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr></table>
|
1.0
|
Review Assertions | System for Award Management - 
## Change Wind Will See In this World
1. Register Entity
2. Navigate Abyss
3. Establish EIN
4. Establish DUNS
5. Register at ___- dot Gov
6. Systems Upgrade move to ____ dot Gov
7. Determine NAICS
8. Switch to dot dot dot dot Gov
9. Redundant
10. Non-cross-functional
11. Barriers and Groupthink Diseased
12. Definitively Drain US Swamp
13. Put US Under God Once More
14. Change the novel coronavirus trend
15. Find happier, healthier, whole tax-paying citizens
16. Improve the exponentially detrimental fertility rates
17. Stop printing Currency to purchase Equities
18. Find Sustainability.
19. End recidivistic horse shit
20. Eradicate earth of Big Pharma and Chemo-non-therapy politicking.
21. Ban Lobbyist and Federal Investments in Alcohol and Tobacco until such time as all Substances are not just decriminalized, legalized, and all non-violent offenders receive pardons, and stipends from victimization of a 50+ year failed war that is literally killing US from the inside out.
22. Eradicate the two-party system and replace it with a dynamic, true democracy that alll US citizens can take place in as the time for constituents, legacies, latin gates, and sophisticated social status caste systems is long since dead.
23. End epidemic
24. Stop censorship
25. Bend curves
26. Bridge divides
27. Change color theory
28. Flip Scripts
29. Move Consciousness
30. Wipe Slates
31. Prevent one fell swoop - whoosh - from above...
32. Eradicate IT Gates ...
33. When everyone else is in ... are you in?
Best,
x.________
[with Wind LLC](https://www.pinterest.com/withWindllc "Popular on the categorical associative foldering space aka the 'Pinterest'")
---
**Source URL**:
[https://sam.gov/SAM/pages/secured/entity/assertionReview.jsf](https://sam.gov/SAM/pages/secured/entity/assertionReview.jsf)
<table><tr><td><strong>Browser</strong></td><td>Chrome 85.0.4183.15</td></tr><tr><td><strong>OS</strong></td><td>Windows 10 64-bit</td></tr><tr><td><strong>Screen Size</strong></td><td>2560x1080</td></tr><tr><td><strong>Viewport Size</strong></td><td>2560x937</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@1x</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr></table>
|
process
|
review assertions system for award management change wind will see in this world register entity navigate abyss establish ein establish duns register at dot gov systems upgrade move to dot gov determine naics switch to dot dot dot dot gov redundant non cross functional barriers and groupthink diseased definitively drain us swamp put us under god once more change the novel coronavirus trend find happier healthier whole tax paying citizens improve the exponentially detrimental fertility rates stop printing currency to purchase equities find sustainability end recidivistic horse shit eradicate earth of big pharma and chemo non therapy politicking ban lobbyist and federal investments in alcohol and tobacco until such time as all substances are not just decriminalized legalized and all non violent offenders receive pardons and stipends from victimization of a year failed war that is literally killing us from the inside out eradicate the two party system and replace it with a dynamic true democracy that alll us citizens can take place in as the time for constituents legacies latin gates and sophisticated social status caste systems is long since dead end epidemic stop censorship bend curves bridge divides change color theory flip scripts move consciousness wipe slates prevent one fell swoop whoosh from above eradicate it gates when everyone else is in are you in best x popular on the categorical associative foldering space aka the pinterest source url browser chrome os windows bit screen size viewport size pixel ratio zoom level
| 1
|
52,551
| 22,291,189,618
|
IssuesEvent
|
2022-06-12 11:44:11
|
JeongSeonggil/SubMarketWithGit
|
https://api.github.com/repos/JeongSeonggil/SubMarketWithGit
|
closed
|
사용자 구독 생성 시 Kafka Topic
|
user-service item-service order-service kafka
|
## 📌 기능 설명
(item-Service) 사용자가 성공적으로 구독을 진행했을 경우 상품의 수량 감소 ( -1 )
(order-Service) 주문 생성
## 📑 완료 조건
- [x] reduceItemCount
- [x] 주문 생성 Service 호출
|
3.0
|
사용자 구독 생성 시 Kafka Topic - ## 📌 기능 설명
(item-Service) 사용자가 성공적으로 구독을 진행했을 경우 상품의 수량 감소 ( -1 )
(order-Service) 주문 생성
## 📑 완료 조건
- [x] reduceItemCount
- [x] 주문 생성 Service 호출
|
non_process
|
사용자 구독 생성 시 kafka topic 📌 기능 설명 item service 사용자가 성공적으로 구독을 진행했을 경우 상품의 수량 감소 order service 주문 생성 📑 완료 조건 reduceitemcount 주문 생성 service 호출
| 0
|
22,314
| 30,868,165,447
|
IssuesEvent
|
2023-08-03 09:28:29
|
inmanta/pytest-inmanta
|
https://api.github.com/repos/inmanta/pytest-inmanta
|
closed
|
release pytest-inmanta
|
process tiny
|
Main motivation: #401. Give it a week of use in the solutions team to make sure no issues pop up.
|
1.0
|
release pytest-inmanta - Main motivation: #401. Give it a week of use in the solutions team to make sure no issues pop up.
|
process
|
release pytest inmanta main motivation give it a week of use in the solutions team to make sure no issues pop up
| 1
|
49,805
| 6,260,617,429
|
IssuesEvent
|
2017-07-14 21:07:23
|
blockstack/designs
|
https://api.github.com/repos/blockstack/designs
|
closed
|
create sketch/invision craft shared logo asset library
|
design production v3.1.0
|
Create Sketch/InVision Craft shared logo asset library for design team
|
1.0
|
create sketch/invision craft shared logo asset library - Create Sketch/InVision Craft shared logo asset library for design team
|
non_process
|
create sketch invision craft shared logo asset library create sketch invision craft shared logo asset library for design team
| 0
|
13,572
| 16,109,274,497
|
IssuesEvent
|
2021-04-27 18:48:49
|
onivim/oni2
|
https://api.github.com/repos/onivim/oni2
|
closed
|
Add screen recordings for new PR (aka Demo)
|
A-process U-revery enhancement
|
## Domain knowledge
* PRs can update or add new UI features
* PRs can update or add new editor commands, etc.. (editing features)
* once feature implemented it usually coming testing (manual) to check that everything works as expected
## Idea
* Maybe would make screen recording during final test (aka Demo)
(It many tools for any platform that allow make easy and fast screen(part of screen) recording and save it as gif)
* Attach it as GIF to PR
## Benefit
* Users that have early access will know what was added
* They can use it right away and provide some additional testing
* Don't need to look up in code and try to understand what is the feature
|
1.0
|
Add screen recordings for new PR (aka Demo) - ## Domain knowledge
* PRs can update or add new UI features
* PRs can update or add new editor commands, etc.. (editing features)
* once feature implemented it usually coming testing (manual) to check that everything works as expected
## Idea
* Maybe would make screen recording during final test (aka Demo)
(It many tools for any platform that allow make easy and fast screen(part of screen) recording and save it as gif)
* Attach it as GIF to PR
## Benefit
* Users that have early access will know what was added
* They can use it right away and provide some additional testing
* Don't need to look up in code and try to understand what is the feature
|
process
|
add screen recordings for new pr aka demo domain knowledge prs can update or add new ui features prs can update or add new editor commands etc editing features once feature implemented it usually coming testing manual to check that everything works as expected idea maybe would make screen recording during final test aka demo it many tools for any platform that allow make easy and fast screen part of screen recording and save it as gif attach it as gif to pr benefit users that have early access will know what was added they can use it right away and provide some additional testing don t need to look up in code and try to understand what is the feature
| 1
|
370,200
| 25,893,320,858
|
IssuesEvent
|
2022-12-14 19:57:36
|
p2-inc/phasetwo-docs
|
https://api.github.com/repos/p2-inc/phasetwo-docs
|
closed
|
[Docs] Creating a service account to call the API
|
documentation priority
|
add an API section to the Docs
https://github.com/keycloak/keycloak-documentation/blob/main/server_admin/topics/clients/oidc/service-accounts.adoc
https://www.keycloak.org/docs/latest/server_admin/index.html#_service_accounts
add the proper roles to give the service account user access to the resources you want to access.
On the client configuration page, under Service Account Roles, choose the Client Roles for “realm-management” and add the proper roles to you custom client.
Also make sure the mapper “roles” is added to your client scope.
|
1.0
|
[Docs] Creating a service account to call the API - add an API section to the Docs
https://github.com/keycloak/keycloak-documentation/blob/main/server_admin/topics/clients/oidc/service-accounts.adoc
https://www.keycloak.org/docs/latest/server_admin/index.html#_service_accounts
add the proper roles to give the service account user access to the resources you want to access.
On the client configuration page, under Service Account Roles, choose the Client Roles for “realm-management” and add the proper roles to you custom client.
Also make sure the mapper “roles” is added to your client scope.
|
non_process
|
creating a service account to call the api add an api section to the docs add the proper roles to give the service account user access to the resources you want to access on the client configuration page under service account roles choose the client roles for “realm management” and add the proper roles to you custom client also make sure the mapper “roles” is added to your client scope
| 0
|
11,369
| 14,194,246,004
|
IssuesEvent
|
2020-11-15 02:28:54
|
thegooddocsproject/templates
|
https://api.github.com/repos/thegooddocsproject/templates
|
opened
|
Standardise on what we call people
|
good first issue improve process
|
After discussion on the mailing list, we agreed that:
- People who read templates are **readers** (even when they are reading templates in order to write documentation).
- People who use software are **users** (even if they are reading documentation to help them use the software).
This guidance needs to be added to our writing advice, and we should double check that we use these terms correctly throughout the templates.
|
1.0
|
Standardise on what we call people - After discussion on the mailing list, we agreed that:
- People who read templates are **readers** (even when they are reading templates in order to write documentation).
- People who use software are **users** (even if they are reading documentation to help them use the software).
This guidance needs to be added to our writing advice, and we should double check that we use these terms correctly throughout the templates.
|
process
|
standardise on what we call people after discussion on the mailing list we agreed that people who read templates are readers even when they are reading templates in order to write documentation people who use software are users even if they are reading documentation to help them use the software this guidance needs to be added to our writing advice and we should double check that we use these terms correctly throughout the templates
| 1
|
7,153
| 10,299,640,254
|
IssuesEvent
|
2019-08-28 11:02:42
|
heim-rs/heim
|
https://api.github.com/repos/heim-rs/heim
|
closed
|
Properly handle zombie processes for macOS
|
A-process C-bug O-macos
|
All routines which are using `libproc` bindings right now are not handling zombie processes properly. `psutil` sources could be used as a knowledge source of how to do that: https://github.com/giampaolo/psutil/blob/c10df5aa04e1ced58d19501fa42f08c1b909b83d/psutil/_psosx.py#L348-L371
|
1.0
|
Properly handle zombie processes for macOS - All routines which are using `libproc` bindings right now are not handling zombie processes properly. `psutil` sources could be used as a knowledge source of how to do that: https://github.com/giampaolo/psutil/blob/c10df5aa04e1ced58d19501fa42f08c1b909b83d/psutil/_psosx.py#L348-L371
|
process
|
properly handle zombie processes for macos all routines which are using libproc bindings right now are not handling zombie processes properly psutil sources could be used as a knowledge source of how to do that
| 1
|
14,949
| 18,428,651,693
|
IssuesEvent
|
2021-10-14 03:37:20
|
bridgetownrb/bridgetown
|
https://api.github.com/repos/bridgetownrb/bridgetown
|
closed
|
feat: Switch to Rack + Roda and away from Webrick
|
enhancement process
|
As part of the push towards ~~Rails API~~ Ruby backend integration, it's become clear to me we need to remove our dependency on Webrick and retool around Rack (and Puma as the actual server). And not just Rack alone but a routing layer which can sit just above it—a good solution being [Roda](http://roda.jeremyevans.net/index.html).
This would provide a number of benefits:
* The only use case currently for Webrick is local site development/testing. Webrick isn't recommended for any production use. While Bridgetown typically is a build-and-deploy solution (hence the Static Site Generator moniker), there are reasons why running Bridgetown "live" through a web service could be desirable.
* Furthermore, by making Bridgetown essentially a Rack app, it not only makes integration with an API easier, it allows Bridgetown itself to enter the realm of true all-in-one web framework…with the ability to handle a pretty seamless range of SSG & SSR needs.
* Alternatively, if you already have a Rack app/Rails/etc., you could "bolt" Bridgetown on as a sub-path, just like any number of other apps (like how Rails apps can incorporate Sinatra apps such as Sidekiq's admin UI).
My initial preference to start would simply be to swap Webrick out for Rack/Puma and otherwise keep all features and CLI options as much the same as possible. Then we can assess where to go from there.
Feedback most welcome!
|
1.0
|
feat: Switch to Rack + Roda and away from Webrick - As part of the push towards ~~Rails API~~ Ruby backend integration, it's become clear to me we need to remove our dependency on Webrick and retool around Rack (and Puma as the actual server). And not just Rack alone but a routing layer which can sit just above it—a good solution being [Roda](http://roda.jeremyevans.net/index.html).
This would provide a number of benefits:
* The only use case currently for Webrick is local site development/testing. Webrick isn't recommended for any production use. While Bridgetown typically is a build-and-deploy solution (hence the Static Site Generator moniker), there are reasons why running Bridgetown "live" through a web service could be desirable.
* Furthermore, by making Bridgetown essentially a Rack app, it not only makes integration with an API easier, it allows Bridgetown itself to enter the realm of true all-in-one web framework…with the ability to handle a pretty seamless range of SSG & SSR needs.
* Alternatively, if you already have a Rack app/Rails/etc., you could "bolt" Bridgetown on as a sub-path, just like any number of other apps (like how Rails apps can incorporate Sinatra apps such as Sidekiq's admin UI).
My initial preference to start would simply be to swap Webrick out for Rack/Puma and otherwise keep all features and CLI options as much the same as possible. Then we can assess where to go from there.
Feedback most welcome!
|
process
|
feat switch to rack roda and away from webrick as part of the push towards rails api ruby backend integration it s become clear to me we need to remove our dependency on webrick and retool around rack and puma as the actual server and not just rack alone but a routing layer which can sit just above it—a good solution being this would provide a number of benefits the only use case currently for webrick is local site development testing webrick isn t recommended for any production use while bridgetown typically is a build and deploy solution hence the static site generator moniker there are reasons why running bridgetown live through a web service could be desirable furthermore by making bridgetown essentially a rack app it not only makes integration with an api easier it allows bridgetown itself to enter the realm of true all in one web framework…with the ability to handle a pretty seamless range of ssg ssr needs alternatively if you already have a rack app rails etc you could bolt bridgetown on as a sub path just like any number of other apps like how rails apps can incorporate sinatra apps such as sidekiq s admin ui my initial preference to start would simply be to swap webrick out for rack puma and otherwise keep all features and cli options as much the same as possible then we can assess where to go from there feedback most welcome
| 1
|
11,088
| 13,930,014,131
|
IssuesEvent
|
2020-10-22 01:15:25
|
fluent/fluent-bit
|
https://api.github.com/repos/fluent/fluent-bit
|
closed
|
WinLog INPUT: include the StringInserts key-value pairs into the log record
|
work-in-process
|
**Is your feature request related to a problem? Please describe.**
In [this pull request](https://github.com/fluent/fluent-bit/pull/2322), the `StringInserts` [were removed from the resulting log record](https://github.com/fluent/fluent-bit/pull/2322/files#diff-0890d3b5666d8c56708c223ac7bc54a3L267-L269). A formatted `Message` containing the human-readable message [was included instead](https://github.com/fluent/fluent-bit/pull/2322/files#diff-44387dae255041c828cb88280efd027fR401-R406).
This solution is really useful to visualize the resulting message, but it would also be good to include the key-value pairs present in the `StringInserts` as log record attributes.
In FluentBit 1.4.1, a `winlog` contained a `StringInserts` field like the following:
```
[42] winlog.0: [1596733503.081829800, {"RecordNumber"=>43, "TimeGenerated"=>1585850266, "TimeWritten"=>1585850266, "EventID"=>600, ...
"StringInserts"=>["Variable", "Started", " ProviderName=Variable
NewProviderState=Started
SequenceNumber=11
HostName=ConsoleHost
HostVersion=5.1.18362.145
...
RunspaceId=
PipelineId=
CommandName=
CommandType=
ScriptName=
CommandPath=
CommandLine="], "Sid"=>"", "Data"=>""}]
```
It would be really nice if the resulting log record contained all the key-value pairs present in the `StringInserts`, that is, `NewProviderState`, `SequenceNumber`, `HostName`, `HostVersion`, `RunspaceId` **(even if it is empty)**...
**Describe the solution you'd like**
Include all the key-value pairs present in `StringInserts` as log record attributes.
**Additional context**
Even though the `Message` field is very useful as it is human-readable, including all the `StringInserts` key-value pairs as log attributes would enable the user to filter by these registry key values more easily later.
|
1.0
|
WinLog INPUT: include the StringInserts key-value pairs into the log record - **Is your feature request related to a problem? Please describe.**
In [this pull request](https://github.com/fluent/fluent-bit/pull/2322), the `StringInserts` [were removed from the resulting log record](https://github.com/fluent/fluent-bit/pull/2322/files#diff-0890d3b5666d8c56708c223ac7bc54a3L267-L269). A formatted `Message` containing the human-readable message [was included instead](https://github.com/fluent/fluent-bit/pull/2322/files#diff-44387dae255041c828cb88280efd027fR401-R406).
This solution is really useful to visualize the resulting message, but it would also be good to include the key-value pairs present in the `StringInserts` as log record attributes.
In FluentBit 1.4.1, a `winlog` contained a `StringInserts` field like the following:
```
[42] winlog.0: [1596733503.081829800, {"RecordNumber"=>43, "TimeGenerated"=>1585850266, "TimeWritten"=>1585850266, "EventID"=>600, ...
"StringInserts"=>["Variable", "Started", " ProviderName=Variable
NewProviderState=Started
SequenceNumber=11
HostName=ConsoleHost
HostVersion=5.1.18362.145
...
RunspaceId=
PipelineId=
CommandName=
CommandType=
ScriptName=
CommandPath=
CommandLine="], "Sid"=>"", "Data"=>""}]
```
It would be really nice if the resulting log record contained all the key-value pairs present in the `StringInserts`, that is, `NewProviderState`, `SequenceNumber`, `HostName`, `HostVersion`, `RunspaceId` **(even if it is empty)**...
**Describe the solution you'd like**
Include all the key-value pairs present in `StringInserts` as log record attributes.
**Additional context**
Even though the `Message` field is very useful as it is human-readable, including all the `StringInserts` key-value pairs as log attributes would enable the user to filter by these registry key values more easily later.
|
process
|
winlog input include the stringinserts key value pairs into the log record is your feature request related to a problem please describe in the stringinserts a formatted message containing the human readable message this solution is really useful to visualize the resulting message but it would also be good to include the key value pairs present in the stringinserts as log record attributes in fluentbit a winlog contained a stringinserts field like the following winlog recordnumber timegenerated timewritten eventid stringinserts variable started providername variable newproviderstate started sequencenumber hostname consolehost hostversion runspaceid pipelineid commandname commandtype scriptname commandpath commandline sid data it would be really nice if the resulting log record contained all the key value pairs present in the stringinserts that is newproviderstate sequencenumber hostname hostversion runspaceid even if it is empty describe the solution you d like include all the key value pairs present in stringinserts as log record attributes additional context even though the message field is very useful as it is human readable including all the stringinserts key value pairs as log attributes would enable the user to filter by these registry key values more easily later
| 1
|
218,410
| 16,760,765,197
|
IssuesEvent
|
2021-06-13 18:37:30
|
amzn/selling-partner-api-docs
|
https://api.github.com/repos/amzn/selling-partner-api-docs
|
opened
|
[BUG] Order API responses are incomplete
|
bug documentation
|
The Order API responses are incomplete!:
[https://github.com/amzn/selling-partner-api-docs/blob/main/references/orders-api/ordersV0.md](url)
**OrderBuyerInfo**
[https://github.com/amzn/selling-partner-api-docs/blob/main/references/orders-api/ordersV0.md#orderbuyerinfo](url)

**Address**
[https://github.com/amzn/selling-partner-api-docs/blob/main/references/orders-api/ordersV0.md#address](url)

**OrderItem**
[https://github.com/amzn/selling-partner-api-docs/blob/main/references/orders-api/ordersV0.md#orderitem](url)

**OrderItemBuyerInfo**
[https://github.com/amzn/selling-partner-api-docs/blob/main/references/orders-api/ordersV0.md#orderitembuyerinfo](url)

What is happening?
Thanks
|
1.0
|
[BUG] Order API responses are incomplete - The Order API responses are incomplete!:
[https://github.com/amzn/selling-partner-api-docs/blob/main/references/orders-api/ordersV0.md](url)
**OrderBuyerInfo**
[https://github.com/amzn/selling-partner-api-docs/blob/main/references/orders-api/ordersV0.md#orderbuyerinfo](url)

**Address**
[https://github.com/amzn/selling-partner-api-docs/blob/main/references/orders-api/ordersV0.md#address](url)

**OrderItem**
[https://github.com/amzn/selling-partner-api-docs/blob/main/references/orders-api/ordersV0.md#orderitem](url)

**OrderItemBuyerInfo**
[https://github.com/amzn/selling-partner-api-docs/blob/main/references/orders-api/ordersV0.md#orderitembuyerinfo](url)

What is happening?
Thanks
|
non_process
|
order api responses are incomplete the order api responses are incomplete url orderbuyerinfo url address url orderitem url orderitembuyerinfo url what is happening thanks
| 0
|
169,088
| 6,394,565,997
|
IssuesEvent
|
2017-08-04 10:38:45
|
bedita/bedita
|
https://api.github.com/repos/bedita/bedita
|
closed
|
user defined fields list per module
|
Priority - Normal Topic - Administration Topic - UI Type - New Feature
|
In "admin" module (or somewhere else) a place where we define which fields to show in each module's list view.
Each module should expose mandatory fields (like **title**) that must be present, and optional fields that may or may not be listed.
It's an improvement to #408
|
1.0
|
user defined fields list per module - In "admin" module (or somewhere else) a place where we define which fields to show in each module's list view.
Each module should expose mandatory fields (like **title**) that must be present, and optional fields that may or may not be listed.
It's an improvement to #408
|
non_process
|
user defined fields list per module in admin module or somewhere else a place where we define which fields to show in each module s list view each module should expose mandatory fields like title that must be present and optional fields that may or may not be listed it s an improvement to
| 0
|
2,490
| 5,267,125,969
|
IssuesEvent
|
2017-02-04 19:28:09
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
opened
|
[Subtitles] [FR] MÉLENCHON - Discours sur l'abolition de l'esclavage à Champagney
|
Language: French Process: Someone is working on this issue Process: [1] Writing in progress
|
# Video title
MÉLENCHON - Discours sur l'abolition de l'esclavage à Champagney
# URL
https://www.youtube.com/watch?v=jO8TCOMU2i8
# Youtube subtitles language
Français
# Duration
36:27
# Subtitles URL
https://www.youtube.com/timedtext_editor?ref=player&lang=fr&action_mde_edit_form=1&v=jO8TCOMU2i8&ui=hd&tab=captions&bl=vmp&captions-r=1
|
2.0
|
[Subtitles] [FR] MÉLENCHON - Discours sur l'abolition de l'esclavage à Champagney - # Video title
MÉLENCHON - Discours sur l'abolition de l'esclavage à Champagney
# URL
https://www.youtube.com/watch?v=jO8TCOMU2i8
# Youtube subtitles language
Français
# Duration
36:27
# Subtitles URL
https://www.youtube.com/timedtext_editor?ref=player&lang=fr&action_mde_edit_form=1&v=jO8TCOMU2i8&ui=hd&tab=captions&bl=vmp&captions-r=1
|
process
|
mélenchon discours sur l abolition de l esclavage à champagney video title mélenchon discours sur l abolition de l esclavage à champagney url youtube subtitles language français duration subtitles url
| 1
|
201,048
| 7,021,678,415
|
IssuesEvent
|
2017-12-22 06:26:50
|
johndiiorio/bughouse
|
https://api.github.com/repos/johndiiorio/bughouse
|
closed
|
Do not allow a user to directly access /loading
|
bug medium priority
|
Route the user away from /loading if selectedGame is null
|
1.0
|
Do not allow a user to directly access /loading - Route the user away from /loading if selectedGame is null
|
non_process
|
do not allow a user to directly access loading route the user away from loading if selectedgame is null
| 0
|
142,067
| 21,661,166,633
|
IssuesEvent
|
2022-05-06 19:18:49
|
LukasGasp/klosterguide
|
https://api.github.com/repos/LukasGasp/klosterguide
|
closed
|
Button NavVideos
|
Design Low Priority
|
Der Button bei den Navigationsvideos sollte über den Videos sein und beim Klicken mit einer Animation nach unten gleiten
|
1.0
|
Button NavVideos - Der Button bei den Navigationsvideos sollte über den Videos sein und beim Klicken mit einer Animation nach unten gleiten
|
non_process
|
button navvideos der button bei den navigationsvideos sollte über den videos sein und beim klicken mit einer animation nach unten gleiten
| 0
|
9,363
| 12,371,411,652
|
IssuesEvent
|
2020-05-18 18:31:02
|
kubeflow/kubeflow
|
https://api.github.com/repos/kubeflow/kubeflow
|
opened
|
Release demo script for 1.1
|
kind/process priority/p1
|
We aim to release Kubeflow 1.1 at the end of June. With that in mind it would be good to begin defining an appropriate release demo script to help ensure everyone is on the same page and begin qualifying releases.
Some useful links
* http://bit.ly/demo-v1-0 - The demo script for 1.0 this is probably a good place to start for creating a KF 1.1 demo script
* The [KF Roadmap for 1.1](https://github.com/kubeflow/kubeflow/blob/master/ROADMAP.md#kubeflow-11-features-target-release-late-june-2020) - A good reference point to start figuring out what features to highlight in the demo script
Based on those links it looks like there's a couple features that might be worth highlighting in the demo script
* multi-user pipelines (kubeflow/pipelines#1223)
* This is probably the biggest new feature landing in 1.1 so it would be good to highlight this in the demo script
* upgrades (kubeflow/kfctl#304) this is a major ask from customers
* A lot of the refactoring work in the manifests in 1.1 is happening to simplify upgrades
* It would be good to have script that illustrates how upgrades work
* Lineage tracking (kubeflow/website#1959)
* I think this feature already landed in 1.0 but I don't think our release decks our promoting it so it might be worthwhile to highlight in 1.1
|
1.0
|
Release demo script for 1.1 - We aim to release Kubeflow 1.1 at the end of June. With that in mind it would be good to begin defining an appropriate release demo script to help ensure everyone is on the same page and begin qualifying releases.
Some useful links
* http://bit.ly/demo-v1-0 - The demo script for 1.0 this is probably a good place to start for creating a KF 1.1 demo script
* The [KF Roadmap for 1.1](https://github.com/kubeflow/kubeflow/blob/master/ROADMAP.md#kubeflow-11-features-target-release-late-june-2020) - A good reference point to start figuring out what features to highlight in the demo script
Based on those links it looks like there's a couple features that might be worth highlighting in the demo script
* multi-user pipelines (kubeflow/pipelines#1223)
* This is probably the biggest new feature landing in 1.1 so it would be good to highlight this in the demo script
* upgrades (kubeflow/kfctl#304) this is a major ask from customers
* A lot of the refactoring work in the manifests in 1.1 is happening to simplify upgrades
* It would be good to have script that illustrates how upgrades work
* Lineage tracking (kubeflow/website#1959)
* I think this feature already landed in 1.0 but I don't think our release decks our promoting it so it might be worthwhile to highlight in 1.1
|
process
|
release demo script for we aim to release kubeflow at the end of june with that in mind it would be good to begin defining an appropriate release demo script to help ensure everyone is on the same page and begin qualifying releases some useful links the demo script for this is probably a good place to start for creating a kf demo script the a good reference point to start figuring out what features to highlight in the demo script based on those links it looks like there s a couple features that might be worth highlighting in the demo script multi user pipelines kubeflow pipelines this is probably the biggest new feature landing in so it would be good to highlight this in the demo script upgrades kubeflow kfctl this is a major ask from customers a lot of the refactoring work in the manifests in is happening to simplify upgrades it would be good to have script that illustrates how upgrades work lineage tracking kubeflow website i think this feature already landed in but i don t think our release decks our promoting it so it might be worthwhile to highlight in
| 1
|
11,447
| 13,435,335,187
|
IssuesEvent
|
2020-09-07 12:47:23
|
NEZNAMY/TAB
|
https://api.github.com/repos/NEZNAMY/TAB
|
closed
|
[Unlimited nametags] Not compatible with disguise plugins
|
Compatibility Problem Unlimited nametags
|
Disguised player has two nametags, his own + armor stands added by the plugin.
|
True
|
[Unlimited nametags] Not compatible with disguise plugins - Disguised player has two nametags, his own + armor stands added by the plugin.
|
non_process
|
not compatible with disguise plugins disguised player has two nametags his own armor stands added by the plugin
| 0
|
34,997
| 9,529,482,590
|
IssuesEvent
|
2019-04-29 11:26:03
|
siteorigin/so-widgets-bundle
|
https://api.github.com/repos/siteorigin/so-widgets-bundle
|
opened
|
VC: Google Maps widget doesn't function when Ultimate Addons for WPBakery Page Builder is activated
|
bug builder integrations priority-2
|
The Google Maps widget doesn't function correctly when Ultimate Addons for WPBakery Page Builder plugin is activated. The problem persists with all the modules in Ultimate Addons for WPBakery Page Builder deactivated. It isn't possible to search for a location as the map center.
```Uncaught (in promise) TypeError: $ is not a function
at Object.sowbForms.setupLocationFields (location-field.min.js:1)
at sowbAdminGoogleMapInit (location-field.min.js:1)
at maps.googleapis.com/maps/api/js?key=AIzaSyCw49siPnCludejgS-VOGt1sKOPFT6yuuw&libraries=places&callback=sowbAdminGoogleMapInit:131
at maps.googleapis.com/maps/api/js?key=AIzaSyCw49siPnCludejgS-VOGt1sKOPFT6yuuw&libraries=places&callback=sowbAdminGoogleMapInit:131```
|
1.0
|
VC: Google Maps widget doesn't function when Ultimate Addons for WPBakery Page Builder is activated - The Google Maps widget doesn't function correctly when Ultimate Addons for WPBakery Page Builder plugin is activated. The problem persists with all the modules in Ultimate Addons for WPBakery Page Builder deactivated. It isn't possible to search for a location as the map center.
```Uncaught (in promise) TypeError: $ is not a function
at Object.sowbForms.setupLocationFields (location-field.min.js:1)
at sowbAdminGoogleMapInit (location-field.min.js:1)
at maps.googleapis.com/maps/api/js?key=AIzaSyCw49siPnCludejgS-VOGt1sKOPFT6yuuw&libraries=places&callback=sowbAdminGoogleMapInit:131
at maps.googleapis.com/maps/api/js?key=AIzaSyCw49siPnCludejgS-VOGt1sKOPFT6yuuw&libraries=places&callback=sowbAdminGoogleMapInit:131```
|
non_process
|
vc google maps widget doesn t function when ultimate addons for wpbakery page builder is activated the google maps widget doesn t function correctly when ultimate addons for wpbakery page builder plugin is activated the problem persists with all the modules in ultimate addons for wpbakery page builder deactivated it isn t possible to search for a location as the map center uncaught in promise typeerror is not a function at object sowbforms setuplocationfields location field min js at sowbadmingooglemapinit location field min js at maps googleapis com maps api js key libraries places callback sowbadmingooglemapinit at maps googleapis com maps api js key libraries places callback sowbadmingooglemapinit
| 0
|
6,325
| 9,347,364,821
|
IssuesEvent
|
2019-03-31 00:43:03
|
Maximus5/ConEmu
|
https://api.github.com/repos/Maximus5/ConEmu
|
opened
|
Support both V1 and V2 consoles
|
enhancement processes
|
With `-new_console:V1` or `-new_console:V2` it's possible to start conhost in desired mode (Windows 10 of course).
|
1.0
|
Support both V1 and V2 consoles - With `-new_console:V1` or `-new_console:V2` it's possible to start conhost in desired mode (Windows 10 of course).
|
process
|
support both and consoles with new console or new console it s possible to start conhost in desired mode windows of course
| 1
|
13,552
| 16,094,704,400
|
IssuesEvent
|
2021-04-26 21:17:35
|
googleapis/python-dialogflow
|
https://api.github.com/repos/googleapis/python-dialogflow
|
closed
|
Rename package to `google-cloud-dialogflow` to follow naming convention
|
api: dialogflow type: process
|
All other Python cloud packages are named `google-cloud-*`. For consistency, dialogflow should also be renamed to `google-cloud-dialogflow`.
Steps:
1. Publish a last release of `dialogflow` with a note in the README directing users to the new package.
2. Change the package name to `google-cloud-dialogflow`.
3. Publish the first release of `google-cloud-dialogflow`.
CC @nnegrey
|
1.0
|
Rename package to `google-cloud-dialogflow` to follow naming convention - All other Python cloud packages are named `google-cloud-*`. For consistency, dialogflow should also be renamed to `google-cloud-dialogflow`.
Steps:
1. Publish a last release of `dialogflow` with a note in the README directing users to the new package.
2. Change the package name to `google-cloud-dialogflow`.
3. Publish the first release of `google-cloud-dialogflow`.
CC @nnegrey
|
process
|
rename package to google cloud dialogflow to follow naming convention all other python cloud packages are named google cloud for consistency dialogflow should also be renamed to google cloud dialogflow steps publish a last release of dialogflow with a note in the readme directing users to the new package change the package name to google cloud dialogflow publish the first release of google cloud dialogflow cc nnegrey
| 1
|
13,872
| 16,636,686,586
|
IssuesEvent
|
2021-06-04 00:19:15
|
CodeForPhilly/paws-data-pipeline
|
https://api.github.com/repos/CodeForPhilly/paws-data-pipeline
|
opened
|
Threading.Lock not effective across processes
|
Async processes
|
In investigating #348, I looked at `determine_upload_type()` which uses a Threading.Lock to protect a critical section. I added the logging statements seen here:
```
current_app.logger.info("Acquiring lock for " + file.filename + " >>>>>>>>>>>>>>>>>>>>>>>>>>>")
with lock:
current_app.logger.info("Got lock for " + file.filename + " <<<<<<<<<<<<<<<<<<<<<<<<<<<<<")
found_sources += 1
filename = secure_filename(file.filename)
now = time.gmtime()
now_date = time.strftime("%Y-%m-%d--%H-%M-%S", now)
current_app.logger.info(" -File: " + filename + " Matches files type: " + src_type)
df.to_csv(os.path.join(destination_path, src_type + '-' + now_date + '.csv'))
clean_current_folder(destination_path + '/current', src_type)
df.to_csv(os.path.join(destination_path + '/current', src_type + '-' + now_date + '.csv'))
current_app.logger.info(" -Uploaded successfully as : " + src_type + '-' + now_date + '.' + file_extension)
flash(src_type + " {0} ".format(SUCCESS_MSG), 'info')
current_app.logger.info("Releasing lock for " + file.filename + " >>>>>>>>>>>>>>>>>>>>>>>>>>>")
if found_sources == 0:
```
as the lock does not seem to prevent parallel execution from two browser tabs (Date and HH:MM removed, leaving seconds):
```
:12,955] INFO in file_uploader: Acquiring lock for Salesforce_PAWS Donation (all Time) (excel).xlsx >>>>>>>>>>>>>>>>>>>>>>>>>>>
:12,955] INFO in file_uploader: Got lock for Salesforce_PAWS Donation (all Time) (excel).xlsx <<<<<<<<<<<<<<<<<<<<<<<<<<<<<
:12,956] INFO in file_uploader: -File: Salesforce_PAWS_Donation_all_Time_excel.xlsx Matches files type: salesforcedonations
:14,119] INFO in file_uploader: Start uploading file: shelterluv_people.csv
:14,501] INFO in file_uploader: Acquiring lock for shelterluv_people.csv >>>>>>>>>>>>>>>>>>>>>>>>>>>
:14,501] INFO in file_uploader: Got lock for shelterluv_people.csv <<<<<<<<<<<<<<<<<<<<<<<<<<<<<
:15,384] INFO in file_uploader: -Uploaded successfully as : shelterluvpeople-2021-06-03--23-49-14.csv
:15,384] INFO in file_uploader: Releasing lock for shelterluv_people.csv >>>>>>>>>>>>>>>>>>>>>>>>>>>
:20,109] INFO in file_uploader: -Uploaded successfully as : salesforcedonations-2021-06-03--23-49-12.xlsx
:20,110] INFO in file_uploader: Releasing lock for Salesforce_PAWS Donation (all Time) (excel).xlsx >>>>>>>>>>>>>>>>>>>>>>>>>>>
```
I think this is because uWSGI is handling requests on a per-process basis:
```
paws-compose-server | *** uWSGI is running in multiple interpreter mode ***
paws-compose-server | spawned uWSGI master process (pid: 13)
paws-compose-server | spawned uWSGI worker 1 (pid: 22, cores: 4)
paws-compose-server | spawned uWSGI worker 2 (pid: 26, cores: 4)
````
Does it matter? If everything works properly, it may not matter. Is there a danger if both users are uploading the same file?
If we need to synchronize across processes, we could use the db.
|
1.0
|
Threading.Lock not effective across processes - In investigating #348, I looked at `determine_upload_type()` which uses a Threading.Lock to protect a critical section. I added the logging statements seen here:
```
current_app.logger.info("Acquiring lock for " + file.filename + " >>>>>>>>>>>>>>>>>>>>>>>>>>>")
with lock:
current_app.logger.info("Got lock for " + file.filename + " <<<<<<<<<<<<<<<<<<<<<<<<<<<<<")
found_sources += 1
filename = secure_filename(file.filename)
now = time.gmtime()
now_date = time.strftime("%Y-%m-%d--%H-%M-%S", now)
current_app.logger.info(" -File: " + filename + " Matches files type: " + src_type)
df.to_csv(os.path.join(destination_path, src_type + '-' + now_date + '.csv'))
clean_current_folder(destination_path + '/current', src_type)
df.to_csv(os.path.join(destination_path + '/current', src_type + '-' + now_date + '.csv'))
current_app.logger.info(" -Uploaded successfully as : " + src_type + '-' + now_date + '.' + file_extension)
flash(src_type + " {0} ".format(SUCCESS_MSG), 'info')
current_app.logger.info("Releasing lock for " + file.filename + " >>>>>>>>>>>>>>>>>>>>>>>>>>>")
if found_sources == 0:
```
as the lock does not seem to prevent parallel execution from two browser tabs (Date and HH:MM removed, leaving seconds):
```
:12,955] INFO in file_uploader: Acquiring lock for Salesforce_PAWS Donation (all Time) (excel).xlsx >>>>>>>>>>>>>>>>>>>>>>>>>>>
:12,955] INFO in file_uploader: Got lock for Salesforce_PAWS Donation (all Time) (excel).xlsx <<<<<<<<<<<<<<<<<<<<<<<<<<<<<
:12,956] INFO in file_uploader: -File: Salesforce_PAWS_Donation_all_Time_excel.xlsx Matches files type: salesforcedonations
:14,119] INFO in file_uploader: Start uploading file: shelterluv_people.csv
:14,501] INFO in file_uploader: Acquiring lock for shelterluv_people.csv >>>>>>>>>>>>>>>>>>>>>>>>>>>
:14,501] INFO in file_uploader: Got lock for shelterluv_people.csv <<<<<<<<<<<<<<<<<<<<<<<<<<<<<
:15,384] INFO in file_uploader: -Uploaded successfully as : shelterluvpeople-2021-06-03--23-49-14.csv
:15,384] INFO in file_uploader: Releasing lock for shelterluv_people.csv >>>>>>>>>>>>>>>>>>>>>>>>>>>
:20,109] INFO in file_uploader: -Uploaded successfully as : salesforcedonations-2021-06-03--23-49-12.xlsx
:20,110] INFO in file_uploader: Releasing lock for Salesforce_PAWS Donation (all Time) (excel).xlsx >>>>>>>>>>>>>>>>>>>>>>>>>>>
```
I think this is because uWSGI is handling requests on a per-process basis:
```
paws-compose-server | *** uWSGI is running in multiple interpreter mode ***
paws-compose-server | spawned uWSGI master process (pid: 13)
paws-compose-server | spawned uWSGI worker 1 (pid: 22, cores: 4)
paws-compose-server | spawned uWSGI worker 2 (pid: 26, cores: 4)
````
Does it matter? If everything works properly, it may not matter. Is there a danger if both users are uploading the same file?
If we need to synchronize across processes, we could use the db.
|
process
|
threading lock not effective across processes in investigating i looked at determine upload type which uses a threading lock to protect a critical section i added the logging statements seen here current app logger info acquiring lock for file filename with lock current app logger info got lock for file filename found sources filename secure filename file filename now time gmtime now date time strftime y m d h m s now current app logger info file filename matches files type src type df to csv os path join destination path src type now date csv clean current folder destination path current src type df to csv os path join destination path current src type now date csv current app logger info uploaded successfully as src type now date file extension flash src type format success msg info current app logger info releasing lock for file filename if found sources as the lock does not seem to prevent parallel execution from two browser tabs date and hh mm removed leaving seconds info in file uploader acquiring lock for salesforce paws donation all time excel xlsx info in file uploader got lock for salesforce paws donation all time excel xlsx info in file uploader file salesforce paws donation all time excel xlsx matches files type salesforcedonations info in file uploader start uploading file shelterluv people csv info in file uploader acquiring lock for shelterluv people csv info in file uploader got lock for shelterluv people csv info in file uploader uploaded successfully as shelterluvpeople csv info in file uploader releasing lock for shelterluv people csv info in file uploader uploaded successfully as salesforcedonations xlsx info in file uploader releasing lock for salesforce paws donation all time excel xlsx i think this is because uwsgi is handling requests on a per process basis paws compose server uwsgi is running in multiple interpreter mode paws compose server spawned uwsgi master process pid paws compose server spawned uwsgi worker pid cores paws compose server spawned uwsgi worker pid cores does it matter if everything works properly it may not matter is there a danger if both users are uploading the same file if we need to synchronize across processes we could use the db
| 1
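The record above ends by noting that `threading.Lock` cannot serialize work across uWSGI worker processes and that a db (or other OS-level) mechanism would be needed. A minimal sketch of one such alternative, assuming a POSIX host (the `CrossProcessLock` name and lock-file path are hypothetical, not from the original issue):

```python
import fcntl


class CrossProcessLock:
    """Advisory file lock that serializes critical sections across
    uWSGI worker *processes*, unlike threading.Lock, which only
    serializes threads within a single process."""

    def __init__(self, path):
        self.path = path
        self._fh = None

    def __enter__(self):
        # Each process opens the same lock file; flock() blocks until
        # the exclusive lock held by any other process is released.
        self._fh = open(self.path, "w")
        fcntl.flock(self._fh, fcntl.LOCK_EX)
        return self

    def __exit__(self, exc_type, exc, tb):
        fcntl.flock(self._fh, fcntl.LOCK_UN)
        self._fh.close()
        self._fh = None
        return False
```

Usage mirrors the `with lock:` block in the issue's snippet; a row-level lock in the application database would achieve the same effect if the workers do not share a filesystem.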
|
7,498
| 10,584,511,975
|
IssuesEvent
|
2019-10-08 15:35:32
|
code4romania/expert-consultation-api
|
https://api.github.com/repos/code4romania/expert-consultation-api
|
closed
|
[Users] Add comments to allocated units
|
document processing documents java spring users
|
As a user of the platform, I will have document breakdown units allocated to me. I want to be able to add my comments to these units.
We need to create a new entity, Comment, that would hold the information about:
- the article/chapter/document for which the comment is added
- the user adding the comment
- the text of the comment
- date/time information about comments added on the document
|
1.0
|
[Users] Add comments to allocated units - As a user of the platform, I will have document breakdown units allocated to me. I want to be able to add my comments to these units.
We need to create a new entity, Comment, that would hold the information about:
- the article/chapter/document for which the comment is added
- the user adding the comment
- the text of the comment
- date/time information about comments added on the document
|
process
|
add comments to allocated units as a user of the platform i will have document breakdown units allocated to me i want to be able to add my comments to these units we need to create a new entity comment that would hold the information about the article chapter document for which the comment is added the user adding the comment the text of the comment date time information about comments added on the document
| 1
|
393
| 2,842,096,796
|
IssuesEvent
|
2015-05-28 07:01:16
|
ChelseaStats/issues
|
https://api.github.com/repos/ChelseaStats/issues
|
closed
|
ChelseaFC May 26 2015 at 11:07PM
|
to process tweet ★ priority-medium
|
<blockquote class="twitter-tweet">
<p lang="en" dir="ltr" xml:lang="en">Our Special Recognition award goes to Lord Attenborough, and his son Michael will collect it on his behalf. <a href="https://twitter.com/hashtag/CFCPOTY?src=hash">#CFCPOTY</a> <a href="http://u.thechels.uk/1IZaGY0">http://pic.twitter.com/j4g7QFNdoP</a></p>
— Chelsea FC (@ChelseaFC) <a href="https://twitter.com/ChelseaFC/status/603321551379902464">May 26, 2015</a>
</blockquote>
<br><br>
May 26, 2015 at 11:07PM<br>
via Twitter<br><hr><br><br>
http://twitter.com/ChelseaFC/status/603321551379902464
|
1.0
|
ChelseaFC May 26 2015 at 11:07PM - <blockquote class="twitter-tweet">
<p lang="en" dir="ltr" xml:lang="en">Our Special Recognition award goes to Lord Attenborough, and his son Michael will collect it on his behalf. <a href="https://twitter.com/hashtag/CFCPOTY?src=hash">#CFCPOTY</a> <a href="http://u.thechels.uk/1IZaGY0">http://pic.twitter.com/j4g7QFNdoP</a></p>
— Chelsea FC (@ChelseaFC) <a href="https://twitter.com/ChelseaFC/status/603321551379902464">May 26, 2015</a>
</blockquote>
<br><br>
May 26, 2015 at 11:07PM<br>
via Twitter<br><hr><br><br>
http://twitter.com/ChelseaFC/status/603321551379902464
|
process
|
chelseafc may at our special recognition award goes to lord attenborough and his son michael will collect it on his behalf a href a href mdash chelsea fc chelseafc may at via twitter
| 1
|
12,108
| 14,740,441,951
|
IssuesEvent
|
2021-01-07 09:05:42
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
CC and CCOF gateways
|
anc-process anp-not prioritized ant-bug
|
In GitLab by @kdjstudios on Nov 9, 2018, 10:06
Hello Team,
While updating all the sites to use the Authorize Gateway. It occurred to me that something is not working as I would expect it to. I will explain below and hopefully we can determine if this is the true functionality that needs enhancement to work effectively; or if something is broken and needs to be fixed.
- When selecting sites for a gateway in the admin section it will only allow for a site to be associated with one Gateway. Which I find should not be the case, we should be able to specify multiple gateways for each site and then at the site level select which gateway.
- Currently you can at the site level select a gateway from the drop down list. So it would seem we have an issue with the admin gateway sections not allowing for sites to have multiple gateways associated with them.
- On the other hand, if we do only want to have one gateway associated per site. Then we should not allow for a drop down in the site edit page. It should simply display which gateway is setup.
- This I do not feel is correct, as then the software becomes limited and requires an admin to perform updates, rather then the site/operations to do that as they see fit.
- If this is correct, then we should not allow for selecting an already selected site on the gateway pages. It should need to be removed/unassigned from its' current gateway, to allow it to be assigned to another. (Otherwise as what I did today with Winnipeg; we can easily overwrite a sites gateway without knowing it. Luckily, it did not clear the API info at the site level, so I could just switch it back.)
|
1.0
|
CC and CCOF gateways - In GitLab by @kdjstudios on Nov 9, 2018, 10:06
Hello Team,
While updating all the sites to use the Authorize Gateway. It occurred to me that something is not working as I would expect it to. I will explain below and hopefully we can determine if this is the true functionality that needs enhancement to work effectively; or if something is broken and needs to be fixed.
- When selecting sites for a gateway in the admin section it will only allow for a site to be associated with one Gateway. Which I find should not be the case, we should be able to specify multiple gateways for each site and then at the site level select which gateway.
- Currently you can at the site level select a gateway from the drop down list. So it would seem we have an issue with the admin gateway sections not allowing for sites to have multiple gateways associated with them.
- On the other hand, if we do only want to have one gateway associated per site. Then we should not allow for a drop down in the site edit page. It should simply display which gateway is setup.
- This I do not feel is correct, as then the software becomes limited and requires an admin to perform updates, rather then the site/operations to do that as they see fit.
- If this is correct, then we should not allow for selecting an already selected site on the gateway pages. It should need to be removed/unassigned from its' current gateway, to allow it to be assigned to another. (Otherwise as what I did today with Winnipeg; we can easily overwrite a sites gateway without knowing it. Luckily, it did not clear the API info at the site level, so I could just switch it back.)
|
process
|
cc and ccof gateways in gitlab by kdjstudios on nov hello team while updating all the sites to use the authorize gateway it occurred to me that something is not working as i would expect it to i will explain below and hopefully we can determine if this is the true functionality that needs enhancement to work effectively or if something is broken and needs to be fixed when selecting sites for a gateway in the admin section it will only allow for a site to be associated with one gateway which i find should not be the case we should be able to specify multiple gateways for each site and then at the site level select which gateway currently you can at the site level select a gateway from the drop down list so it would seem we have an issue with the admin gateway sections not allowing for sites to have multiple gateways associated with them on the other hand if we do only want to have one gateway associated per site then we should not allow for a drop down in the site edit page it should simply display which gateway is setup this i do not feel is correct as then the software becomes limited and requires an admin to perform updates rather then the site operations to do that as they see fit if this is correct then we should not allow for selecting an already selected site on the gateway pages it should need to be removed unassigned from its current gateway to allow it to be assigned to another otherwise as what i did today with winnipeg we can easily overwrite a sites gateway without knowing it luckily it did not clear the api info at the site level so i could just switch it back
| 1
|
159,447
| 24,993,538,869
|
IssuesEvent
|
2022-11-02 21:06:48
|
bcgov/cleanbc
|
https://api.github.com/repos/bcgov/cleanbc
|
closed
|
Ownership of design (figma) files
|
CleanBC Design
|
**When [Situation]?**
we know more about licensing and privacy agreements
**I want to [Motivation]**
share and distribute ownership to product teams
**So I can [Expected Outcome]**
- document and distribute ownership
- be able to share the flies easily to ministries
|
1.0
|
Ownership of design (figma) files - **When [Situation]?**
we know more about licensing and privacy agreements
**I want to [Motivation]**
share and distribute ownership to product teams
**So I can [Expected Outcome]**
- document and distribute ownership
- be able to share the flies easily to ministries
|
non_process
|
ownership of design figma files when we know more about licensing and privacy agreements i want to share and distribute ownership to product teams so i can document and distribute ownership be able to share the flies easily to ministries
| 0
|
4,829
| 5,290,960,733
|
IssuesEvent
|
2017-02-08 21:15:59
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Unable to rebuildandtest a test project after build.cmd
|
area-Infrastructure dev-eng
|
From the root of my repo, I do:
```
git clean -xdf
git fetch --all
git checkout upstream/dev/eng
build.cmd
cd src\System.Linq\tests
msbuild /t:rebuildandtest
```
Everything works fine until that last step. When I try to do the msbuild /t:rebuildandtest, it fails while trying to start running the tests. I'm forced to do a build-tests.cmd first and then this works.
cc: @weshaggard, @karajas
|
1.0
|
Unable to rebuildandtest a test project after build.cmd - From the root of my repo, I do:
```
git clean -xdf
git fetch --all
git checkout upstream/dev/eng
build.cmd
cd src\System.Linq\tests
msbuild /t:rebuildandtest
```
Everything works fine until that last step. When I try to do the msbuild /t:rebuildandtest, it fails while trying to start running the tests. I'm forced to do a build-tests.cmd first and then this works.
cc: @weshaggard, @karajas
|
non_process
|
unable to rebuildandtest a test project after build cmd from the root of my repo i do git clean xdf git fetch all git checkout upstream dev eng build cmd cd src system linq tests msbuild t rebuildandtest everything works fine until that last step when i try to do the msbuild t rebuildandtest it fails while trying to start running the tests i m forced to do a build tests cmd first and then this works cc weshaggard karajas
| 0
|
3,373
| 6,500,467,434
|
IssuesEvent
|
2017-08-23 04:37:52
|
gaocegege/Processing.R
|
https://api.github.com/repos/gaocegege/Processing.R
|
closed
|
When there are exceptions, fail to close the applets
|
community/processing difficulty/high priority/p2 size/no-idea status/to-be-claimed type/bug
|
Python mode has the same bug
ref https://vimeo.com/224415544

@jdf
|
1.0
|
When there are exceptions, fail to close the applets - Python mode has the same bug
ref https://vimeo.com/224415544

@jdf
|
process
|
when there are exceptions fail to close the applets python mode has the same bug ref jdf
| 1
|
33,055
| 12,165,858,861
|
IssuesEvent
|
2020-04-27 08:17:40
|
Baneeishaque/ask-med-pharma_website
|
https://api.github.com/repos/Baneeishaque/ask-med-pharma_website
|
opened
|
CVE-2018-11695 (High) detected in node-sass-4.12.0.tgz, node-sass-0bd48bbad6fccb0da16d3bdf76ad541f5f45ec70
|
security vulnerability
|
## CVE-2018-11695 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.12.0.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.12.0.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.12.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.12.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/ask-med-pharma_website/wp-content/themes/twentynineteen/package.json</p>
<p>Path to vulnerable library: /ask-med-pharma_website/wp-content/themes/twentynineteen/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.12.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Baneeishaque/ask-med-pharma_website/commit/c7f5a051704dd823a801e2402d6a6ddf574962a2">c7f5a051704dd823a801e2402d6a6ddf574962a2</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.2. A NULL pointer dereference was found in the function Sass::Expand::operator which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11695>CVE-2018-11695</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11695">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11695</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-11695 (High) detected in node-sass-4.12.0.tgz, node-sass-0bd48bbad6fccb0da16d3bdf76ad541f5f45ec70 - ## CVE-2018-11695 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.12.0.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.12.0.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.12.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.12.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/ask-med-pharma_website/wp-content/themes/twentynineteen/package.json</p>
<p>Path to vulnerable library: /ask-med-pharma_website/wp-content/themes/twentynineteen/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.12.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Baneeishaque/ask-med-pharma_website/commit/c7f5a051704dd823a801e2402d6a6ddf574962a2">c7f5a051704dd823a801e2402d6a6ddf574962a2</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.2. A NULL pointer dereference was found in the function Sass::Expand::operator which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11695>CVE-2018-11695</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11695">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11695</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in node sass tgz node sass cve high severity vulnerability vulnerable libraries node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file tmp ws scm ask med pharma website wp content themes twentynineteen package json path to vulnerable library ask med pharma website wp content themes twentynineteen node modules node sass package json dependency hierarchy x node sass tgz vulnerable library found in head commit a href vulnerability details an issue was discovered in libsass through a null pointer dereference was found in the function sass expand operator which could be leveraged by an attacker to cause a denial of service application crash or possibly have unspecified other impact publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource
| 0
|
2,765
| 5,697,059,342
|
IssuesEvent
|
2017-04-16 18:03:31
|
alexrj/Slic3r
|
https://api.github.com/repos/alexrj/Slic3r
|
opened
|
Background processing is not triggered if config and model are loaded from CLI
|
Background Processing
|
1857bf63910a7ade053390157fbd1bb9533629a0
OS: Debian Linux
Using `--gui --load config.ini file.stl` with background processing enabled does not trigger background processing.
|
1.0
|
Background processing is not triggered if config and model are loaded from CLI - 1857bf63910a7ade053390157fbd1bb9533629a0
OS: Debian Linux
Using `--gui --load config.ini file.stl` with background processing enabled does not trigger background processing.
|
process
|
background processing is not triggered if config and model are loaded from cli os debian linux using gui load config ini file stl with background processing enabled does not trigger background processing
| 1
|
16,811
| 22,060,908,108
|
IssuesEvent
|
2022-05-30 17:40:54
|
bitPogo/kmock
|
https://api.github.com/repos/bitPogo/kmock
|
closed
|
vararg support
|
enhancement kmock-processor
|
## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
Currently `vararg` is currently not supported, which it should.
## Expected Behavior
<!--- Tell us what should happen -->
```kotlin
interface Platform {
fun bar(vararg buzz: Int)
}
```
should be:
```kotlin
internal class PlatformMock(
..
) : Platform {
..
public val _bar: KMockContract.SyncFunProxy<Unit, (Array<Int>) -> Unit> =
..
public override fun bar(vararg buzz: Int) = _bar.invoke(buzz)
..
}
```
## Actual Behavior
<!--- Tell us what happens instead -->
```kotlin
internal class PlatformMock(
...
) : Platform {
..
public val _bar: KMockContract.SyncFunProxy<Unit, (Int) -> Unit> =
..
public override fun bar(vararg buzz: Int) = _bar.invoke(buzz)
..
}
```
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
KSValueParameter has a `isVararg` property, which needs to be propagated starting from `determineParameter`.
|
1.0
|
vararg support - ## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
Currently `vararg` is currently not supported, which it should.
## Expected Behavior
<!--- Tell us what should happen -->
```kotlin
interface Platform {
fun bar(vararg buzz: Int)
}
```
should be:
```kotlin
internal class PlatformMock(
..
) : Platform {
..
public val _bar: KMockContract.SyncFunProxy<Unit, (Array<Int>) -> Unit> =
..
public override fun bar(vararg buzz: Int) = _bar.invoke(buzz)
..
}
```
## Actual Behavior
<!--- Tell us what happens instead -->
```kotlin
internal class PlatformMock(
...
) : Platform {
..
public val _bar: KMockContract.SyncFunProxy<Unit, (Int) -> Unit> =
..
public override fun bar(vararg buzz: Int) = _bar.invoke(buzz)
..
}
```
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
KSValueParameter has a `isVararg` property, which needs to be propagated starting from `determineParameter`.
|
process
|
vararg support description currently vararg is currently not supported which it should expected behavior kotlin interface platform fun bar vararg buzz int should be kotlin internal class platformmock platform public val bar kmockcontract syncfunproxy unit public override fun bar vararg buzz int bar invoke buzz actual behavior kotlin internal class platformmock platform public val bar kmockcontract syncfunproxy unit public override fun bar vararg buzz int bar invoke buzz possible fix ksvalueparameter has a isvararg property which needs to be propagated starting from determineparameter
| 1
|
17,332
| 23,150,088,937
|
IssuesEvent
|
2022-07-29 07:25:35
|
X-Sharp/XSharpPublic
|
https://api.github.com/repos/X-Sharp/XSharpPublic
|
closed
|
XBase++ preprocessor: #translate unexpected behaviour
|
bug Compiler Preprocessor
|
I have a problem with Extended Expression Match Marker
**test.ch:
`#translate PN(<(pole)>) => GF(<(pole)>)`
**test.prg:
`PN(NAME)`
##Expected:
`GF("NAME")`
##Actual:
`PN(NAME)`
------------------------------------------------------------------------
I use version 2.12.2.0
Same with #XTranslate
With Regular Match Marker and #command everything works fine.
|
1.0
|
XBase++ preprocessor: #translate unexpected behaviour - I have a problem with Extended Expression Match Marker
**test.ch:
`#translate PN(<(pole)>) => GF(<(pole)>)`
**test.prg:
`PN(NAME)`
##Expected:
`GF("NAME")`
##Actual:
`PN(NAME)`
------------------------------------------------------------------------
I use version 2.12.2.0
Same with #XTranslate
With Regular Match Marker and #command everything works fine.
|
process
|
xbase preprocessor translate unexpected behaviour i have a problem with extended expression match marker test ch translate pn gf test prg pn name expected gf name actual pn name i use version same with xtranslate with regular match marker and command everything works fine
| 1
|
236,901
| 19,584,204,228
|
IssuesEvent
|
2022-01-05 03:15:58
|
DickinsonCollege/FarmData2
|
https://api.github.com/repos/DickinsonCollege/FarmData2
|
opened
|
Refactor Test for Seeding Report Page
|
testing refactoring
|
The tests in seedingReport.spec.js need to be refactored. They have fallen out of date with the current state of development and have structural issues that cause intermittent 403 errors. Some of the issues to address:
* Can better use the data-cy properties for the custom table.
* Structure tests as in api.spec.js in the fd2_example module to avoid 403 errors.
Related to #295, #296, #303 which have all been closed as consolidated into this issue.
|
1.0
|
Refactor Test for Seeding Report Page - The tests in seedingReport.spec.js need to be refactored. They have fallen out of date with the current state of development and have structural issues that cause intermittent 403 errors. Some of the issues to address:
* Can better use the data-cy properties for the custom table.
* Structure tests as in api.spec.js in the fd2_example module to avoid 403 errors.
Related to #295, #296, #303 which have all been closed as consolidated into this issue.
|
non_process
|
refactor test for seeding report page the tests in seedingreport spec js need to be refactored they have fallen out of date with the current state of development and have structural issues that cause intermittent errors some of the issues to address can better use the data cy properties for the custom table structure tests as in api spec js in the example module to avoid errors related to which have all been closed as consolidated into this issue
| 0
|
17,576
| 23,389,151,654
|
IssuesEvent
|
2022-08-11 16:06:59
|
GoogleCloudPlatform/vertex-ai-samples
|
https://api.github.com/repos/GoogleCloudPlatform/vertex-ai-samples
|
closed
|
Support VPC in notebook execution test
|
type: process
|
Some features in notebooks require them to be run in a VPC. For example: https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/matching_engine/matching_engine_for_indexing.ipynb
See https://cloud.google.com/build/docs/private-pools/set-up-private-pool-environment#setup-private-connection
|
1.0
|
Support VPC in notebook execution test - Some features in notebooks require them to be run in a VPC. For example: https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/matching_engine/matching_engine_for_indexing.ipynb
See https://cloud.google.com/build/docs/private-pools/set-up-private-pool-environment#setup-private-connection
|
process
|
support vpc in notebook execution test some features in notebooks require them to be run in a vpc for example see
| 1
|
4,703
| 7,544,037,208
|
IssuesEvent
|
2018-04-17 17:12:12
|
UnbFeelings/unb-feelings-docs
|
https://api.github.com/repos/UnbFeelings/unb-feelings-docs
|
opened
|
[Não Conformidade] Métricas de Processo
|
Processo Qualidade invalid
|
@UnbFeelings/devel
@UnbFeelings/process
Perante critérios definidos para as [Auditorias](https://github.com/UnbFeelings/unb-feelings-GQA/wiki/Crit%C3%A9rios-de-Avalia%C3%A7%C3%A3o-e-T%C3%A9cnicas-de-Auditoria) fora auditada a [Métricas de Processo](https://github.com/UnbFeelings/unb-feelings-GQA/wiki/Auditoria-M%C3%A9tricas-de-Processo-Ciclo-1).
### Descrição
Nenhuma métrica de processo foi definida e nem quais técnicas serão utilizadas na coleta, além disso, nenhuma métrica foi coletada.
#### Recomendações
É recomendável que o time de processo se reúna com o time de desenvolvimento para discutir a importância da coleta de métricas e como fazê-la.
Com base na Política de Não Conformidades utilizando a matriz GUT, obteve-se uma pontuação de 45 pontos, o que se encaixa em problema mediano, assim o prazo para resolução da Não conformidade é de 3 dias
#### Detalhes
**Auditor**: Igor Gabriel
**Técnica de Audição**: Checklist
**Tipo:** Desenvolvimento e Processo
**Prazo:** 20/04/2018
|
1.0
|
[Não Conformidade] Métricas de Processo - @UnbFeelings/devel
@UnbFeelings/process
Perante critérios definidos para as [Auditorias](https://github.com/UnbFeelings/unb-feelings-GQA/wiki/Crit%C3%A9rios-de-Avalia%C3%A7%C3%A3o-e-T%C3%A9cnicas-de-Auditoria) fora auditada a [Métricas de Processo](https://github.com/UnbFeelings/unb-feelings-GQA/wiki/Auditoria-M%C3%A9tricas-de-Processo-Ciclo-1).
### Descrição
Nenhuma métrica de processo foi definida e nem quais técnicas serão utilizadas na coleta, além disso, nenhuma métrica foi coletada.
#### Recomendações
É recomendável que o time de processo se reúna com o time de desenvolvimento para discutir a importância da coleta de métricas e como fazê-la.
Com base na Política de Não Conformidades utilizando a matriz GUT, obteve-se uma pontuação de 45 pontos, o que se encaixa em problema mediano, assim o prazo para resolução da Não conformidade é de 3 dias
#### Detalhes
**Auditor**: Igor Gabriel
**Técnica de Audição**: Checklist
**Tipo:** Desenvolvimento e Processo
**Prazo:** 20/04/2018
|
process
|
métricas de processo unbfeelings devel unbfeelings process perante critérios definidos para as fora auditada a descrição nenhuma métrica de processo foi definida e nem quais técnicas serão utilizadas na coleta além disso nenhuma métrica foi coletada recomendações é recomendável que o time de processo se reúna com o time de desenvolvimento para discutir a importância da coleta de métricas e como fazê la com base na política de não conformidades utilizando a matriz gut obteve se uma pontuação de pontos o que se encaixa em problema mediano assim o prazo para resolução da não conformidade é de dias detalhes auditor igor gabriel técnica de audição checklist tipo desenvolvimento e processo prazo
| 1
|
1,397
| 3,964,001,678
|
IssuesEvent
|
2016-05-02 22:34:24
|
pelias/pelias
|
https://api.github.com/repos/pelias/pelias
|
closed
|
Can't find Target store in Eureka, California
|
feedback POIs processed
|
Bug filed from within Erasermap by @nvkelso: [mapzen/eraser-map#152]
* **Device name:** Samsung Galaxy S6 edge
* **Android Version:** 5.1.1
* **App build number:** master-481
* **What did you expected to happen?** Search should have found the Target store in Eureka, California by typing "target eureka".
* **What happened instead?** Instead stores far far away were shown. I had to search for "eureka" and then zoom the map and long press on the store to create a route.
* **Steps to reproduce:** Simulate location to SF. Search for "target eureka".
* **Attach a screenshot** See below.
* **Attach device logs** eh.
If something happened while you were routing, share with us:
* **Where were you?** San Francisco
* **Routing from origin:** San Francisco
* **Routing to destination:** Target Eureka California
[OSM feature](https://www.openstreetmap.org/node/527738556) | [Located here](https://www.openstreetmap.org/#map=17/40.80566/-124.14364)

The crazy thing is if I long press on the Target in Eureka, Search will return that feature to route to:

But sometimes a long press returns:

cc: @nvkelso @mjcunningham
|
1.0
|
Can't find Target store in Eureka, California - Bug filed from within Erasermap by @nvkelso: [mapzen/eraser-map#152]
* **Device name:** Samsung Galaxy S6 edge
* **Android Version:** 5.1.1
* **App build number:** master-481
* **What did you expected to happen?** Search should have found the Target store in Eureka, California by typing "target eureka".
* **What happened instead?** Instead stores far far away were shown. I had to search for "eureka" and then zoom the map and long press on the store to create a route.
* **Steps to reproduce:** Simulate location to SF. Search for "target eureka".
* **Attach a screenshot** See below.
* **Attach device logs** eh.
If something happened while you were routing, share with us:
* **Where were you?** San Francisco
* **Routing from origin:** San Francisco
* **Routing to destination:** Target Eureka California
[OSM feature](https://www.openstreetmap.org/node/527738556) | [Located here](https://www.openstreetmap.org/#map=17/40.80566/-124.14364)

The crazy thing is if I long press on the Target in Eureka, Search will return that feature to route to:

But sometimes a long press returns:

cc: @nvkelso @mjcunningham
|
process
|
can t find target store in eureka california bug filed from within erasermap by nvkelso device name samsung galaxy edge android version app build number master what did you expected to happen search should have found the target store in eureka california by typing target eureka what happened instead instead stores far far away were shown i had to search for eureka and then zoom the map and long press on the store to create a route steps to reproduce simulate location to sf search for target eureka attach a screenshot see below attach device logs eh if something happened while you were routing share with us where were you san francisco routing from origin san francisco routing to destination target eureka california the crazy thing is if i long press on the target in eureka search will return that feature to route to but sometimes a long press returns cc nvkelso mjcunningham
| 1
|
20,560
| 27,220,671,624
|
IssuesEvent
|
2023-02-21 04:54:21
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
ERROR: While parsing option --apple_platform_type=ios: Not a valid Apple platform type: 'ios' (should be ıos, watchos, tvos, macos or catalyst)
|
more data needed type: support / not a bug (process)
|
### Description of the bug:
I am following tensorflow installation locally from [here](https://www.tensorflow.org/lite/guide/build_ios). I run configure then run this code:
`bazel build --config=ios_fat -c opt --cxxopt=--std=c++17 \
//tensorflow/lite/ios:TensorFlowLiteC_framework`
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
following the link provided after cloning tensorflow from github.
### Which operating system are you running Bazel on?
macos monterey (intel)
### What is the output of `bazel info release`?
5.3.0
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
-
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
```text
-
```
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_
|
1.0
|
ERROR: While parsing option --apple_platform_type=ios: Not a valid Apple platform type: 'ios' (should be ıos, watchos, tvos, macos or catalyst) - ### Description of the bug:
I am following tensorflow installation locally from [here](https://www.tensorflow.org/lite/guide/build_ios). I run configure then run this code:
`bazel build --config=ios_fat -c opt --cxxopt=--std=c++17 \
//tensorflow/lite/ios:TensorFlowLiteC_framework`
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
following the link provided after cloning tensorflow from github.
### Which operating system are you running Bazel on?
macos monterey (intel)
### What is the output of `bazel info release`?
5.3.0
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
-
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
```text
-
```
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_
|
process
|
error while parsing option apple platform type ios not a valid apple platform type ios should be ıos watchos tvos macos or catalyst description of the bug i am following tensorflow installation locally from i run configure then run this code bazel build config ios fat c opt cxxopt std c tensorflow lite ios tensorflowlitec framework what s the simplest easiest way to reproduce this bug please provide a minimal example if possible following the link provided after cloning tensorflow from github which operating system are you running bazel on macos monterey intel what is the output of bazel info release if bazel info release returns development version or non git tell us how you built bazel what s the output of git remote get url origin git rev parse master git rev parse head text have you found anything relevant by searching the web no response any other information logs or outputs that you want to share no response
| 1
|
42,478
| 6,981,062,830
|
IssuesEvent
|
2017-12-13 05:55:16
|
bigdatagenomics/adam
|
https://api.github.com/repos/bigdatagenomics/adam
|
closed
|
Fix link anchors and other issues in readthedocs
|
documentation
|
Some link anchors and other formatting styles were broken after conversion to rst.
File pull requests against this issue for these and other issues with the new documentation hosted at http://adam.readthedocs.io/en/latest/.
E.g.
```
a known variation database (e.g., dbSNP). {#known-snps}
See candidate generation and realignment. {#known-indels}
While the `transformAlignments <#transformAlignments>`__ command
```
Regarding Pygments styles, I prefer [lovelace](http://pygments.org/demo/6682259/?style=lovelace) and [arduino](http://pygments.org/demo/6682259/?style=arduino) over the default.
|
1.0
|
Fix link anchors and other issues in readthedocs - Some link anchors and other formatting styles were broken after conversion to rst.
File pull requests against this issue for these and other issues with the new documentation hosted at http://adam.readthedocs.io/en/latest/.
E.g.
```
a known variation database (e.g., dbSNP). {#known-snps}
See candidate generation and realignment. {#known-indels}
While the `transformAlignments <#transformAlignments>`__ command
```
Regarding Pygments styles, I prefer [lovelace](http://pygments.org/demo/6682259/?style=lovelace) and [arduino](http://pygments.org/demo/6682259/?style=arduino) over the default.
|
non_process
|
fix link anchors and other issues in readthedocs some link anchors and other formatting styles were broken after conversion to rst file pull requests against this issue for these and other issues with the new documentation hosted at e g a known variation database e g dbsnp known snps see candidate generation and realignment known indels while the transformalignments command regarding pygments styles i prefer and over the default
| 0
|
175,535
| 27,876,927,578
|
IssuesEvent
|
2023-03-21 16:35:40
|
OSOceanAcoustics/echopype
|
https://api.github.com/repos/OSOceanAcoustics/echopype
|
closed
|
Create and move functions to `commongrid` and `clean` subpackages
|
design
|
**UPDATE, 3/20/2023:** We've settled on `commongrid` instead of `unify`. Also, expanded this issue to include moving the noise functions to the new `clean` subpackage (which we initially had named `filter`).
To manage functions that all could be called as "processing" or "preprocessing" type, in #817 we laid out some subpackage organizations. This issue addresses one of them, to create a new subpackage called `unify` and move a subset of the functions currently under `preprocess` under it:
- `unify`
- `compute_MVBS` (existing) -- the efficiency was enhanced in #878
- `compute_MVBS_index_binning` (existing) -- we know more work may be needed for this, but we won't change the implementation at the moment.
- `regrid_Sv` (to be implemented) -- See https://github.com/OSOceanAcoustics/echopype/issues/726
|
1.0
|
Create and move functions to `commongrid` and `clean` subpackages - **UPDATE, 3/20/2023:** We've settled on `commongrid` instead of `unify`. Also, expanded this issue to include moving the noise functions to the new `clean` subpackage (which we initially had named `filter`).
To manage functions that all could be called as "processing" or "preprocessing" type, in #817 we laid out some subpackage organizations. This issue addresses one of them, to create a new subpackage called `unify` and move a subset of the functions currently under `preprocess` under it:
- `unify`
- `compute_MVBS` (existing) -- the efficiency was enhanced in #878
- `compute_MVBS_index_binning` (existing) -- we know more work may be needed for this, but we won't change the implementation at the moment.
- `regrid_Sv` (to be implemented) -- See https://github.com/OSOceanAcoustics/echopype/issues/726
|
non_process
|
create and move functions to commongrid and clean subpackages update we ve settled on commongrid instead of unify also expanded this issue to include moving the noise functions to the new clean subpackage which we initially had named filter to manage functions that all could be called as processing or preprocessing type in we laid out some subpackage organizations this issue addresses one of them to create a new subpackage called unify and move a subset of the functions currently under preprocess under it unify compute mvbs existing the efficiency was enhanced in compute mvbs index binning existing we know more work may be needed for this but we won t change the implementation at the moment regrid sv to be implemented see
| 0
|
12,044
| 14,738,792,511
|
IssuesEvent
|
2021-01-07 05:44:13
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Site Validation
|
anc-ops anc-process anp-1.5 ant-bug ant-enhancement ant-support has attachment
|
In GitLab by @kdjstudios on Jul 26, 2018, 09:44
Hello Team,
I know we just did the validation on the account and customer level, but after working on #955 and #960 it seems we should also be adding validation to the site too. I found out the hard way that the last billing cycle date is required on the master cycle, thought I crashed SAB because it would not allow access to the Billing cycles page or the site dashboard page.. Lets mark this for discussion to determine what items are required and what items need to have certain values.
Also the sub billing cycles layout needs to be adjusted. Currently the columns do not line up.

|
1.0
|
Site Validation - In GitLab by @kdjstudios on Jul 26, 2018, 09:44
Hello Team,
I know we just did the validation on the account and customer level, but after working on #955 and #960 it seems we should also be adding validation to the site too. I found out the hard way that the last billing cycle date is required on the master cycle, thought I crashed SAB because it would not allow access to the Billing cycles page or the site dashboard page.. Lets mark this for discussion to determine what items are required and what items need to have certain values.
Also the sub billing cycles layout needs to be adjusted. Currently the columns do not line up.

|
process
|
site validation in gitlab by kdjstudios on jul hello team i know we just did the validation on the account and customer level but after working on and it seems we should also be adding validation to the site too i found out the hard way that the last billing cycle date is required on the master cycle thought i crashed sab because it would not allow access to the billing cycles page or the site dashboard page lets mark this for discussion to determine what items are required and what items need to have certain values also the sub billing cycles layout needs to be adjusted currently the columns do not line up uploads image png
| 1
|
1,761
| 4,469,049,533
|
IssuesEvent
|
2016-08-25 11:40:44
|
matz-e/lobster
|
https://api.github.com/repos/matz-e/lobster
|
opened
|
Caching datasets breaks with changing lumi mask
|
bug processing
|
When changing the lumi mask, datasets retrieved from the cache will always have the mask set from their first…
|
1.0
|
Caching datasets breaks with changing lumi mask - When changing the lumi mask, datasets retrieved from the cache will always have the mask set from their first…
|
process
|
caching datasets breaks with changing lumi mask when changing the lumi mask datasets retrieved from the cache will always have the mask set from their first…
| 1
|
68,218
| 13,096,847,297
|
IssuesEvent
|
2020-08-03 16:22:23
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
Custom Elements integration with Custom Fields needs work
|
Code Style J4 Issue No Code Attached Yet Release Blocker
|
As part of the custom fields integration we've had to make a reasonably nasty hack to hide errors https://github.com/joomla/joomla-cms/pull/22263/files#diff-1012a2e43c80bf48fceac68dbb3552c7R315
I want to spend some time before final release working out if there is a better way of working around custom elements/reworking the internals of custom fields or if this is the best we can do
|
2.0
|
Custom Elements integration with Custom Fields needs work - As part of the custom fields integration we've had to make a reasonably nasty hack to hide errors https://github.com/joomla/joomla-cms/pull/22263/files#diff-1012a2e43c80bf48fceac68dbb3552c7R315
I want to spend some time before final release working out if there is a better way of working around custom elements/reworking the internals of custom fields or if this is the best we can do
|
non_process
|
custom elements integration with custom fields needs work as part of the custom fields integration we ve had to make a reasonably nasty hack to hide errors i want to spend some time before final release working out if there is a better way of working around custom elements reworking the internals of custom fields or if this is the best we can do
| 0
|
132,163
| 18,266,162,197
|
IssuesEvent
|
2021-10-04 08:42:21
|
artsking/linux-3.0.35_CVE-2020-15436_withPatch
|
https://api.github.com/repos/artsking/linux-3.0.35_CVE-2020-15436_withPatch
|
closed
|
CVE-2017-5669 (High) detected in linux-stable-rtv3.8.6 - autoclosed
|
security vulnerability
|
## CVE-2017-5669 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/linux-3.0.35_CVE-2020-15436_withPatch/commit/87eecd735a2e4c02ba0c4dc61594d4311e35d5d9">87eecd735a2e4c02ba0c4dc61594d4311e35d5d9</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/ipc/shm.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The do_shmat function in ipc/shm.c in the Linux kernel through 4.9.12 does not restrict the address calculated by a certain rounding operation, which allows local users to map page zero, and consequently bypass a protection mechanism that exists for the mmap system call, by making crafted shmget and shmat system calls in a privileged context.
<p>Publish Date: 2017-02-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-5669>CVE-2017-5669</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-5669">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-5669</a></p>
<p>Release Date: 2017-02-24</p>
<p>Fix Resolution: v4.11-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2017-5669 (High) detected in linux-stable-rtv3.8.6 - autoclosed - ## CVE-2017-5669 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/linux-3.0.35_CVE-2020-15436_withPatch/commit/87eecd735a2e4c02ba0c4dc61594d4311e35d5d9">87eecd735a2e4c02ba0c4dc61594d4311e35d5d9</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/ipc/shm.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The do_shmat function in ipc/shm.c in the Linux kernel through 4.9.12 does not restrict the address calculated by a certain rounding operation, which allows local users to map page zero, and consequently bypass a protection mechanism that exists for the mmap system call, by making crafted shmget and shmat system calls in a privileged context.
<p>Publish Date: 2017-02-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-5669>CVE-2017-5669</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-5669">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-5669</a></p>
<p>Release Date: 2017-02-24</p>
<p>Fix Resolution: v4.11-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linux stable autoclosed cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files ipc shm c vulnerability details the do shmat function in ipc shm c in the linux kernel through does not restrict the address calculated by a certain rounding operation which allows local users to map page zero and consequently bypass a protection mechanism that exists for the mmap system call by making crafted shmget and shmat system calls in a privileged context publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.