Column schema of the dump (string columns report minimum and maximum length; class columns report the number of distinct values observed):

| column | dtype | values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class (`IssuesEvent`) |
| created_at | string | fixed length 19 (`YYYY-MM-DD HH:MM:SS`) |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 classes |
| text_combine | string | length 96 to 211k |
| label | string | 2 classes (`process`, `non_process`) |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 or 1 |

Sample rows follow, one field per `|`-delimited block, in the column order above.
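The `Unnamed: 0` column is the tell-tale of a pandas CSV round-trip (an index written to file and read back). A minimal loading sketch for sanity-checking the dump; the file name `github_issues_process.csv` is hypothetical, since the dump does not say where the data lives:

```python
import pandas as pd

# Hypothetical path -- adjust to wherever this dump is actually stored.
df = pd.read_csv("github_issues_process.csv")

print(df.dtypes)                          # should match the schema table above
print(df["label"].value_counts())         # process vs. non_process
print(df["binary_label"].value_counts())  # 1 = process, 0 = non_process
```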
14,875
| 18,285,308,378
|
IssuesEvent
|
2021-10-05 09:36:29
|
googleapis/python-orchestration-airflow
|
https://api.github.com/repos/googleapis/python-orchestration-airflow
|
closed
|
Release as GA
|
type: process api: composer
|
[GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [ ] 28 days elapsed since last beta release with new API surface **RELEASE ON/AFTER: August 25 2021**
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
1.0
|
Release as GA - [GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [ ] 28 days elapsed since last beta release with new API surface **RELEASE ON/AFTER: August 25 2021**
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
process
|
release as ga required days elapsed since last beta release with new api surface release on after august server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga
| 1
|
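Comparing fields within each row, the `text` column is evidently a lowercased, letters-only normalization of `text_combine`: URLs, markdown links (label included), bracketed spans such as `[x]` checkboxes, digits, and punctuation all disappear. A sketch of one normalization that reproduces this pattern; it is inferred from the visible input/output pairs, not the dataset authors' actual pipeline:

```python
import re

def normalize(text_combine: str) -> str:
    """Inferred approximation of the text_combine -> text mapping."""
    t = text_combine.lower()
    t = re.sub(r"\[[^\]]*\]\([^)]*\)", " ", t)  # markdown links, label and all
    t = re.sub(r"\[[^\]]*\]", " ", t)           # remaining bracketed spans ([x], [DOC], ...)
    t = re.sub(r"https?://\S+", " ", t)         # bare URLs
    t = re.sub(r"[^a-z]+", " ", t)              # keep letters only; digits and punctuation go
    return t.strip()
```

Applied to the first row's `text_combine`, this yields "release as ga required days elapsed since last beta release with new api surface release on after august ...", matching the stored `text` value.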
444,259
| 31,029,884,571
|
IssuesEvent
|
2023-08-10 11:45:16
|
nilearn/nilearn
|
https://api.github.com/repos/nilearn/nilearn
|
closed
|
[DOC] `load_confounds` private function docstrings
|
Documentation
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe your proposed suggestion in detail.
I feel the need to complete the docstrings of some private functions.
Some private functions are complex enough that need docstrings for contributors to understand the logics.
And perhaps some typos.
### List any pages that would be impacted.
None. Just private functions.
|
1.0
|
[DOC] `load_confounds` private function docstrings - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe your proposed suggestion in detail.
I feel the need to complete the docstrings of some private functions.
Some private functions are complex enough that need docstrings for contributors to understand the logics.
And perhaps some typos.
### List any pages that would be impacted.
None. Just private functions.
|
non_process
|
load confounds private function docstrings is there an existing issue for this i have searched the existing issues describe your proposed suggestion in detail i feel the need to complete the docstrings of some private functions some private functions are complex enough that need docstrings for contributors to understand the logics and perhaps some typos list any pages that would be impacted none just private functions
| 0
|
4,950
| 7,800,187,304
|
IssuesEvent
|
2018-06-09 06:07:23
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
Include Lscale as a new panel in the default plotgen plots (Trac #496)
|
Migrated from Trac bladornr@uwm.edu enhancement post_processing
|
We include a lot of variables in our plotgen plots, but we omit one important one, namely, Lscale, which is output in clubb's zt files. Let's output Lscale for all our CLUBB cases on the standard plotgen plots.
The variables to plot in plotgen are specified in the case files. For instance, for RICO, the case file is [http://carson.math.uwm.edu/trac/clubb/browser/trunk/postprocessing/plotgen/cases/clubb/rico.case here]. More information about plotgen and the case files is given in the TWiki.
Note: there is no Lscale output by SAM, WRF, COAMPS, etc., and so we can't include a thick red benchmark line.
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/496
```json
{
"status": "closed",
"changetime": "2012-06-28T21:55:53",
"description": "We include a lot of variables in our plotgen plots, but we omit one important one, namely, Lscale, which is output in clubb's zt files. Let's output Lscale for all our CLUBB cases on the standard plotgen plots. \n\nThe variables to plot in plotgen are specified in the case files. For instance, for RICO, the case file is [http://carson.math.uwm.edu/trac/clubb/browser/trunk/postprocessing/plotgen/cases/clubb/rico.case here]. More information about plotgen and the case files is given in the TWiki.\n\nNote: there is no Lscale output by SAM, WRF, COAMPS, etc., and so we can't include a thick red benchmark line.\n\n\n\n",
"reporter": "vlarson@uwm.edu",
"cc": "vlarson@uwm.edu",
"resolution": "fixed",
"_ts": "1340920553955391",
"component": "post_processing",
"summary": "Include Lscale as a new panel in the default plotgen plots",
"priority": "minor",
"keywords": "",
"time": "2012-02-07T20:54:59",
"milestone": "",
"owner": "bladornr@uwm.edu",
"type": "enhancement"
}
```
|
1.0
|
Include Lscale as a new panel in the default plotgen plots (Trac #496) - We include a lot of variables in our plotgen plots, but we omit one important one, namely, Lscale, which is output in clubb's zt files. Let's output Lscale for all our CLUBB cases on the standard plotgen plots.
The variables to plot in plotgen are specified in the case files. For instance, for RICO, the case file is [http://carson.math.uwm.edu/trac/clubb/browser/trunk/postprocessing/plotgen/cases/clubb/rico.case here]. More information about plotgen and the case files is given in the TWiki.
Note: there is no Lscale output by SAM, WRF, COAMPS, etc., and so we can't include a thick red benchmark line.
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/496
```json
{
"status": "closed",
"changetime": "2012-06-28T21:55:53",
"description": "We include a lot of variables in our plotgen plots, but we omit one important one, namely, Lscale, which is output in clubb's zt files. Let's output Lscale for all our CLUBB cases on the standard plotgen plots. \n\nThe variables to plot in plotgen are specified in the case files. For instance, for RICO, the case file is [http://carson.math.uwm.edu/trac/clubb/browser/trunk/postprocessing/plotgen/cases/clubb/rico.case here]. More information about plotgen and the case files is given in the TWiki.\n\nNote: there is no Lscale output by SAM, WRF, COAMPS, etc., and so we can't include a thick red benchmark line.\n\n\n\n",
"reporter": "vlarson@uwm.edu",
"cc": "vlarson@uwm.edu",
"resolution": "fixed",
"_ts": "1340920553955391",
"component": "post_processing",
"summary": "Include Lscale as a new panel in the default plotgen plots",
"priority": "minor",
"keywords": "",
"time": "2012-02-07T20:54:59",
"milestone": "",
"owner": "bladornr@uwm.edu",
"type": "enhancement"
}
```
|
process
|
include lscale as a new panel in the default plotgen plots trac we include a lot of variables in our plotgen plots but we omit one important one namely lscale which is output in clubb s zt files let s output lscale for all our clubb cases on the standard plotgen plots the variables to plot in plotgen are specified in the case files for instance for rico the case file is more information about plotgen and the case files is given in the twiki note there is no lscale output by sam wrf coamps etc and so we can t include a thick red benchmark line attachments migrated from json status closed changetime description we include a lot of variables in our plotgen plots but we omit one important one namely lscale which is output in clubb s zt files let s output lscale for all our clubb cases on the standard plotgen plots n nthe variables to plot in plotgen are specified in the case files for instance for rico the case file is more information about plotgen and the case files is given in the twiki n nnote there is no lscale output by sam wrf coamps etc and so we can t include a thick red benchmark line n n n n reporter vlarson uwm edu cc vlarson uwm edu resolution fixed ts component post processing summary include lscale as a new panel in the default plotgen plots priority minor keywords time milestone owner bladornr uwm edu type enhancement
| 1
|
5,550
| 8,393,741,242
|
IssuesEvent
|
2018-10-09 21:28:46
|
google/eme_logger
|
https://api.github.com/repos/google/eme_logger
|
opened
|
Publish updated version of EME logger
|
process
|
The version in the Chrome web store is out of date. A new version should be published.
Docs should be updated to show how to build and publish the extension.
|
1.0
|
Publish updated version of EME logger - The version in the Chrome web store is out of date. A new version should be published.
Docs should be updated to show how to build and publish the extension.
|
process
|
publish updated version of eme logger the version in the chrome web store is out of date a new version should be published docs should be updated to show how to build and publish the extension
| 1
|
6,504
| 9,577,261,564
|
IssuesEvent
|
2019-05-07 11:13:08
|
threefoldtech/digitalmeX
|
https://api.github.com/repos/threefoldtech/digitalmeX
|
closed
|
bcdb: return from actors must be out._data to be used from gedis client
|
process_wontfix
|
schema:
```
@url = threefold.grid.threebotsettings
doubleName* = ""
totp_secret = ""
firstName* = ""
lastName = ""
email* = ""
addressStreet = ""
addressNumber = ""
addressZipcode = ""
addressCity = ""
addressCountry* = ""
telephone = ""
```
```python
def get_threebotsettings(self, doubleName, schema_out):
"""
```in
doubleName = (S)
```
```out
!threefold.grid.threebotsettings
```
"""
out = schema_out.new()
threebots = self.threebotsettings_model.get_all()
for bot in threebots:
if bot.doubleName == doubleName:
out = bot
break
# TODO: make sure u don't return the totp here
return out._data
```
```python
KjException: kj/io.c++:73: failed: expected result.size() > 0; Premature EOF
stack: 0x7f24a7ac3cd3 0x7f24a7b250a4 0x7f24a7ac2596 0x7f24a7ac368b 0x7f24a79b6494 0x4db337 0x7f24a79a1585 0x45a0e3 0x54fd37 0x5546cf 0x54fbe1 0x54fe6d 0x552b00 0x54fbe1 0x558d76 0x45a461 0x459eee 0x4e15bb 0x4db337 0x45a0e3 0x45a79c 0x54fd37 0x552b00 0x54fbe1 0x54fe6d 0x552b00 0x54fbe1 0x54fe6d 0x5546cf 0x54fbe1 0x550b93
In [6]: cl.actors.threebotsettings.get_threebotsettings("thabeta")
Out[6]:
addressCity = ""
addressCountry = ""
addressNumber = ""
addressStreet = ""
addressZipcode = ""
doubleName = "thabeta"
email = ""
firstName = "thabet"
lastName = ""
telephone = ""
totp_secret = "OQ2HO2JVGF3TK==="
```
|
1.0
|
bcdb: return from actors must be out._data to be used from gedis client - schema:
```
@url = threefold.grid.threebotsettings
doubleName* = ""
totp_secret = ""
firstName* = ""
lastName = ""
email* = ""
addressStreet = ""
addressNumber = ""
addressZipcode = ""
addressCity = ""
addressCountry* = ""
telephone = ""
```
```python
def get_threebotsettings(self, doubleName, schema_out):
"""
```in
doubleName = (S)
```
```out
!threefold.grid.threebotsettings
```
"""
out = schema_out.new()
threebots = self.threebotsettings_model.get_all()
for bot in threebots:
if bot.doubleName == doubleName:
out = bot
break
# TODO: make sure u don't return the totp here
return out._data
```
```python
KjException: kj/io.c++:73: failed: expected result.size() > 0; Premature EOF
stack: 0x7f24a7ac3cd3 0x7f24a7b250a4 0x7f24a7ac2596 0x7f24a7ac368b 0x7f24a79b6494 0x4db337 0x7f24a79a1585 0x45a0e3 0x54fd37 0x5546cf 0x54fbe1 0x54fe6d 0x552b00 0x54fbe1 0x558d76 0x45a461 0x459eee 0x4e15bb 0x4db337 0x45a0e3 0x45a79c 0x54fd37 0x552b00 0x54fbe1 0x54fe6d 0x552b00 0x54fbe1 0x54fe6d 0x5546cf 0x54fbe1 0x550b93
In [6]: cl.actors.threebotsettings.get_threebotsettings("thabeta")
Out[6]:
addressCity = ""
addressCountry = ""
addressNumber = ""
addressStreet = ""
addressZipcode = ""
doubleName = "thabeta"
email = ""
firstName = "thabet"
lastName = ""
telephone = ""
totp_secret = "OQ2HO2JVGF3TK==="
```
|
process
|
bcdb return from actors must be out data to be used from gedis client schema url threefold grid threebotsettings doublename totp secret firstname lastname email addressstreet addressnumber addresszipcode addresscity addresscountry telephone python def get threebotsettings self doublename schema out in doublename s out threefold grid threebotsettings out schema out new threebots self threebotsettings model get all for bot in threebots if bot doublename doublename out bot break todo make sure u don t return the totp here return out data python kjexception kj io c failed expected result size premature eof stack in cl actors threebotsettings get threebotsettings thabeta out addresscity addresscountry addressnumber addressstreet addresszipcode doublename thabeta email firstname thabet lastname telephone totp secret
| 1
|
129,452
| 17,785,601,262
|
IssuesEvent
|
2021-08-31 10:38:32
|
resuminator/resuminator
|
https://api.github.com/repos/resuminator/resuminator
|
closed
|
Allow to Toggle Card view on Resume Paper
|
enhancement Design Coming to v2 🎉
|
## Motivation
This is similar to #10 but instead for particular documents of the users.
## Modification
Same as #10 with similar workflow suggested.
|
1.0
|
Allow to Toggle Card view on Resume Paper - ## Motivation
This is similar to #10 but instead for particular documents of the users.
## Modification
Same as #10 with similar workflow suggested.
|
non_process
|
allow to toggle card view on resume paper motivation this is similar to but instead for particular documents of the users modification same as with similar workflow suggested
| 0
|
5,813
| 8,649,621,434
|
IssuesEvent
|
2018-11-26 19:58:04
|
bazelbuild/continuous-integration
|
https://api.github.com/repos/bazelbuild/continuous-integration
|
closed
|
Redact contributor emails from release notes
|
P1 bug process
|
It looks like the generated release notes contain emails of contributors. Is this intentional?
https://releases.bazel.build/0.20.0/rc4/index.html
@laurentlb @katre
|
1.0
|
Redact contributor emails from release notes - It looks like the generated release notes contain emails of contributors. Is this intentional?
https://releases.bazel.build/0.20.0/rc4/index.html
@laurentlb @katre
|
process
|
redact contributor emails from release notes it looks like the generated release notes contain emails of contributors is this intentional laurentlb katre
| 1
|
14,504
| 17,604,346,991
|
IssuesEvent
|
2021-08-17 15:16:50
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[feature] allow creation of constant raster with different raster data types
|
Automatic new feature Processing Alg 3.14
|
Original commit: https://github.com/qgis/QGIS/commit/e2e4a99a4830b6a55701d2cbef4b65b7639fff85 by nyalldawson
Unfortunately this naughty coder did not write a description... :-(
|
1.0
|
[feature] allow creation of constant raster with different raster data types - Original commit: https://github.com/qgis/QGIS/commit/e2e4a99a4830b6a55701d2cbef4b65b7639fff85 by nyalldawson
Unfortunately this naughty coder did not write a description... :-(
|
process
|
allow creation of constant raster with different raster data types original commit by nyalldawson unfortunately this naughty coder did not write a description
| 1
|
894
| 3,356,849,365
|
IssuesEvent
|
2015-11-18 22:15:10
|
pwittchen/kirai
|
https://api.github.com/repos/pwittchen/kirai
|
closed
|
Release 1.3.0
|
release process
|
**Initial release notes**:
- `Piece` is now an abstract class
- created `HtmlPiece` extending Piece `class`, which uses `HtmlSyntax` class
- updated documentation and tests
**Things to do**:
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update JavaDoc on gh-pages
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
|
1.0
|
Release 1.3.0 - **Initial release notes**:
- `Piece` is now an abstract class
- created `HtmlPiece` extending Piece `class`, which uses `HtmlSyntax` class
- updated documentation and tests
**Things to do**:
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update JavaDoc on gh-pages
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
|
process
|
release initial release notes piece is now an abstract class created htmlpiece extending piece class which uses htmlsyntax class updated documentation and tests things to do bump library version upload archives to maven central close and release artifact on maven central update javadoc on gh pages update changelog md after maven sync bump library version in readme md create new github release
| 1
|
114,880
| 11,859,755,602
|
IssuesEvent
|
2020-03-25 13:53:22
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
Change "Call Centers" to "Contact Centers" in GitHub Repo
|
analytics-insights contact-center content-ia-team documentation-support
|
## User Story or Problem Statement
How might we maintain accuracy and consistency for contact center documentation?
## Goal
Change "Call Centers" to "Contact Centers" on all GitHub documentation.
## Tasks
- [x] Bill to run report identifying all instances of `call center` or `call centers` in `va.gov-team` and `va.gov-team-sensitive`
## Acceptance Criteria
- [x] Identify GitHub Markdown files
- [ ] Identify GitHub Labels
---
|
1.0
|
Change "Call Centers" to "Contact Centers" in GitHub Repo - ## User Story or Problem Statement
How might we maintain accuracy and consistency for contact center documentation?
## Goal
Change "Call Centers" to "Contact Centers" on all GitHub documentation.
## Tasks
- [x] Bill to run report identifying all instances of `call center` or `call centers` in `va.gov-team` and `va.gov-team-sensitive`
## Acceptance Criteria
- [x] Identify GitHub Markdown files
- [ ] Identify GitHub Labels
---
|
non_process
|
change call centers to contact centers in github repo user story or problem statement how might we maintain accuracy and consistency for contact center documentation goal change call centers to contact centers on all github documentation tasks bill to run report identifying all instances of call center or call centers in va gov team and va gov team sensitive acceptance criteria identify github markdown files identify github labels
| 0
|
8,650
| 11,790,041,449
|
IssuesEvent
|
2020-03-17 18:13:08
|
mne-tools/mne-python
|
https://api.github.com/repos/mne-tools/mne-python
|
closed
|
mne.preprocessing.find_bad_channels_maxwell()'s crosstalk_file kwarg cannot be pathlib.Path
|
BUG Preprocessing
|
#### Describe the bug
`mne.preprocessing.find_bad_channels_maxwell()` features a `crosstalk_file` kwarg. Unlike in many (all?) other places in MNE, this argument cannot be a `pathlib.Path` object, and is required to be a string.
#### Steps to reproduce
```python
import pathlib
import mne
sample_data_folder = pathlib.Path(mne.datasets.sample.data_path())
sample_data_raw_file = (sample_data_folder / 'MEG' / 'sample' /
'sample_audvis_raw.fif')
fine_cal_file = sample_data_folder / 'SSS' / 'sss_cal_mgh.dat'
crosstalk_file = sample_data_folder / 'SSS' / 'ct_sparse_mgh.fif'
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
mne.preprocessing.find_bad_channels_maxwell(raw, cross_talk=crosstalk_file,
calibration=fine_cal_file)
```
#### Expected results
```
Scanning for bad channels in 12 intervals (5.0 sec) ...
Bad MEG channels being reconstructed: ['MEG 2443']
Processing 204 gradiometers and 102 magnetometers
Low-pass filtering data at 40.0 Hz
Static bad channels: []
[done]
[]
```
#### Actual results
```python
~/Development/mne-python/mne/preprocessing/maxwell.py in _maxwell_filter(***failed resolving arguments***)
316 #
317 if cross_talk is not None:
--> 318 sss_ctc = _read_ctc(cross_talk)
319 ctc_chs = sss_ctc['proj_items_chs']
320 meg_ch_names = [info['ch_names'][p] for p in meg_picks]
~/Development/mne-python/mne/io/proc_history.py in _read_ctc(fname)
167 """Read cross-talk correction matrix."""
168 if not isinstance(fname, str) or not op.isfile(fname):
--> 169 raise ValueError('fname must be a file that exists, not %s' % fname)
170 f, tree, _ = fiff_open(fname)
171 with f as fid:
ValueError: fname must be a file that exists, not /Users/hoechenberger/mne_data/MNE-sample-data/SSS/ct_sparse_mgh.fif
```
#### Additional information
It works as expected when converting `crosstalk_file` to a string before passing it to `find_bad_channels_maxwell()`.
|
1.0
|
mne.preprocessing.find_bad_channels_maxwell()'s crosstalk_file kwarg cannot be pathlib.Path - #### Describe the bug
`mne.preprocessing.find_bad_channels_maxwell()` features a `crosstalk_file` kwarg. Unlike in many (all?) other places in MNE, this argument cannot be a `pathlib.Path` object, and is required to be a string.
#### Steps to reproduce
```python
import pathlib
import mne
sample_data_folder = pathlib.Path(mne.datasets.sample.data_path())
sample_data_raw_file = (sample_data_folder / 'MEG' / 'sample' /
'sample_audvis_raw.fif')
fine_cal_file = sample_data_folder / 'SSS' / 'sss_cal_mgh.dat'
crosstalk_file = sample_data_folder / 'SSS' / 'ct_sparse_mgh.fif'
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
mne.preprocessing.find_bad_channels_maxwell(raw, cross_talk=crosstalk_file,
calibration=fine_cal_file)
```
#### Expected results
```
Scanning for bad channels in 12 intervals (5.0 sec) ...
Bad MEG channels being reconstructed: ['MEG 2443']
Processing 204 gradiometers and 102 magnetometers
Low-pass filtering data at 40.0 Hz
Static bad channels: []
[done]
[]
```
#### Actual results
```python
~/Development/mne-python/mne/preprocessing/maxwell.py in _maxwell_filter(***failed resolving arguments***)
316 #
317 if cross_talk is not None:
--> 318 sss_ctc = _read_ctc(cross_talk)
319 ctc_chs = sss_ctc['proj_items_chs']
320 meg_ch_names = [info['ch_names'][p] for p in meg_picks]
~/Development/mne-python/mne/io/proc_history.py in _read_ctc(fname)
167 """Read cross-talk correction matrix."""
168 if not isinstance(fname, str) or not op.isfile(fname):
--> 169 raise ValueError('fname must be a file that exists, not %s' % fname)
170 f, tree, _ = fiff_open(fname)
171 with f as fid:
ValueError: fname must be a file that exists, not /Users/hoechenberger/mne_data/MNE-sample-data/SSS/ct_sparse_mgh.fif
```
#### Additional information
It works as expected when converting `crosstalk_file` to a string before passing it to `find_bad_channels_maxwell()`.
|
process
|
mne preprocessing find bad channels maxwell s crosstalk file kwarg cannot be pathlib path describe the bug mne preprocessing find bad channels maxwell features a crosstalk file kwarg unlike in many all other places in mne this argument cannot be a pathlib path object and is required to be a string steps to reproduce python import pathlib import mne sample data folder pathlib path mne datasets sample data path sample data raw file sample data folder meg sample sample audvis raw fif fine cal file sample data folder sss sss cal mgh dat crosstalk file sample data folder sss ct sparse mgh fif raw mne io read raw fif sample data raw file verbose false raw crop tmax load data mne preprocessing find bad channels maxwell raw cross talk crosstalk file calibration fine cal file expected results scanning for bad channels in intervals sec bad meg channels being reconstructed processing gradiometers and magnetometers low pass filtering data at hz static bad channels actual results python development mne python mne preprocessing maxwell py in maxwell filter failed resolving arguments if cross talk is not none sss ctc read ctc cross talk ctc chs sss ctc meg ch names for p in meg picks development mne python mne io proc history py in read ctc fname read cross talk correction matrix if not isinstance fname str or not op isfile fname raise valueerror fname must be a file that exists not s fname f tree fiff open fname with f as fid valueerror fname must be a file that exists not users hoechenberger mne data mne sample data sss ct sparse mgh fif additional information it works as expected when converting crosstalk file to a string before passing it to find bad channels maxwell
| 1
|
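The mne-python report above already names the caller-side workaround (convert the `pathlib.Path` to a string before the call). A minimal sketch of that workaround, reusing the variables from the repro snippet in the issue:

```python
# Workaround described in the issue: coerce pathlib.Path to str before calling.
mne.preprocessing.find_bad_channels_maxwell(
    raw,
    cross_talk=str(crosstalk_file),
    calibration=str(fine_cal_file),
)
```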
330,152
| 24,249,048,953
|
IssuesEvent
|
2022-09-27 12:57:25
|
numbbo/coco
|
https://api.github.com/repos/numbbo/coco
|
closed
|
`coco-doc` repository home
|
question Documentation
|
see https://numbbo.github.io/coco-doc, points to a rather outdated version of the paper and and outdated reference. It seems we should remove this html version all together and just point to the final pdf?
|
1.0
|
`coco-doc` repository home - see https://numbbo.github.io/coco-doc, points to a rather outdated version of the paper and and outdated reference. It seems we should remove this html version all together and just point to the final pdf?
|
non_process
|
coco doc repository home see points to a rather outdated version of the paper and and outdated reference it seems we should remove this html version all together and just point to the final pdf
| 0
|
13,296
| 15,769,985,919
|
IssuesEvent
|
2021-03-31 18:56:34
|
w3c/aria-at
|
https://api.github.com/repos/w3c/aria-at
|
opened
|
Test review coordination
|
Agenda+Test Writing process tests
|
Just providing a placeholder/kickoff issue for an item on the next CG meeting agenda (April 1). Need to discuss how test reviews will be coordinated once test plans are merged, how to find/reach out to volunteers, etc.
|
1.0
|
Test review coordination - Just providing a placeholder/kickoff issue for an item on the next CG meeting agenda (April 1). Need to discuss how test reviews will be coordinated once test plans are merged, how to find/reach out to volunteers, etc.
|
process
|
test review coordination just providing a placeholder kickoff issue for an item on the next cg meeting agenda april need to discuss how test reviews will be coordinated once test plans are merged how to find reach out to volunteers etc
| 1
|
44,093
| 13,048,238,998
|
IssuesEvent
|
2020-07-29 12:12:01
|
jgeraigery/imhotep
|
https://api.github.com/repos/jgeraigery/imhotep
|
opened
|
CVE-2019-20444 (High) detected in netty-codec-http-4.1.17.Final.jar
|
security vulnerability
|
## CVE-2019-20444 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.17.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p>
<p>Path to dependency file: /tmp/ws-scm/imhotep/imhotep-server/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-codec-http/4.1.17.Final/netty-codec-http-4.1.17.Final.jar</p>
<p>
Dependency Hierarchy:
- aws-java-sdk-1.11.262.jar (Root Library)
- aws-java-sdk-kinesisvideo-1.11.262.jar
- :x: **netty-codec-http-4.1.17.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/imhotep/commit/4432df39a5fc652b4512ad35a6db8f1a3776b771">4432df39a5fc652b4512ad35a6db8f1a3776b771</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
HttpObjectDecoder.java in Netty before 4.1.44 allows an HTTP header that lacks a colon, which might be interpreted as a separate header with an incorrect syntax, or might be interpreted as an "invalid fold."
<p>Publish Date: 2020-01-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20444>CVE-2019-20444</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20444">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20444</a></p>
<p>Release Date: 2020-01-29</p>
<p>Fix Resolution: io.netty:netty-codec-http:4.1.44</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec-http","packageVersion":"4.1.17.Final","isTransitiveDependency":true,"dependencyTree":"com.amazonaws:aws-java-sdk:1.11.262;com.amazonaws:aws-java-sdk-kinesisvideo:1.11.262;io.netty:netty-codec-http:4.1.17.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-codec-http:4.1.44"}],"vulnerabilityIdentifier":"CVE-2019-20444","vulnerabilityDetails":"HttpObjectDecoder.java in Netty before 4.1.44 allows an HTTP header that lacks a colon, which might be interpreted as a separate header with an incorrect syntax, or might be interpreted as an \"invalid fold.\"","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20444","cvss3Severity":"high","cvss3Score":"9.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2019-20444 (High) detected in netty-codec-http-4.1.17.Final.jar - ## CVE-2019-20444 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.17.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p>
<p>Path to dependency file: /tmp/ws-scm/imhotep/imhotep-server/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-codec-http/4.1.17.Final/netty-codec-http-4.1.17.Final.jar</p>
<p>
Dependency Hierarchy:
- aws-java-sdk-1.11.262.jar (Root Library)
- aws-java-sdk-kinesisvideo-1.11.262.jar
- :x: **netty-codec-http-4.1.17.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/imhotep/commit/4432df39a5fc652b4512ad35a6db8f1a3776b771">4432df39a5fc652b4512ad35a6db8f1a3776b771</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
HttpObjectDecoder.java in Netty before 4.1.44 allows an HTTP header that lacks a colon, which might be interpreted as a separate header with an incorrect syntax, or might be interpreted as an "invalid fold."
<p>Publish Date: 2020-01-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20444>CVE-2019-20444</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20444">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20444</a></p>
<p>Release Date: 2020-01-29</p>
<p>Fix Resolution: io.netty:netty-codec-http:4.1.44</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec-http","packageVersion":"4.1.17.Final","isTransitiveDependency":true,"dependencyTree":"com.amazonaws:aws-java-sdk:1.11.262;com.amazonaws:aws-java-sdk-kinesisvideo:1.11.262;io.netty:netty-codec-http:4.1.17.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-codec-http:4.1.44"}],"vulnerabilityIdentifier":"CVE-2019-20444","vulnerabilityDetails":"HttpObjectDecoder.java in Netty before 4.1.44 allows an HTTP header that lacks a colon, which might be interpreted as a separate header with an incorrect syntax, or might be interpreted as an \"invalid fold.\"","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20444","cvss3Severity":"high","cvss3Score":"9.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in netty codec http final jar cve high severity vulnerability vulnerable library netty codec http final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients library home page a href path to dependency file tmp ws scm imhotep imhotep server pom xml path to vulnerable library home wss scanner repository io netty netty codec http final netty codec http final jar dependency hierarchy aws java sdk jar root library aws java sdk kinesisvideo jar x netty codec http final jar vulnerable library found in head commit a href vulnerability details httpobjectdecoder java in netty before allows an http header that lacks a colon which might be interpreted as a separate header with an incorrect syntax or might be interpreted as an invalid fold publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty codec http isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails httpobjectdecoder java in netty before allows an http header that lacks a colon which might be interpreted as a separate header with an incorrect syntax or might be interpreted as an invalid fold vulnerabilityurl
| 0
|
18,001
| 24,019,132,843
|
IssuesEvent
|
2022-09-15 05:44:10
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
opened
|
Advice provider refactoring
|
assembly processor v0.3
|
After #393 is done, we will have refactored all IO operations except for the ones which deal with the advice tape. These operations are: `push.adv.n` and `loadw.adv`.
First, we should probably rename these operations to be consistent with the new naming conventions. Specifically:
* `loadw_adv` should be `adv_loadw`.
* Renaming `push_adv.n` is a bit more tricky since we use only load/store verbs in other place, but for the lack of a better option, I think we can replace it with `adv_push.n` (unless someone has better suggestions).
Second, a more fundamental issue with the advice provider is that we have only a single advice tape. For simple programs this tape can be pre-loaded with some values, and then the program can read these values one-by-one. However, for more complex program, pre-loading the tape would be far from trivial.
Imagine there is a program which needs to "un-hash" one of two values based on some condition which it needs to compute dynamically. To know which one of pre-images need to be put on the advice tape, we'd first need to execute the program up to the point the condition is computed, then once we know the condition, we could initialize the tape appropriately, and only after that, we could execute the program to the end. Needless to say that this approach is not workable for even moderately complicated programs.
One relatively simple way to address the above issue is to have many advice tapes. That is, we could replace a single tape in the advice provider with a key-value map where a key is some tape identifier and the value is the tape itself. This map would always have at least one tape (i.e., for key `0`), but could be initialized with any number of tapes under various keys.
To identify which tape to read values from, we could introduce a concept of an _active tape_. That is, only one tape can be active at a given time, and `adv_loadw` and `adv_push` instructions would always read from the active advice tape.
To change the active tape we could use a decorator - maybe something like `adv_config` (other name suggestions are welcome). The effect of this decorator would be set the active tape to the tape with the key equal to the top 4 items on the stack. The reason for using 4 elements as the key is because we frequently want to "un-hash" some value, and it is convenient to be able to look up hash pre-image by its hash.
A quick example. Let's say we have a Merkle tree. We want to get a leaf at position 1 from the tree, and then get hash pre-image of this leaf. Denoting leaf at position 1 as $n$, we assume that pre-image of $n$ is a tuple $a$, $b$. That is $n = hash(a, b)$. The program to do this could look like so:
```
begin
push.1.2.3.4 # push Merkle root of the tree onto the stack
push.1 # push node index onto the stack
push.16 # push the depth of the tree onto the stack
mtree.get # load the node value onto the stack
adv_config # set the active tape to the tape with key equal to the node value
swapw
adv_loadw # load the first 4 elements (a) from the advice tape onto the stack
adv_push.4 # load another 4 elements (b) from the advice tape onto the stack
repeat.8
dup.7 # make a copy of the top 8 elements
end
rphash # compute hash(a, b)
swapw
swapw.3
eqw # make hash(a, b) = n
assert
swapw # at this point, the stack will look like [b, a, ...]
end
```
To run the above program, we need to initialize advice provider with a map of tapes where one of the entries is `(n, [a, b])` (i.e., the tape containing values `[a, b]` is under the key `n`.
Side note: the above program is rather cumbersome and we probably would want something better than that. My hope that methodology discussed in #336 will make it much more efficient, but having "tape maps" as described above would be very useful there too.
|
1.0
|
Advice provider refactoring - After #393 is done, we will have refactored all IO operations except for the ones which deal with the advice tape. These operations are: `push.adv.n` and `loadw.adv`.
First, we should probably rename these operations to be consistent with the new naming conventions. Specifically:
* `loadw_adv` should be `adv_loadw`.
* Renaming `push_adv.n` is a bit more tricky since we use only load/store verbs in other place, but for the lack of a better option, I think we can replace it with `adv_push.n` (unless someone has better suggestions).
Second, a more fundamental issue with the advice provider is that we have only a single advice tape. For simple programs this tape can be pre-loaded with some values, and then the program can read these values one-by-one. However, for more complex program, pre-loading the tape would be far from trivial.
Imagine there is a program which needs to "un-hash" one of two values based on some condition which it needs to compute dynamically. To know which one of pre-images need to be put on the advice tape, we'd first need to execute the program up to the point the condition is computed, then once we know the condition, we could initialize the tape appropriately, and only after that, we could execute the program to the end. Needless to say that this approach is not workable for even moderately complicated programs.
One relatively simple way to address the above issue is to have many advice tapes. That is, we could replace a single tape in the advice provider with a key-value map where a key is some tape identifier and the value is the tape itself. This map would always have at least one tape (i.e., for key `0`), but could be initialized with any number of tapes under various keys.
To identify which tape to read values from, we could introduce a concept of an _active tape_. That is, only one tape can be active at a given time, and `adv_loadw` and `adv_push` instructions would always read from the active advice tape.
To change the active tape we could use a decorator - maybe something like `adv_config` (other name suggestions are welcome). The effect of this decorator would be set the active tape to the tape with the key equal to the top 4 items on the stack. The reason for using 4 elements as the key is because we frequently want to "un-hash" some value, and it is convenient to be able to look up hash pre-image by its hash.
A quick example. Let's say we have a Merkle tree. We want to get a leaf at position 1 from the tree, and then get hash pre-image of this leaf. Denoting leaf at position 1 as $n$, we assume that pre-image of $n$ is a tuple $a$, $b$. That is $n = hash(a, b)$. The program to do this could look like so:
```
begin
push.1.2.3.4 # push Merkle root of the tree onto the stack
push.1 # push node index onto the stack
push.16 # push the depth of the tree onto the stack
mtree.get # load the node value onto the stack
adv_config # set the active tape to the tape with key equal to the node value
swapw
adv_loadw # load the first 4 elements (a) from the advice tape onto the stack
adv_push.4 # load another 4 elements (b) from the advice tape onto the stack
repeat.8
dup.7 # make a copy of the top 8 elements
end
rphash # compute hash(a, b)
swapw
swapw.3
eqw # make hash(a, b) = n
assert
swapw # at this point, the stack will look like [b, a, ...]
end
```
To run the above program, we need to initialize advice provider with a map of tapes where one of the entries is `(n, [a, b])` (i.e., the tape containing values `[a, b]` is under the key `n`.
Side note: the above program is rather cumbersome and we probably would want something better than that. My hope that methodology discussed in #336 will make it much more efficient, but having "tape maps" as described above would be very useful there too.
|
process
|
advice provider refactoring after is done we will have refactored all io operations except for the ones which deal with the advice tape these operations are push adv n and loadw adv first we should probably rename these operations to be consistent with the new naming conventions specifically loadw adv should be adv loadw renaming push adv n is a bit more tricky since we use only load store verbs in other place but for the lack of a better option i think we can replace it with adv push n unless someone has better suggestions second a more fundamental issue with the advice provider is that we have only a single advice tape for simple programs this tape can be pre loaded with some values and then the program can read these values one by one however for more complex program pre loading the tape would be far from trivial imagine there is a program which needs to un hash one of two values based on some condition which it needs to compute dynamically to know which one of pre images need to be put on the advice tape we d first need to execute the program up to the point the condition is computed then once we know the condition we could initialize the tape appropriately and only after that we could execute the program to the end needless to say that this approach is not workable for even moderately complicated programs one relatively simple way to address the above issue is to have many advice tapes that is we could replace a single tape in the advice provider with a key value map where a key is some tape identifier and the value is the tape itself this map would always have at least one tape i e for key but could be initialized with any number of tapes under various keys to identify which tape to read values from we could introduce a concept of an active tape that is only one tape can be active at a given time and adv loadw and adv push instructions would always read from the active advice tape to change the active tape we could use a decorator maybe something like adv config other name suggestions are welcome the effect of this decorator would be set the active tape to the tape with the key equal to the top items on the stack the reason for using elements as the key is because we frequently want to un hash some value and it is convenient to be able to look up hash pre image by its hash a quick example let s say we have a merkle tree we want to get a leaf at position from the tree and then get hash pre image of this leaf denoting leaf at position as n we assume that pre image of n is a tuple a b that is n hash a b the program to do this could look like so begin push push merkle root of the tree onto the stack push push node index onto the stack push push the depth of the tree onto the stack mtree get load the node value onto the stack adv config set the active tape to the tape with key equal to the node value swapw adv loadw load the first elements a from the advice tape onto the stack adv push load another elements b from the advice tape onto the stack repeat dup make a copy of the top elements end rphash compute hash a b swapw swapw eqw make hash a b n assert swapw at this point the stack will look like end to run the above program we need to initialize advice provider with a map of tapes where one of the entries is n i e the tape containing values is under the key n side note the above program is rather cumbersome and we probably would want something better than that my hope that methodology discussed in will make it much more efficient but having tape maps as described above would be very useful there too
| 1
|
135,055
| 10,961,123,420
|
IssuesEvent
|
2019-11-27 14:50:39
|
aces/Loris
|
https://api.github.com/repos/aces/Loris
|
closed
|
[My Preferences] Email validation rules differ between user account and my preferences
|
22.0.0 TESTING Bug PR sent
|
I was able to create a user with an email address set to 'nicolasbrossard.mni(test)@gmail.com'. This address is valid in the user account module but not on the my preferences page.
|
1.0
|
[My Preferences] Email validation rules differ between user account and my preferences - I was able to create a user with an email address set to 'nicolasbrossard.mni(test)@gmail.com'. This address is valid in the user account module but not on the my preferences page.
|
non_process
|
email validation rules differ between user account and my preferences i was able to create a user with an email address set to nicolasbrossard mni test gmail com this address is valid in the user account module but not on the my preferences page
| 0
|
11,608
| 14,478,938,655
|
IssuesEvent
|
2020-12-10 09:08:17
|
decidim/decidim
|
https://api.github.com/repos/decidim/decidim
|
closed
|
Show that a Process belongs to a Process Group
|
contract: process-groups
|
Ref.: PG05
**Is your feature request related to a problem? Please describe.**
As a visitor when I go to a Process that belongs to a Process Group there isn't any indication that it's like that.
**Describe the solution you'd like**
Add in each process that is part of the process group, an element that identifies that it is part of a specific process group.
This requires design.
**Describe alternatives you've considered**
To have it as it is now but this doesn't allow us to discover similar/related Participatory Processes. For instance, when I visit a process for Superilla of Horta Guinardo, I'd also like to discover the rest of the Superilles Processes (that are on the PG)
To have it as a related Process card like.
To have a breadcrumb.
**Additional context**
N/A
**Does this issue could impact on users private data?**
No
**Acceptance criteria**
- [x] As a visitor I have a way to know which process group that process is part of
- [x] As a visitor I have access to the process group's landing page from a child process.
|
1.0
|
Show that a Process belongs to a Process Group - Ref.: PG05
**Is your feature request related to a problem? Please describe.**
As a visitor when I go to a Process that belongs to a Process Group there isn't any indication that it's like that.
**Describe the solution you'd like**
Add in each process that is part of the process group, an element that identifies that it is part of a specific process group.
This requires design.
**Describe alternatives you've considered**
To have it as it is now but this doesn't allow us to discover similar/related Participatory Processes. For instance, when I visit a process for Superilla of Horta Guinardo, I'd also like to discover the rest of the Superilles Processes (that are on the PG)
To have it as a related Process card like.
To have a breadcrumb.
**Additional context**
N/A
**Does this issue could impact on users private data?**
No
**Acceptance criteria**
- [x] As a visitor I have a way to know which process group that process is part of
- [x] As a visitor I have access to the process group's landing page from a child process.
|
process
|
show that a process belongs to a process group ref is your feature request related to a problem please describe as a visitor when i go to a process that belongs to a process group there isn t any indication that it s like that describe the solution you d like add in each process that is part of the process group an element that identifies that it is part of a specific process group this requires design describe alternatives you ve considered to have it as it is now but this doesn t allow us to discover similar related participatory processes for instance when i visit a process for superilla of horta guinardo i d also like to discover the rest of the superilles processes that are on the pg to have it as a related process card like to have a breadcrumb additional context n a does this issue could impact on users private data no acceptance criteria as a visitor i have a way to know which process group that process is part of as a visitor i have access to the process group s landing page from a child process
| 1
|
334,697
| 29,936,660,264
|
IssuesEvent
|
2023-06-22 13:11:17
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
sql: TestQueryCache failed
|
C-test-failure O-robot T-sql-queries branch-release-23.1
|
sql.TestQueryCache [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/10630283?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/10630283?buildTab=artifacts#/) on release-23.1 @ [3532484aa08c3acf6eb5770f1469af09f4ba704d](https://github.com/cockroachdb/cockroach/commits/3532484aa08c3acf6eb5770f1469af09f4ba704d):
```
Goroutine 1372554 (running) created at:
testing.(*T).Run()
GOROOT/src/testing/testing.go:1493 +0x75d
github.com/cockroachdb/cockroach/pkg/sql.TestQueryCache.func1()
github.com/cockroachdb/cockroach/pkg/sql/plan_opt_test.go:194 +0xa4
testing.tRunner()
GOROOT/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
GOROOT/src/testing/testing.go:1493 +0x47
Goroutine 1372557 (running) created at:
testing.(*T).Run()
GOROOT/src/testing/testing.go:1493 +0x75d
github.com/cockroachdb/cockroach/pkg/sql.TestQueryCache.func1()
github.com/cockroachdb/cockroach/pkg/sql/plan_opt_test.go:282 +0x104
testing.tRunner()
GOROOT/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
GOROOT/src/testing/testing.go:1493 +0x47
==================
=== RUN TestQueryCache/group/exec-and-prepare
=== PAUSE TestQueryCache/group/exec-and-prepare
=== RUN TestQueryCache/group/parallel-prepare
=== PAUSE TestQueryCache/group/parallel-prepare
=== CONT TestQueryCache/group/parallel-prepare
=== RUN TestQueryCache/group/relative-timestamp
=== PAUSE TestQueryCache/group/relative-timestamp
=== CONT TestQueryCache/group/relative-timestamp
=== RUN TestQueryCache/group/schemachange
=== PAUSE TestQueryCache/group/schemachange
=== RUN TestQueryCache/group/schemachange-parallel
=== PAUSE TestQueryCache/group/schemachange-parallel
=== CONT TestQueryCache/group/schemachange-parallel
=== RUN TestQueryCache/group/simple
=== PAUSE TestQueryCache/group/simple
=== RUN TestQueryCache/group/multidb
=== PAUSE TestQueryCache/group/multidb
=== CONT TestQueryCache/group/multidb
=== RUN TestQueryCache/group/multidb-prepare
=== PAUSE TestQueryCache/group/multidb-prepare
=== CONT TestQueryCache/group/multidb-prepare
=== RUN TestQueryCache/group/parallel
=== PAUSE TestQueryCache/group/parallel
=== CONT TestQueryCache/group/parallel
=== RUN TestQueryCache/group/prepare-hints
=== PAUSE TestQueryCache/group/prepare-hints
=== RUN TestQueryCache/group/statschange
=== PAUSE TestQueryCache/group/statschange
=== RUN TestQueryCache/group
```
<p>Parameters: <code>TAGS=bazel,gss,race</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestQueryCache.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-28984
|
1.0
|
sql: TestQueryCache failed - sql.TestQueryCache [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/10630283?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/10630283?buildTab=artifacts#/) on release-23.1 @ [3532484aa08c3acf6eb5770f1469af09f4ba704d](https://github.com/cockroachdb/cockroach/commits/3532484aa08c3acf6eb5770f1469af09f4ba704d):
```
Goroutine 1372554 (running) created at:
testing.(*T).Run()
GOROOT/src/testing/testing.go:1493 +0x75d
github.com/cockroachdb/cockroach/pkg/sql.TestQueryCache.func1()
github.com/cockroachdb/cockroach/pkg/sql/plan_opt_test.go:194 +0xa4
testing.tRunner()
GOROOT/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
GOROOT/src/testing/testing.go:1493 +0x47
Goroutine 1372557 (running) created at:
testing.(*T).Run()
GOROOT/src/testing/testing.go:1493 +0x75d
github.com/cockroachdb/cockroach/pkg/sql.TestQueryCache.func1()
github.com/cockroachdb/cockroach/pkg/sql/plan_opt_test.go:282 +0x104
testing.tRunner()
GOROOT/src/testing/testing.go:1446 +0x216
testing.(*T).Run.func1()
GOROOT/src/testing/testing.go:1493 +0x47
==================
=== RUN TestQueryCache/group/exec-and-prepare
=== PAUSE TestQueryCache/group/exec-and-prepare
=== RUN TestQueryCache/group/parallel-prepare
=== PAUSE TestQueryCache/group/parallel-prepare
=== CONT TestQueryCache/group/parallel-prepare
=== RUN TestQueryCache/group/relative-timestamp
=== PAUSE TestQueryCache/group/relative-timestamp
=== CONT TestQueryCache/group/relative-timestamp
=== RUN TestQueryCache/group/schemachange
=== PAUSE TestQueryCache/group/schemachange
=== RUN TestQueryCache/group/schemachange-parallel
=== PAUSE TestQueryCache/group/schemachange-parallel
=== CONT TestQueryCache/group/schemachange-parallel
=== RUN TestQueryCache/group/simple
=== PAUSE TestQueryCache/group/simple
=== RUN TestQueryCache/group/multidb
=== PAUSE TestQueryCache/group/multidb
=== CONT TestQueryCache/group/multidb
=== RUN TestQueryCache/group/multidb-prepare
=== PAUSE TestQueryCache/group/multidb-prepare
=== CONT TestQueryCache/group/multidb-prepare
=== RUN TestQueryCache/group/parallel
=== PAUSE TestQueryCache/group/parallel
=== CONT TestQueryCache/group/parallel
=== RUN TestQueryCache/group/prepare-hints
=== PAUSE TestQueryCache/group/prepare-hints
=== RUN TestQueryCache/group/statschange
=== PAUSE TestQueryCache/group/statschange
=== RUN TestQueryCache/group
```
<p>Parameters: <code>TAGS=bazel,gss,race</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestQueryCache.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-28984
|
non_process
|
sql testquerycache failed sql testquerycache with on release goroutine running created at testing t run goroot src testing testing go github com cockroachdb cockroach pkg sql testquerycache github com cockroachdb cockroach pkg sql plan opt test go testing trunner goroot src testing testing go testing t run goroot src testing testing go goroutine running created at testing t run goroot src testing testing go github com cockroachdb cockroach pkg sql testquerycache github com cockroachdb cockroach pkg sql plan opt test go testing trunner goroot src testing testing go testing t run goroot src testing testing go run testquerycache group exec and prepare pause testquerycache group exec and prepare run testquerycache group parallel prepare pause testquerycache group parallel prepare cont testquerycache group parallel prepare run testquerycache group relative timestamp pause testquerycache group relative timestamp cont testquerycache group relative timestamp run testquerycache group schemachange pause testquerycache group schemachange run testquerycache group schemachange parallel pause testquerycache group schemachange parallel cont testquerycache group schemachange parallel run testquerycache group simple pause testquerycache group simple run testquerycache group multidb pause testquerycache group multidb cont testquerycache group multidb run testquerycache group multidb prepare pause testquerycache group multidb prepare cont testquerycache group multidb prepare run testquerycache group parallel pause testquerycache group parallel cont testquerycache group parallel run testquerycache group prepare hints pause testquerycache group prepare hints run testquerycache group statschange pause testquerycache group statschange run testquerycache group parameters tags bazel gss race help see also cc cockroachdb sql queries jira issue crdb
| 0
|
13,205
| 15,649,252,113
|
IssuesEvent
|
2021-03-23 07:16:50
|
bitpal/bitpal_umbrella
|
https://api.github.com/repos/bitpal/bitpal_umbrella
|
opened
|
Need status updates from Flowee
|
Payment processor enhancement
|
Especially annoying when restarting a node if Flowee's still syncing.
|
1.0
|
Need status updates from Flowee - Especially annoying when restarting a node if Flowee's still syncing.
|
process
|
need status updates from flowee especially annoying when restarting a node if flowee s still syncing
| 1
|
499,272
| 14,443,855,159
|
IssuesEvent
|
2020-12-07 20:18:25
|
trufflesuite/ganache-core
|
https://api.github.com/repos/trufflesuite/ganache-core
|
closed
|
There should be a way to test overwriting pending transactions
|
bug correctness/consistency priority-high
|
With the current functionality of Ganache Core, if you try to broadcast a transaction again with the same nonce but a higher fee, you will receive the following error: `the tx doesn't have the correct nonce. account has nonce of: 1 tx has nonce of: 0`.
This is because Ganache verifies the nonce itself before running the transaction against the VM: https://github.com/trufflesuite/ganache-core/blob/283cbeca1cf9f9c598845a00baaa331afc621596/lib/statemanager.js#L965
`ganache-core` does not currently support a `mempool`, although this WIP refactor by @davidmurdoch does support a mempool to some degree https://github.com/trufflesuite/ganache-core/blob/a4e8989efc301b28d094082297dfbb1c4b25a744/src/ledgers/ethereum/components/transaction-pool.ts#L80-L87
A feature like this is incredibly important for DApp developers. Especially ones creating time sensitive DApps that need to ensure parties can overwrite pending transactions.
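To make the desired behavior concrete, here is a minimal web3.py sketch (not from the original report; the endpoint, accounts, and fee values are assumptions, and it uses Python rather than the project's JS) of the replacement pattern that currently trips the nonce check:
```python
# Hypothetical repro sketch, assuming a local node at the default RPC port.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
sender, receiver = w3.eth.accounts[0], w3.eth.accounts[1]
nonce = w3.eth.get_transaction_count(sender, "pending")

# Low-fee transaction that would sit in a real mempool.
w3.eth.send_transaction({
    "from": sender, "to": receiver,
    "value": w3.to_wei(1, "ether"),
    "nonce": nonce,
    "gasPrice": w3.to_wei(1, "gwei"),
})

# Same nonce, higher fee: a node with a mempool treats this as a
# replacement; ganache-core instead raises the nonce error quoted above.
w3.eth.send_transaction({
    "from": sender, "to": receiver,
    "value": w3.to_wei(1, "ether"),
    "nonce": nonce,
    "gasPrice": w3.to_wei(10, "gwei"),
})
```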
|
1.0
|
There should be a way to test overwriting pending transactions - With the current functionality of Ganache Core, if you try to broadcast a transaction again with the same nonce but a higher fee, you will receive the following error: `the tx doesn't have the correct nonce. account has nonce of: 1 tx has nonce of: 0`.
This is because Ganache verifies the nonce itself before running the transaction against the VM: https://github.com/trufflesuite/ganache-core/blob/283cbeca1cf9f9c598845a00baaa331afc621596/lib/statemanager.js#L965
`ganache-core` does not currently support a `mempool`, although this WIP refactor by @davidmurdoch does support a mempool to some degree https://github.com/trufflesuite/ganache-core/blob/a4e8989efc301b28d094082297dfbb1c4b25a744/src/ledgers/ethereum/components/transaction-pool.ts#L80-L87
A feature like this is incredibly important for DApp developers. Especially ones creating time sensitive DApps that need to ensure parties can overwrite pending transactions.
|
non_process
|
there should be a way to test overwriting pending transactions with the current functionality of ganache core if you try to broadcast a transaction again with the same nonce but a higher fee you will receive the following error the tx doesn t have the correct nonce account has nonce of tx has nonce of this is because ganache verifies the nonce itself before running the transaction against the vm ganache core does not currently support a mempool although this wip refactor by davidmurdoch does support a mempool to some degree a feature like this is incredibly important for dapp developers especially ones creating time sensitive dapps that need to ensure parties can overwrite pending transactions
| 0
|
142,194
| 11,458,024,392
|
IssuesEvent
|
2020-02-07 01:48:24
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
[Flaky Test] Stage
|
kind/flake priority/critical-urgent sig/testing
|
**Which jobs are flaking**:
pull-kubernetes-e2e-gce
pull-kubernetes-e2e-gce-rbe
**Which test(s) are flaking**:
Stage
**Testgrid link**:
https://testgrid.k8s.io/presubmits-kubernetes-nonblocking#pull-kubernetes-e2e-gce-rbe&include-filter-by-regex=Overall%7CBuild%7CStage
https://testgrid.k8s.io/presubmits-kubernetes-blocking#pull-kubernetes-e2e-gce&include-filter-by-regex=Overall%7CBuild%7CStage
**Reason for failure**:
This isn't technically a test that's failing, it's part of the build->stage->up->test cycle used by our e2e jobs.
There are two main clusters of failures that I can tell. One is timeouts.
The other looks like:
```
- Hashing and copying public release artifacts to
gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-gce/v1.18.0-alpha.2.14+1b7738cc4ebdbe:
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
Server terminated abruptly (error code: 14, error message: '', log file: '/bazel-scratch/.cache/bazel/_bazel_prow/cae228f2a89ef5ee47c2085e441a3561/server/jvm.out')
Signal ERR caught!
Traceback (line function script):
208 main /home/prow/go/src/k8s.io/release/push-build.sh
Exiting...
```
Example -gce jobs:
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/85861/pull-kubernetes-e2e-gce/1219647945268269056 (#85861)
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/85846/pull-kubernetes-e2e-gce/1219646811342376960 (#85846)
https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/87422/pull-kubernetes-e2e-gce/1219612336432615426 (#87422)
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/87372/pull-kubernetes-e2e-gce/1219266675761745921 (#87372)
Example -gce-rbe jobs:
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/87435/pull-kubernetes-e2e-gce-rbe/1219718868767870977
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/87266/pull-kubernetes-e2e-gce-rbe/1219655997224652803
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/86408/pull-kubernetes-e2e-gce-rbe/1219620136827228160
(source gist https://gist.github.com/spiffxp/ed54d006e630b8fd8126ddfe67d9dc1e)
**Anything else we need to know**:
- Triage link for the job that actually reports to PRs: https://storage.googleapis.com/k8s-gubernator/triage/index.html?pr=1&job=gce%24&test=Stage
- [The line where the crash is happening]( https://github.com/kubernetes/release/blob/f59f6ba61ac5f225cde91ff0fc179a0a6f762841/lib/releaselib.sh#L1148) ends up running `bazel run //:push-build $gcs_stage $gcs_destination`
- Some googling for "bazel error code 14" leads me to believe that bazel (or its jvm) is getting killed due to OOM. [This job requests 6Gi of memory](https://github.com/kubernetes/test-infra/blob/b2471685eed6a7d063d7e1e19032282bb33679db/config/jobs/kubernetes/sig-cloud-provider/gcp/gcp-gce.yaml#L59-L61) which is a value that's been cargo culted around as far as I can tell. I was unable to find the decision that led to this value. I'm wondering if a bump to 8Gi of memory would help reduce occurrences of this flake.
- I'm referencing the -rbe jobs to point out that this happens even if we use RBE, but they're not blocking
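On the memory theory above, a sketch of the kind of bump being suggested, assuming the container resources stanza of the job config linked earlier (8Gi is the proposed experiment, not a verified fix):
```yaml
# Hypothetical change to the prow job's container resources, to rule out
# OOM kills of bazel's JVM during the stage step.
resources:
  requests:
    memory: "8Gi"  # previously 6Gi, a value that appears to be cargo-culted
```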
|
1.0
|
[Flaky Test] Stage - **Which jobs are flaking**:
pull-kubernetes-e2e-gce
pull-kubernetes-e2e-gce-rbe
**Which test(s) are flaking**:
Stage
**Testgrid link**:
https://testgrid.k8s.io/presubmits-kubernetes-nonblocking#pull-kubernetes-e2e-gce-rbe&include-filter-by-regex=Overall%7CBuild%7CStage
https://testgrid.k8s.io/presubmits-kubernetes-blocking#pull-kubernetes-e2e-gce&include-filter-by-regex=Overall%7CBuild%7CStage
**Reason for failure**:
This isn't technically a test that's failing, it's part of the build->stage->up->test cycle used by our e2e jobs.
There are two main clusters of failures that I can tell. One is timeouts.
The other looks like:
```
- Hashing and copying public release artifacts to
gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-gce/v1.18.0-alpha.2.14+1b7738cc4ebdbe:
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
Server terminated abruptly (error code: 14, error message: '', log file: '/bazel-scratch/.cache/bazel/_bazel_prow/cae228f2a89ef5ee47c2085e441a3561/server/jvm.out')
Signal ERR caught!
Traceback (line function script):
208 main /home/prow/go/src/k8s.io/release/push-build.sh
Exiting...
```
Example -gce jobs:
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/85861/pull-kubernetes-e2e-gce/1219647945268269056 (#85861)
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/85846/pull-kubernetes-e2e-gce/1219646811342376960 (#85846)
https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/87422/pull-kubernetes-e2e-gce/1219612336432615426 (#87422)
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/87372/pull-kubernetes-e2e-gce/1219266675761745921 (#87372)
Example -gce-rbe jobs:
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/87435/pull-kubernetes-e2e-gce-rbe/1219718868767870977
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/87266/pull-kubernetes-e2e-gce-rbe/1219655997224652803
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/86408/pull-kubernetes-e2e-gce-rbe/1219620136827228160
(source gist https://gist.github.com/spiffxp/ed54d006e630b8fd8126ddfe67d9dc1e)
**Anything else we need to know**:
- Triage link for the job that actually reports to PRs: https://storage.googleapis.com/k8s-gubernator/triage/index.html?pr=1&job=gce%24&test=Stage
- [The line where the crash is happening]( https://github.com/kubernetes/release/blob/f59f6ba61ac5f225cde91ff0fc179a0a6f762841/lib/releaselib.sh#L1148) ends up running `bazel run //:push-build $gcs_stage $gcs_destination`
- Some googling for "bazel error code 14" leads me to believe that bazel (or its jvm) is getting killed due to OOM. [This job requests 6Gi of memory](https://github.com/kubernetes/test-infra/blob/b2471685eed6a7d063d7e1e19032282bb33679db/config/jobs/kubernetes/sig-cloud-provider/gcp/gcp-gce.yaml#L59-L61) which is a value that's been cargo culted around as far as I can tell. I was unable to find the decision that led to this value. I'm wondering if a bump to 8Gi of memory would help reduce occurrences of this flake.
- I'm referencing the -rbe jobs to point out that this happens even if we use RBE, but they're not blocking
|
non_process
|
stage which jobs are flaking pull kubernetes gce pull kubernetes gce rbe which test s are flaking stage testgrid link reason for failure this isn t technically a test that s failing it s part of the build stage up test cycle used by our jobs there are two main clusters of failures that i can tell one is timeouts the other looks like hashing and copying public release artifacts to gs kubernetes release pull ci pull kubernetes gce alpha test tmpdir defined output root default is bazel scratch cache bazel and max idle secs default is server terminated abruptly error code error message log file bazel scratch cache bazel bazel prow server jvm out signal err caught traceback line function script main home prow go src io release push build sh exiting example gce jobs example gce rbe jobs source gist anything else we need to know triage link for the job that actually reports to prs ends up running bazel run push build gcs stage gcs destination some googling for bazel error code leads me to believe that bazel or its jvm is getting killed due to oom which is a value that s been cargo culted around as far as i can tell i was unable to find the decision that led to this value i m wondering if a bump to of memory would help reduce occurrences of this flake i m referencing the rbe jobs to point out that this happens even if we use rbe but they re not blocking
| 0
|
15,819
| 20,014,768,229
|
IssuesEvent
|
2022-02-01 10:53:59
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Updating android remote tools broke downstream projects
|
P1 type: support / not a bug (process) team-OSS
|
https://github.com/bazelbuild/bazel/commit/b6f87f10ba908821cb71f8147815917829f16ebf broke a number of downstream projects (Bazel Examples, rules_jvm_external, Android Testing).
Example failure:
https://buildkite.com/bazel/bazel-at-head-plus-downstream/builds/2010#2595dff6-44ac-4996-9b38-cdacf2842215
We're issuing a rollback now, but perhaps the correct solution is to update AAPT2 on the buildbots before rolling forward?
|
1.0
|
Updating android remote tools broke downstream projects - https://github.com/bazelbuild/bazel/commit/b6f87f10ba908821cb71f8147815917829f16ebf broke a number of downstream projects (Bazel Examples, rules_jvm_external, Android Testing).
Example failure:
https://buildkite.com/bazel/bazel-at-head-plus-downstream/builds/2010#2595dff6-44ac-4996-9b38-cdacf2842215
We're issuing a rollback now, but perhaps the correct solution is to update AAPT2 on the buildbots before rolling forward?
|
process
|
updating android remote tools broke downstream projects broke a number of downstream projects bazel examples rules jvm external android testing example failure we re issuing a rollback now but perhaps the correct solution is to update on the buildbots before rolling forward
| 1
|
235
| 2,663,133,977
|
IssuesEvent
|
2015-03-20 01:21:31
|
hammerlab/pileup.js
|
https://api.github.com/repos/hammerlab/pileup.js
|
closed
|
Missing @flow should be a lint error
|
process
|
Once we have #39, this would be a good check. Missing `@flow` comments give us a false sense of security.
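A minimal sketch of what the check could look like once #39 lands (the `src/` root and the "pragma near the top" policy are assumptions, not project decisions):
```python
# Hypothetical lint check: fail the build if any JS source lacks an @flow
# pragma near the top of the file.
import pathlib
import sys

offenders = [
    str(p) for p in pathlib.Path("src").rglob("*.js")
    if "@flow" not in p.read_text(encoding="utf-8")[:500]
]
for path in offenders:
    print(f"missing @flow: {path}")
sys.exit(1 if offenders else 0)
```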
|
1.0
|
Missing @flow should be a lint error - Once we have #39, this would be a good check. Missing `@flow` comments give us a false sense of security.
|
process
|
missing flow should be a lint error once we have this would be a good check missing flow comments give us a false sense of security
| 1
|
136,053
| 18,722,284,653
|
IssuesEvent
|
2021-11-03 13:08:33
|
KDWSS/dd-trace-java
|
https://api.github.com/repos/KDWSS/dd-trace-java
|
opened
|
CVE-2016-1181 (High) detected in struts-core-1.3.8.jar
|
security vulnerability
|
## CVE-2016-1181 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>struts-core-1.3.8.jar</b></p></summary>
<p>Apache Struts</p>
<p>Library home page: <a href="http://struts.apache.org">http://struts.apache.org</a></p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/appsec/weblog/weblog-spring-app/weblog-spring-app.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.struts/struts-core/1.3.8/66178d4a9279ebb1cd1eb79c10dc204b4199f061/struts-core-1.3.8.jar</p>
<p>
Dependency Hierarchy:
- velocity-tools-2.0.jar (Root Library)
- :x: **struts-core-1.3.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KDWSS/dd-trace-java/commit/2819174635979a19573ec0ce8e3e2b63a3848079">2819174635979a19573ec0ce8e3e2b63a3848079</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ActionServlet.java in Apache Struts 1 1.x through 1.3.10 mishandles multithreaded access to an ActionForm instance, which allows remote attackers to execute arbitrary code or cause a denial of service (unexpected memory access) via a multipart request, a related issue to CVE-2015-0899.
<p>Publish Date: 2016-07-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1181>CVE-2016-1181</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/kawasima/struts1-forever/commit/eda3a79907ed8fcb0387a0496d0cb14332f250e8">https://github.com/kawasima/struts1-forever/commit/eda3a79907ed8fcb0387a0496d0cb14332f250e8</a></p>
<p>Release Date: 2016-06-08</p>
<p>Fix Resolution: Replace or update the following file: ActionServlet.java</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.struts","packageName":"struts-core","packageVersion":"1.3.8","packageFilePaths":["/dd-java-agent/appsec/weblog/weblog-spring-app/weblog-spring-app.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.apache.velocity:velocity-tools:2.0;org.apache.struts:struts-core:1.3.8","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2016-1181","vulnerabilityDetails":"ActionServlet.java in Apache Struts 1 1.x through 1.3.10 mishandles multithreaded access to an ActionForm instance, which allows remote attackers to execute arbitrary code or cause a denial of service (unexpected memory access) via a multipart request, a related issue to CVE-2015-0899.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1181","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2016-1181 (High) detected in struts-core-1.3.8.jar - ## CVE-2016-1181 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>struts-core-1.3.8.jar</b></p></summary>
<p>Apache Struts</p>
<p>Library home page: <a href="http://struts.apache.org">http://struts.apache.org</a></p>
<p>Path to dependency file: dd-trace-java/dd-java-agent/appsec/weblog/weblog-spring-app/weblog-spring-app.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.struts/struts-core/1.3.8/66178d4a9279ebb1cd1eb79c10dc204b4199f061/struts-core-1.3.8.jar</p>
<p>
Dependency Hierarchy:
- velocity-tools-2.0.jar (Root Library)
- :x: **struts-core-1.3.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KDWSS/dd-trace-java/commit/2819174635979a19573ec0ce8e3e2b63a3848079">2819174635979a19573ec0ce8e3e2b63a3848079</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ActionServlet.java in Apache Struts 1 1.x through 1.3.10 mishandles multithreaded access to an ActionForm instance, which allows remote attackers to execute arbitrary code or cause a denial of service (unexpected memory access) via a multipart request, a related issue to CVE-2015-0899.
<p>Publish Date: 2016-07-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1181>CVE-2016-1181</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/kawasima/struts1-forever/commit/eda3a79907ed8fcb0387a0496d0cb14332f250e8">https://github.com/kawasima/struts1-forever/commit/eda3a79907ed8fcb0387a0496d0cb14332f250e8</a></p>
<p>Release Date: 2016-06-08</p>
<p>Fix Resolution: Replace or update the following file: ActionServlet.java</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.struts","packageName":"struts-core","packageVersion":"1.3.8","packageFilePaths":["/dd-java-agent/appsec/weblog/weblog-spring-app/weblog-spring-app.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.apache.velocity:velocity-tools:2.0;org.apache.struts:struts-core:1.3.8","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2016-1181","vulnerabilityDetails":"ActionServlet.java in Apache Struts 1 1.x through 1.3.10 mishandles multithreaded access to an ActionForm instance, which allows remote attackers to execute arbitrary code or cause a denial of service (unexpected memory access) via a multipart request, a related issue to CVE-2015-0899.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1181","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in struts core jar cve high severity vulnerability vulnerable library struts core jar apache struts library home page a href path to dependency file dd trace java dd java agent appsec weblog weblog spring app weblog spring app gradle path to vulnerable library home wss scanner gradle caches modules files org apache struts struts core struts core jar dependency hierarchy velocity tools jar root library x struts core jar vulnerable library found in head commit a href found in base branch master vulnerability details actionservlet java in apache struts x through mishandles multithreaded access to an actionform instance which allows remote attackers to execute arbitrary code or cause a denial of service unexpected memory access via a multipart request a related issue to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type change files origin a href release date fix resolution replace or update the following file actionservlet java isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org apache velocity velocity tools org apache struts struts core isminimumfixversionavailable false basebranches vulnerabilityidentifier cve vulnerabilitydetails actionservlet java in apache struts x through mishandles multithreaded access to an actionform instance which allows remote attackers to execute arbitrary code or cause a denial of service unexpected memory access via a multipart request a related issue to cve vulnerabilityurl
| 0
|
388,921
| 11,495,090,969
|
IssuesEvent
|
2020-02-12 03:37:05
|
Ilithor/SocMon-React-AzureCosmos
|
https://api.github.com/repos/Ilithor/SocMon-React-AzureCosmos
|
opened
|
[API] Add support for pagination to API
|
enhancement high priority
|
You should investigate how mongo handles pagination and add support for it on the API
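For reference, one common approach is cursor-style (range-based) pagination on `_id`; a minimal pymongo sketch, where the database/collection names and page size are assumptions:
```python
# Hypothetical pagination helper: page forward by _id instead of skip/limit,
# which degrades on large collections.
from pymongo import MongoClient

posts = MongoClient("mongodb://localhost:27017")["socmon"]["posts"]

def fetch_page(last_id=None, page_size=20):
    query = {"_id": {"$gt": last_id}} if last_id else {}
    docs = list(posts.find(query).sort("_id", 1).limit(page_size))
    next_cursor = docs[-1]["_id"] if docs else None  # pass back as last_id
    return docs, next_cursor
```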
|
1.0
|
[API] Add support for pagination to API - You should investigate how mongo handles pagination and add support for it on the API
|
non_process
|
add support for pagination to api you should investigate how mongo handles pagination and add support for it on the api
| 0
|
621,712
| 19,595,155,043
|
IssuesEvent
|
2022-01-05 16:59:38
|
trufflesuite/ganache
|
https://api.github.com/repos/trufflesuite/ganache
|
closed
|
web3.eth.personal.ecRecover isn't supported.
|
enhancement priority4 📋
|
## Expected Behavior
Proper response to the method call with Ganache as provider (either in WS or RPC)
## Current Behavior
When calling the method, say like this:
```javascript
...
let msg = "The signed message.";
let sig = sigstr; // previously retrieved signature from a web3.eth.personal.sign method call.
web3.eth.personal.ecRecover(msg, sig)
.catch((error) => {
console.error("Error verifying signature :");
console.error(error);
})
.then((recaddr) => {
if (recaddr == regaddr) {
console.log("****** Successful signature verification !! ******");
}
else {
console.log("WRONG SIGNATURE !");
}
})
```
An error is thrown, apparently saying that this method isn't supported:
```
Error verifying signature :
Error: Node error: {"message":"Method personal_ecRecover not supported.","code":-32000,"data":{"stack":"Error: Method personal_ecRecover not supported.\n at GethApiDouble.handleRequest (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/lib/subproviders/geth_api_double.js:67:16)\n at next (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:116:18)\n at GethDefaults.handleRequest (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/lib/subproviders/gethdefaults.js:15:12)\n at next (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:116:18)\n at SubscriptionSubprovider.FilterSubprovider.handleRequest (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/subproviders/filters.js:89:7)\n at SubscriptionSubprovider.handleRequest (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/subproviders/subscriptions.js:136:49)\n at next (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:116:18)\n at DelayedBlockFilter.handleRequest (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/lib/subproviders/delayedblockfilter.js:31:3)\n at next (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:116:18)\n at RequestFunnel.handleRequest (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/lib/subproviders/requestfunnel.js:32:12)\n at next (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:116:18)\n at Web3ProviderEngine._handleAsync (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:103:3)\n at Timeout._onTimeout (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:87:12)\n at ontimeout (timers.js:469:11)\n at tryOnTimeout (timers.js:304:5)\n at Timer.listOnTimeout (timers.js:264:5)","name":"Error"}}
```
## Used environment
* Ganache v2.0.1
* Web3js v1.0.0-beta.55
* Nodejs v10.15.3
* Ubuntu 16.04 LTS
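As a possible client-side workaround until the provider supports `personal_ecRecover`, the signer can be recovered locally; a sketch with eth-account (values are placeholders, and it runs in Python rather than the web3.js setup above):
```python
# Hypothetical local recovery, assuming the signature came from a
# personal_sign-style call over the same message text.
from eth_account import Account
from eth_account.messages import encode_defunct

msg = "The signed message."
sig = "0x..."  # placeholder: the signature returned earlier

recovered = Account.recover_message(encode_defunct(text=msg), signature=sig)
print(recovered)  # checksummed address of the signer
```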
|
1.0
|
web3.eth.personal.ecRecover isn't supported. - ## Expected Behavior
Proper response to the method call with Ganache as provider (either in WS or RPC)
## Current Behavior
When calling the method, say like this:
```javascript
...
let msg = "The signed message.";
let sig = sigstr; // previously retrieved signature from a web3.eth.personal.sign method call.
web3.eth.personal.ecRecover(msg, sig)
.catch((error) => {
console.error("Error verifying signature :");
console.error(error);
})
.then((recaddr) => {
if (recaddr == regaddr) {
console.log("****** Successful signature verification !! ******");
}
else {
console.log("WRONG SIGNATURE !");
}
})
```
An error is thrown, apparently saying that this method isn't supported:
```
Error verifying signature :
Error: Node error: {"message":"Method personal_ecRecover not supported.","code":-32000,"data":{"stack":"Error: Method personal_ecRecover not supported.\n at GethApiDouble.handleRequest (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/lib/subproviders/geth_api_double.js:67:16)\n at next (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:116:18)\n at GethDefaults.handleRequest (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/lib/subproviders/gethdefaults.js:15:12)\n at next (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:116:18)\n at SubscriptionSubprovider.FilterSubprovider.handleRequest (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/subproviders/filters.js:89:7)\n at SubscriptionSubprovider.handleRequest (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/subproviders/subscriptions.js:136:49)\n at next (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:116:18)\n at DelayedBlockFilter.handleRequest (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/lib/subproviders/delayedblockfilter.js:31:3)\n at next (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:116:18)\n at RequestFunnel.handleRequest (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/lib/subproviders/requestfunnel.js:32:12)\n at next (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:116:18)\n at Web3ProviderEngine._handleAsync (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:103:3)\n at Timeout._onTimeout (/tmp/.mount_Ganachu5K7Lu/resources/app.asar/node_modules/ganache-core/node_modules/web3-provider-engine/index.js:87:12)\n at ontimeout (timers.js:469:11)\n at tryOnTimeout (timers.js:304:5)\n at Timer.listOnTimeout (timers.js:264:5)","name":"Error"}}
```
## Used environment
* Ganache v2.0.1
* Web3js v1.0.0-beta.55
* Nodejs v10.15.3
* Ubuntu 16.04 LTS
|
non_process
|
eth personal ecrecover isn t supported expected behavior proper response to the method call with ganache as provider either in ws or rpc current behavior when calling the method say like this javascript let msg the signed message let sig sigstr previously retrieved signature from a eth personal sign method call eth personal ecrecover msg sig catch error console error error verifying signature console error error then recaddr if recaddr regaddr console log successful signature verification else console log wrong signature an error is thrown apparently saying that this method isn t supported error verifying signature error node error message method personal ecrecover not supported code data stack error method personal ecrecover not supported n at gethapidouble handlerequest tmp mount resources app asar node modules ganache core lib subproviders geth api double js n at next tmp mount resources app asar node modules ganache core node modules provider engine index js n at gethdefaults handlerequest tmp mount resources app asar node modules ganache core lib subproviders gethdefaults js n at next tmp mount resources app asar node modules ganache core node modules provider engine index js n at subscriptionsubprovider filtersubprovider handlerequest tmp mount resources app asar node modules ganache core node modules provider engine subproviders filters js n at subscriptionsubprovider handlerequest tmp mount resources app asar node modules ganache core node modules provider engine subproviders subscriptions js n at next tmp mount resources app asar node modules ganache core node modules provider engine index js n at delayedblockfilter handlerequest tmp mount resources app asar node modules ganache core lib subproviders delayedblockfilter js n at next tmp mount resources app asar node modules ganache core node modules provider engine index js n at requestfunnel handlerequest tmp mount resources app asar node modules ganache core lib subproviders requestfunnel js n at next tmp mount resources app asar node modules ganache core node modules provider engine index js n at handleasync tmp mount resources app asar node modules ganache core node modules provider engine index js n at timeout ontimeout tmp mount resources app asar node modules ganache core node modules provider engine index js n at ontimeout timers js n at tryontimeout timers js n at timer listontimeout timers js name error used environment ganache beta nodejs ubuntu lts
| 0
|
1,213
| 3,451,955,157
|
IssuesEvent
|
2015-12-17 00:23:41
|
BCDevExchange/Our-Project-Docs
|
https://api.github.com/repos/BCDevExchange/Our-Project-Docs
|
closed
|
API List page [7] - UI Integration
|
API Services
|
BK01 to provide lmullane with URLs to full API List as well as individual API consoles to update the BCDevX website links (including profile pages).
|
1.0
|
API List page [7] - UI Integration - BK01 to provide lmullane with URLs to full API List as well as individual API consoles to update the BCDevX website links (including profile pages).
|
non_process
|
api list page ui integration to provide lmullane with urls to full api list as well as individual api consoles to update the bcdevx website links including profile pages
| 0
|
19,386
| 25,523,644,081
|
IssuesEvent
|
2022-11-28 23:06:09
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
closed
|
Exception raised in `Process.on_entered` will not put process in `Excepted` state
|
type/bug priority/important topic/processes
|
This can be triggered as follows:
```
@calcfunction
def test_function(inp):
    return Dict(dict={'a': inp})

test_function(Dict(dict={'a': 1}))
```
Because the value stored in the output `Dict` of the calculation function is not serializable, the following exception will be triggered:
```
<ipython-input-8-4193a75c1f57> in <module>()
----> 1 test_function(Dict(dict={'a': 1}))
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/processes/functions.pyc in decorated_function(*args, **kwargs)
143 def decorated_function(*args, **kwargs):
144 """This wrapper function is the actual function that is called."""
--> 145 result, _ = run_get_node(*args, **kwargs)
146 return result
147
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/processes/functions.pyc in run_get_node(*args, **kwargs)
123
124 process = process_class(inputs=inputs, runner=runner)
--> 125 result = process.execute()
126
127 # Close the runner properly
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/processes/functions.pyc in execute(self)
311 def execute(self):
312 """Execute the process."""
--> 313 result = super(FunctionProcess, self).execute()
314
315 # FunctionProcesses can return a single value as output, and not a dictionary, so we should also return that
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in func_wrapper(self, *args, **kwargs)
86 if self._closed:
87 raise exceptions.ClosedError("Process is closed")
---> 88 return func(self, *args, **kwargs)
89
90 return func_wrapper
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in execute(self)
1059 """
1060 if not self.has_terminated():
-> 1061 self.loop().run_sync(self.step_until_terminated)
1062
1063 return self.future().result()
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/ioloop.pyc in run_sync(self, func, timeout)
456 if not future_cell[0].done():
457 raise TimeoutError('Operation timed out after %s seconds' % timeout)
--> 458 return future_cell[0].result()
459
460 def time(self):
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/concurrent.pyc in result(self, timeout)
236 if self._exc_info is not None:
237 try:
--> 238 raise_exc_info(self._exc_info)
239 finally:
240 self = None
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/gen.pyc in run(self)
1061 if exc_info is not None:
1062 try:
-> 1063 yielded = self.gen.throw(*exc_info)
1064 finally:
1065 # Break up a reference to itself
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in step_until_terminated(self)
1108 def step_until_terminated(self):
1109 while not self.has_terminated():
-> 1110 yield self.step()
1111
1112 # endregion
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/gen.pyc in run(self)
1053
1054 try:
-> 1055 value = future.result()
1056 except Exception:
1057 self.had_exception = True
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/concurrent.pyc in result(self, timeout)
236 if self._exc_info is not None:
237 try:
--> 238 raise_exc_info(self._exc_info)
239 finally:
240 self = None
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/gen.pyc in run(self)
1067 exc_info = None
1068 else:
-> 1069 yielded = self.gen.send(value)
1070
1071 if stack_context._state.contexts is not orig_stack_contexts:
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in step(self)
1099 else:
1100 # Everything nominal so transition to the next state
-> 1101 self.transition_to(next_state)
1102
1103 finally:
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in transition_to(self, new_state, *args, **kwargs)
324 raise
325 self._transition_failing = True
--> 326 self.transition_failed(initial_state_label, label, *sys.exc_info()[1:])
327 finally:
328 self._transition_failing = False
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in transition_failed(self, initial_state, final_state, exception, trace)
337 :type exception: :class:`Exception`
338 """
--> 339 six.reraise(type(exception), exception, trace)
340
341 def get_debug(self):
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in transition_to(self, new_state, *args, **kwargs)
308
309 try:
--> 310 self._enter_next_state(new_state)
311 except StateEntryFailed as exception:
312 new_state = exception.state
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in _enter_next_state(self, next_state)
372 next_state.do_enter()
373 self._state = next_state
--> 374 self._fire_state_event(StateEventHook.ENTERED_STATE, last_state)
375
376 def _create_state_instance(self, state, *args, **kwargs):
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in _fire_state_event(self, hook, state)
286 def _fire_state_event(self, hook, state):
287 for callback in self._event_callbacks.get(hook, []):
--> 288 callback(self, hook, state)
289
290 @super_check
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in <lambda>(_s, _h, from_state)
306 lambda _s, _h, state: self.on_entering(state))
307 self.add_state_event_callback(state_machine.StateEventHook.ENTERED_STATE,
--> 308 lambda _s, _h, from_state: self.on_entered(from_state))
309 self.add_state_event_callback(state_machine.StateEventHook.EXITING_STATE,
310 lambda _s, _h, _state: self.on_exiting())
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/processes/process.pyc in on_entered(self, from_state)
284 # pylint: disable=cyclic-import
285 from aiida.engine.utils import set_process_state_change_timestamp
--> 286 self.update_node_state(self._state)
287 self._save_checkpoint()
288 # Update the latest process state change timestamp
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/processes/process.pyc in update_node_state(self, state)
496
497 def update_node_state(self, state):
--> 498 self.update_outputs()
499 self.node.set_process_state(state.LABEL)
500
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/processes/process.pyc in update_outputs(self)
518 output.add_incoming(self.node, LinkType.RETURN, link_label)
519
--> 520 output.store()
521
522 def _setup_db_record(self):
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/orm/nodes/node.pyc in store(self, with_transaction, use_cache)
1024 self._store_from_cache(same_node, with_transaction=with_transaction)
1025 else:
-> 1026 self._store(with_transaction=with_transaction)
1027
1028 # Set up autogrouping used by verdi run
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/orm/nodes/node.pyc in _store(self, with_transaction)
1055 attributes = self._attrs_cache
1056 links = self._incoming_cache
-> 1057 self._backend_entity.store(attributes, links, with_transaction=with_transaction)
1058 except Exception:
1059 # I put back the files in the sandbox folder since the transaction did not succeed
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/orm/implementation/django/nodes.pyc in store(self, attributes, links, with_transaction)
391
392 if attributes:
--> 393 self.ATTRIBUTE_CLASS.reset_values_for_node(self.dbmodel, attributes, with_transaction=False)
394
395 if links:
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/backends/djsite/db/models.pyc in reset_values_for_node(cls, dbnode, attributes, with_transaction, return_not_store)
1106 nodes_to_store.extend(
1107 cls.create_value(k, v,
-> 1108 subspecifier_value=dbnode_node,
1109 ))
1110
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/backends/djsite/db/models.pyc in create_value(cls, key, value, subspecifier_value, other_attribs)
820 jsondata = json.dumps(value)
821 except TypeError:
--> 822 raise ValueError("Unable to store the value: it must be either a basic datatype, or json-serializable: {}".format(value))
823
824 new_entry.datatype = 'json'
ValueError: Unable to store the value: it must be either a basic datatype, or json-serializable: uuid: 060ab975-fb0f-4415-9660-ff88d6037f87 (pk: 3084)
```
This happens in `Process.on_entered` when the outputs of the calculation function are updated, which tries to store the `Dict` output, but the store fails on the database level. Since the exception occurs in one of the state change triggers, `on_entered`, the process is not properly exited and excepted. As a result the `process_state` on the node is still `Running`.
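A minimal sketch of the failing constraint, assuming nothing about aiida internals beyond the traceback: node attributes must survive `json.dumps`, and a node object leaking into the output dict does not:
```python
# Hypothetical illustration of the storage guard from the traceback above.
import json

def assert_storable(value):
    try:
        json.dumps(value)
    except TypeError:
        raise ValueError(
            "Unable to store the value: it must be either a basic datatype, "
            "or json-serializable: {}".format(value)
        )

assert_storable({"a": 1})         # fine: plain dict
assert_storable({"a": object()})  # raises, mirroring the error above
```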
|
1.0
|
Exception raised in `Process.on_entered` will not put process in `Excepted` state - This can be triggered as follows:
```
@calcfunction
def test_function(inp):
    return Dict(dict={'a': inp})

test_function(Dict(dict={'a': 1}))
```
Because the value stored in the output `Dict` of the calculation function is not serializable, the following exception will be triggered:
```
<ipython-input-8-4193a75c1f57> in <module>()
----> 1 test_function(Dict(dict={'a': 1}))
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/processes/functions.pyc in decorated_function(*args, **kwargs)
143 def decorated_function(*args, **kwargs):
144 """This wrapper function is the actual function that is called."""
--> 145 result, _ = run_get_node(*args, **kwargs)
146 return result
147
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/processes/functions.pyc in run_get_node(*args, **kwargs)
123
124 process = process_class(inputs=inputs, runner=runner)
--> 125 result = process.execute()
126
127 # Close the runner properly
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/processes/functions.pyc in execute(self)
311 def execute(self):
312 """Execute the process."""
--> 313 result = super(FunctionProcess, self).execute()
314
315 # FunctionProcesses can return a single value as output, and not a dictionary, so we should also return that
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in func_wrapper(self, *args, **kwargs)
86 if self._closed:
87 raise exceptions.ClosedError("Process is closed")
---> 88 return func(self, *args, **kwargs)
89
90 return func_wrapper
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in execute(self)
1059 """
1060 if not self.has_terminated():
-> 1061 self.loop().run_sync(self.step_until_terminated)
1062
1063 return self.future().result()
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/ioloop.pyc in run_sync(self, func, timeout)
456 if not future_cell[0].done():
457 raise TimeoutError('Operation timed out after %s seconds' % timeout)
--> 458 return future_cell[0].result()
459
460 def time(self):
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/concurrent.pyc in result(self, timeout)
236 if self._exc_info is not None:
237 try:
--> 238 raise_exc_info(self._exc_info)
239 finally:
240 self = None
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/gen.pyc in run(self)
1061 if exc_info is not None:
1062 try:
-> 1063 yielded = self.gen.throw(*exc_info)
1064 finally:
1065 # Break up a reference to itself
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in step_until_terminated(self)
1108 def step_until_terminated(self):
1109 while not self.has_terminated():
-> 1110 yield self.step()
1111
1112 # endregion
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/gen.pyc in run(self)
1053
1054 try:
-> 1055 value = future.result()
1056 except Exception:
1057 self.had_exception = True
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/concurrent.pyc in result(self, timeout)
236 if self._exc_info is not None:
237 try:
--> 238 raise_exc_info(self._exc_info)
239 finally:
240 self = None
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/tornado/gen.pyc in run(self)
1067 exc_info = None
1068 else:
-> 1069 yielded = self.gen.send(value)
1070
1071 if stack_context._state.contexts is not orig_stack_contexts:
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in step(self)
1099 else:
1100 # Everything nominal so transition to the next state
-> 1101 self.transition_to(next_state)
1102
1103 finally:
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in transition_to(self, new_state, *args, **kwargs)
324 raise
325 self._transition_failing = True
--> 326 self.transition_failed(initial_state_label, label, *sys.exc_info()[1:])
327 finally:
328 self._transition_failing = False
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in transition_failed(self, initial_state, final_state, exception, trace)
337 :type exception: :class:`Exception`
338 """
--> 339 six.reraise(type(exception), exception, trace)
340
341 def get_debug(self):
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in transition_to(self, new_state, *args, **kwargs)
308
309 try:
--> 310 self._enter_next_state(new_state)
311 except StateEntryFailed as exception:
312 new_state = exception.state
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in _enter_next_state(self, next_state)
372 next_state.do_enter()
373 self._state = next_state
--> 374 self._fire_state_event(StateEventHook.ENTERED_STATE, last_state)
375
376 def _create_state_instance(self, state, *args, **kwargs):
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/base/state_machine.pyc in _fire_state_event(self, hook, state)
286 def _fire_state_event(self, hook, state):
287 for callback in self._event_callbacks.get(hook, []):
--> 288 callback(self, hook, state)
289
290 @super_check
/home/sphuber/.virtualenvs/aiida_dev/local/lib/python2.7/site-packages/plumpy/processes.pyc in <lambda>(_s, _h, from_state)
306 lambda _s, _h, state: self.on_entering(state))
307 self.add_state_event_callback(state_machine.StateEventHook.ENTERED_STATE,
--> 308 lambda _s, _h, from_state: self.on_entered(from_state))
309 self.add_state_event_callback(state_machine.StateEventHook.EXITING_STATE,
310 lambda _s, _h, _state: self.on_exiting())
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/processes/process.pyc in on_entered(self, from_state)
284 # pylint: disable=cyclic-import
285 from aiida.engine.utils import set_process_state_change_timestamp
--> 286 self.update_node_state(self._state)
287 self._save_checkpoint()
288 # Update the latest process state change timestamp
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/processes/process.pyc in update_node_state(self, state)
496
497 def update_node_state(self, state):
--> 498 self.update_outputs()
499 self.node.set_process_state(state.LABEL)
500
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/engine/processes/process.pyc in update_outputs(self)
518 output.add_incoming(self.node, LinkType.RETURN, link_label)
519
--> 520 output.store()
521
522 def _setup_db_record(self):
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/orm/nodes/node.pyc in store(self, with_transaction, use_cache)
1024 self._store_from_cache(same_node, with_transaction=with_transaction)
1025 else:
-> 1026 self._store(with_transaction=with_transaction)
1027
1028 # Set up autogrouping used by verdi run
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/orm/nodes/node.pyc in _store(self, with_transaction)
1055 attributes = self._attrs_cache
1056 links = self._incoming_cache
-> 1057 self._backend_entity.store(attributes, links, with_transaction=with_transaction)
1058 except Exception:
1059 # I put back the files in the sandbox folder since the transaction did not succeed
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/orm/implementation/django/nodes.pyc in store(self, attributes, links, with_transaction)
391
392 if attributes:
--> 393 self.ATTRIBUTE_CLASS.reset_values_for_node(self.dbmodel, attributes, with_transaction=False)
394
395 if links:
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/backends/djsite/db/models.pyc in reset_values_for_node(cls, dbnode, attributes, with_transaction, return_not_store)
1106 nodes_to_store.extend(
1107 cls.create_value(k, v,
-> 1108 subspecifier_value=dbnode_node,
1109 ))
1110
/home/sphuber/code/aiida/env/dev/aiida-core/aiida/backends/djsite/db/models.pyc in create_value(cls, key, value, subspecifier_value, other_attribs)
820 jsondata = json.dumps(value)
821 except TypeError:
--> 822 raise ValueError("Unable to store the value: it must be either a basic datatype, or json-serializable: {}".format(value))
823
824 new_entry.datatype = 'json'
ValueError: Unable to store the value: it must be either a basic datatype, or json-serializable: uuid: 060ab975-fb0f-4415-9660-ff88d6037f87 (pk: 3084)
```
This happens in `Process.on_entered` when the outputs of the calculation function are updated, which tries to store the `Dict` output, but the store fails on the database level. Since the exception occurs in one of the state change triggers, `on_entered`, the process is not properly exited and excepted. As a result the `process_state` on the node is still `Running`.
|
process
|
exception raised in process on entered will not put process in excepted state this can be triggered as follows calcfunction def test function inp return dict dict a inp test function dict dict a because the value stored in the output dict of the calculation function is not serializable the following exception will be triggered in test function dict dict a home sphuber code aiida env dev aiida core aiida engine processes functions pyc in decorated function args kwargs def decorated function args kwargs this wrapper function is the actual function that is called result run get node args kwargs return result home sphuber code aiida env dev aiida core aiida engine processes functions pyc in run get node args kwargs process process class inputs inputs runner runner result process execute close the runner properly home sphuber code aiida env dev aiida core aiida engine processes functions pyc in execute self def execute self execute the process result super functionprocess self execute functionprocesses can return a single value as output and not a dictionary so we should also return that home sphuber virtualenvs aiida dev local lib site packages plumpy processes pyc in func wrapper self args kwargs if self closed raise exceptions closederror process is closed return func self args kwargs return func wrapper home sphuber virtualenvs aiida dev local lib site packages plumpy processes pyc in execute self if not self has terminated self loop run sync self step until terminated return self future result home sphuber virtualenvs aiida dev local lib site packages tornado ioloop pyc in run sync self func timeout if not future cell done raise timeouterror operation timed out after s seconds timeout return future cell result def time self home sphuber virtualenvs aiida dev local lib site packages tornado concurrent pyc in result self timeout if self exc info is not none try raise exc info self exc info finally self none home sphuber virtualenvs aiida dev local lib site packages tornado gen pyc in run self if exc info is not none try yielded self gen throw exc info finally break up a reference to itself home sphuber virtualenvs aiida dev local lib site packages plumpy processes pyc in step until terminated self def step until terminated self while not self has terminated yield self step endregion home sphuber virtualenvs aiida dev local lib site packages tornado gen pyc in run self try value future result except exception self had exception true home sphuber virtualenvs aiida dev local lib site packages tornado concurrent pyc in result self timeout if self exc info is not none try raise exc info self exc info finally self none home sphuber virtualenvs aiida dev local lib site packages tornado gen pyc in run self exc info none else yielded self gen send value if stack context state contexts is not orig stack contexts home sphuber virtualenvs aiida dev local lib site packages plumpy processes pyc in step self else everything nominal so transition to the next state self transition to next state finally home sphuber virtualenvs aiida dev local lib site packages plumpy base state machine pyc in transition to self new state args kwargs raise self transition failing true self transition failed initial state label label sys exc info finally self transition failing false home sphuber virtualenvs aiida dev local lib site packages plumpy base state machine pyc in transition failed self initial state final state exception trace type exception class exception six reraise type exception exception trace def get debug 
self home sphuber virtualenvs aiida dev local lib site packages plumpy base state machine pyc in transition to self new state args kwargs try self enter next state new state except stateentryfailed as exception new state exception state home sphuber virtualenvs aiida dev local lib site packages plumpy base state machine pyc in enter next state self next state next state do enter self state next state self fire state event stateeventhook entered state last state def create state instance self state args kwargs home sphuber virtualenvs aiida dev local lib site packages plumpy base state machine pyc in fire state event self hook state def fire state event self hook state for callback in self event callbacks get hook callback self hook state super check home sphuber virtualenvs aiida dev local lib site packages plumpy processes pyc in s h from state lambda s h state self on entering state self add state event callback state machine stateeventhook entered state lambda s h from state self on entered from state self add state event callback state machine stateeventhook exiting state lambda s h state self on exiting home sphuber code aiida env dev aiida core aiida engine processes process pyc in on entered self from state pylint disable cyclic import from aiida engine utils import set process state change timestamp self update node state self state self save checkpoint update the latest process state change timestamp home sphuber code aiida env dev aiida core aiida engine processes process pyc in update node state self state def update node state self state self update outputs self node set process state state label home sphuber code aiida env dev aiida core aiida engine processes process pyc in update outputs self output add incoming self node linktype return link label output store def setup db record self home sphuber code aiida env dev aiida core aiida orm nodes node pyc in store self with transaction use cache self store from cache same node with transaction with transaction else self store with transaction with transaction set up autogrouping used by verdi run home sphuber code aiida env dev aiida core aiida orm nodes node pyc in store self with transaction attributes self attrs cache links self incoming cache self backend entity store attributes links with transaction with transaction except exception i put back the files in the sandbox folder since the transaction did not succeed home sphuber code aiida env dev aiida core aiida orm implementation django nodes pyc in store self attributes links with transaction if attributes self attribute class reset values for node self dbmodel attributes with transaction false if links home sphuber code aiida env dev aiida core aiida backends djsite db models pyc in reset values for node cls dbnode attributes with transaction return not store nodes to store extend cls create value k v subspecifier value dbnode node home sphuber code aiida env dev aiida core aiida backends djsite db models pyc in create value cls key value subspecifier value other attribs jsondata json dumps value except typeerror raise valueerror unable to store the value it must be either a basic datatype or json serializable format value new entry datatype json valueerror unable to store the value it must be either a basic datatype or json serializable uuid pk this happens in process on entered when the outputs of the calculation function is updated which tries to store the dict output but it fails on the database level since the exception in one of the state change triggers on entered 
the process is not properly exited and excepted as a result the process state on the node is still running
| 1
|
7,905
| 11,089,780,043
|
IssuesEvent
|
2019-12-14 21:00:57
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
[DOTX010E] Error in conref processing
|
bug preprocess preprocess/conref stale
|
When I build the attached DITA instance, I encountered [DOTX010E] error in conref processing.
[20170612-conkeyref-diff.zip](https://github.com/dita-ot/dita-ot/files/1066951/20170612-conkeyref-diff.zip)
- I believe this to be a bug, not a question about using DITA-OT.
- I read the [CONTRIBUTING][] file.
## Expected Behavior
The error message is shown as follows:
```
[conref] file:/D:/SVN/pdf5/diffs/20170612-conkeyref-diff/ja-JP/topic/tStep2CheckThePrinterStatus.dita:4:65: [DOTX010E][ERROR]: Unable to find target for conref="../../collection-topic/tWhenConnectingPrintersStep02Template.dita#title_step02".
[conref] java.io.FileNotFoundException: D:\SVN\pdf5\diffs\20170612-conkeyref-diff\ja-JP\temp\html5\ja-JP\topic\..\..\collection-topic\tWhenConnectingPrintersStep02Template.dita (指定されたパスが見つかりません。)
```
I believe that the correct conref path is `../collection-topic/tWhenConnectingPrintersStep02Template.dita#title_step02`
Please refer to the attached temp folder structure.
Conref processing should find the target normally.
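For clarity, here is a minimal sketch of the path arithmetic involved (plain Python, purely illustrative — this is not DITA-OT code):

```python
import posixpath

# Referencing topic inside the temp directory, as in the error above.
base = "ja-JP/topic/tStep2CheckThePrinterStatus.dita"

def resolve(base_topic, conref):
    # Resolve a conref target relative to the referencing topic's directory.
    return posixpath.normpath(posixpath.join(posixpath.dirname(base_topic), conref))

# One level up lands in the sibling collection-topic directory (correct):
print(resolve(base, "../collection-topic/tWhenConnectingPrintersStep02Template.dita"))
# -> ja-JP/collection-topic/tWhenConnectingPrintersStep02Template.dita

# Two levels up escapes the ja-JP tree entirely, which is why the file is not found:
print(resolve(base, "../../collection-topic/tWhenConnectingPrintersStep02Template.dita"))
# -> collection-topic/tWhenConnectingPrintersStep02Template.dita
```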
## Actual Behavior
The conref'ed portion of the document has vanished from the output.
## Steps to Reproduce
1. Open the attached 20170612-conkeyref-diff.zip.
2. Run HTML5 transformation scenario. (It will be faster using oXygen project scenario "DITA Map HTML5 - DITA-OT 2.4.6" changing DITA-OT directory.)
3. The phenomenon should occur.
## Copy of the error message, log file or stack trace
error-20170612.log is attached in the ZIP file.
## Environment
* DITA-OT version:
2.5
* Operating system and version _(Linux, macOS, Windows)_:
Windows 10
* How did you run DITA-OT?
oXygen, other editor, CMS, etc.
* Transformation type _(HTML5, PDF, custom, etc.)_:
HTML5
[CONTRIBUTING]: https://github.com/dita-ot/dita-ot/blob/develop/.github/CONTRIBUTING.md
|
2.0
|
[DOTX010E] Error in conref processing - When I build the attached DITA instance, I encountered [DOTX010E] error in conref processing.
[20170612-conkeyref-diff.zip](https://github.com/dita-ot/dita-ot/files/1066951/20170612-conkeyref-diff.zip)
- I believe this to be a bug, not a question about using DITA-OT.
- I read the [CONTRIBUTING][] file.
## Expected Behavior
The error message is shown as follows:
```
[conref] file:/D:/SVN/pdf5/diffs/20170612-conkeyref-diff/ja-JP/topic/tStep2CheckThePrinterStatus.dita:4:65: [DOTX010E][ERROR]: Unable to find target for conref="../../collection-topic/tWhenConnectingPrintersStep02Template.dita#title_step02".
[conref] java.io.FileNotFoundException: D:\SVN\pdf5\diffs\20170612-conkeyref-diff\ja-JP\temp\html5\ja-JP\topic\..\..\collection-topic\tWhenConnectingPrintersStep02Template.dita (指定されたパスが見つかりません。)
```
I believe that the correct conref path is `../collection-topic/tWhenConnectingPrintersStep02Template.dita#title_step02`
Please refer to the attached temp folder structure.
Conref processing should find the target normally.
## Actual Behavior
The conref'ed portion of the document has vanished from the output.
## Steps to Reproduce
1. Open the attached 20170612-conkeyref-diff.zip.
2. Run HTML5 transformation scenario. (It will be faster using oXygen project scenario "DITA Map HTML5 - DITA-OT 2.4.6" changing DITA-OT directory.)
3. The phenomenon should occur.
## Copy of the error message, log file or stack trace
error-20170612.log is attached in the ZIP file.
## Environment
* DITA-OT version:
2.5
* Operating system and version _(Linux, macOS, Windows)_:
Windows 10
* How did you run DITA-OT?
oXygen, other editor, CMS, etc.
* Transformation type _(HTML5, PDF, custom, etc.)_:
HTML5
[CONTRIBUTING]: https://github.com/dita-ot/dita-ot/blob/develop/.github/CONTRIBUTING.md
|
process
|
error in conref processing when i build the attached dita instance i encountered error in conref processing i believe this to be a bug not a question about using dita ot i read the file expected behavior the error message is shown as follows file d svn diffs conkeyref diff ja jp topic dita unable to find target for conref collection topic dita title java io filenotfoundexception d svn diffs conkeyref diff ja jp temp ja jp topic collection topic dita 指定されたパスが見つかりません。 i believe that the conref correct path is collection topic dita title please refer to the attached temp folder structure conref processing should find the target normally actual behavior the conref ed portion of the document has been vanished from the output steps to reproduce open the attached conkeyref diff zip run transformation scenario it will be faster using oxygen project scenario dita map dita ot changing dita ot directory the phenomenon should occur copy of the error message log file or stack trace error log is attached in the zip file environment dita ot version operating system and version linux macos windows windows how did you run dita ot oxygen other editor cms etc transformation type pdf custom etc
| 1
|
351,573
| 10,520,712,313
|
IssuesEvent
|
2019-09-30 02:37:56
|
AY1920S1-CS2113T-W17-2/main
|
https://api.github.com/repos/AY1920S1-CS2113T-W17-2/main
|
closed
|
As a user, I can list expenses so that I can see all types of expenses and the amount associated to each of them.
|
priority.High type.Story
|
Shows a detailed expense list.
|
1.0
|
As a user, I can list expenses so that I can see all types of expenses and the amount associated to each of them. - Shows a detailed expense list.
|
non_process
|
as a user i can list expenses so that i can see all types of expenses and the amount associated to each of them shows a detailed expense list
| 0
|
13,855
| 16,615,148,898
|
IssuesEvent
|
2021-06-02 15:47:16
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Error in endpoints example code using a service connection on container jobs
|
Pri1 devops-cicd-process/tech devops/prod doc-enhancement
|
In this section of the [Container Jobs doc](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops#endpoints), the example code for the `image` setting has an extra segment. The example is this: `image: myprivate/registry:ubuntu1604` but should be this: `image: registry:ubuntu1604`. The `myprivate/` part of the image does not correspond to anything and causes an exception when running. Below is the complete example:
The page currently shows this:
```YAML
container:
image: myprivate/registry:ubuntu1604
endpoint: private_dockerhub_connection
```
The corrected example should be this:
```YAML
container:
image: registry:ubuntu1604
endpoint: private_dockerhub_connection
```
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3339a2e0-be29-1363-f588-b231d4472c02
* Version Independent ID: 72dd11a3-704d-d0fd-6dfa-cf49f3352de3
* Content: [Container Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops)
* Content Source: [docs/pipelines/process/container-phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/container-phases.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Error in endpoints example code using a service connection on container jobs -
In this section of the [Container Jobs doc](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops#endpoints), the example code for the `image` setting has an extra segment. The example is this: `image: myprivate/registry:ubuntu1604` but should be this: `image: registry:ubuntu1604`. The `myprivate/` part of the image does not correspond to anything and causes an exception when running. Below is the complete example:
The page currently shows this:
```YAML
container:
image: myprivate/registry:ubuntu1604
endpoint: private_dockerhub_connection
```
The corrected example should be this:
```YAML
container:
image: registry:ubuntu1604
endpoint: private_dockerhub_connection
```
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3339a2e0-be29-1363-f588-b231d4472c02
* Version Independent ID: 72dd11a3-704d-d0fd-6dfa-cf49f3352de3
* Content: [Container Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops)
* Content Source: [docs/pipelines/process/container-phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/container-phases.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
error in endpoints example code using a service connection on container jobs in this section of the the example code for the image setting has an extra segment the example is this image myprivate registry but should be this image registry the myprivate part of the image does not correspond to anything and causes an exception when running below is the complete example the page currently shows this yaml container image myprivate registry endpoint private dockerhub connection the corrected example should be this yaml container image registry endpoint private dockerhub connection document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
522,902
| 15,169,265,689
|
IssuesEvent
|
2021-02-12 20:50:14
|
kubernetes-sigs/cluster-api-provider-aws
|
https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-aws
|
closed
|
Move to a CNCF-owned AWS account for project-owned AMIs
|
area/release kind/cleanup lifecycle/rotten priority/important-longterm
|
/kind feature
**Describe the solution you'd like**
We are currently publishing CAPA AMIs to a VMware-owned AWS account. This account is the hard-coded default account to use when looking up AMIs (this is overridable). It would be nice if we could transition the AMIs to a CNCF-owned AWS account.
**Anything else you would like to add:**
Bonus points if we can automate, via Prow and image-builder, publishing new AMIs when new Kubernetes versions are released.
cc @detiber @randomvariable
|
1.0
|
Move to a CNCF-owned AWS account for project-owned AMIs - /kind feature
**Describe the solution you'd like**
We are currently publishing CAPA AMIs to a VMware-owned AWS account. This account is the hard-coded default account to use when looking up AMIs (this is overridable). It would be nice if we could transition the AMIs to a CNCF-owned AWS account.
**Anything else you would like to add:**
Bonus points if we can automate, via Prow and image-builder, publishing new AMIs when new Kubernetes versions are released.
cc @detiber @randomvariable
|
non_process
|
move to a cncf owned aws account for project owned amis kind feature describe the solution you d like we are currently publishing capa amis to a vmware owned aws account this account is the hard coded default account to use when looking up amis this is overridable it would be nice if we could transition the amis to a cncf owned aws account anything else you would like to add bonus points if we can automate via prow and image builder publishing new amis when new kubernetes versions are released cc detiber randomvariable
| 0
|
6,423
| 9,530,755,995
|
IssuesEvent
|
2019-04-29 14:33:28
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Piping w/ spawn is broken in Node 11 / 12
|
child_process confirmed-bug
|
* **Version**: 11.13
* **Platform**: OSX
* **Subsystem**: child_process
Ref #18016, ping @elibarzilay and @gireeshpunathil
It seems that Node 11's behavior changed compared to Node 10, and pipes are now automatically closed after being used as output stream from a process. I think this is a bug, because it makes it impossible to use the same pipe as output from two different processes (which would be the case if I was to implement `(foo; bar) | cat` - both `foo` and `bar` would write into the `cat` process).
Additionally, it causes previously working code to "randomly" throw internal exceptions. The random part is likely caused by a race condition, since `perl` throws consistently while `rev` doesn't cause problems. The exception is as such:
```js
const {spawn} = require(`child_process`);
const p2 = spawn(`perl`, [`-ne`, `print uc`], {
stdio: [`pipe`, process.stdout, process.stderr],
});
const p1 = spawn(`node`, [`-p`, `"hello world"`], {
stdio: [process.stdin, p2.stdin, process.stderr],
});
p1.on(`exit`, code => {
p2.stdin.end();
});
```
```
❯ [mael-mbp?] /Users/mael ❯ node test
HELLO WORLD
internal/validators.js:130
throw new ERR_INVALID_ARG_TYPE(name, 'number', value);
^
TypeError [ERR_INVALID_ARG_TYPE]: The "err" argument must be of type number. Received type undefined
at validateNumber (internal/validators.js:130:11)
at Object.getSystemErrorName (util.js:231:3)
at errnoException (internal/errors.js:383:21)
at Socket._final (net.js:371:25)
at callFinal (_stream_writable.js:617:10)
at processTicksAndRejections (internal/process/task_queues.js:81:17)
```
As you can see the code executed fine (`HELLO WORLD` got printed), but during cleanup an internal assertion failed and Node crashed.
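For reference, the shared-pipe pattern itself is unremarkable; a minimal Python sketch of the same `(foo; bar) | cat` shape, with placeholder commands, looks like this:

```python
import subprocess

# Consumer process, analogous to `cat` in the shell pipeline above.
consumer = subprocess.Popen(["cat"], stdin=subprocess.PIPE)

# Two producers write into the same pipe, one after the other.
for args in (["echo", "hello"], ["echo", "world"]):
    subprocess.run(args, stdout=consumer.stdin, check=True)

# The shared pipe is only closed once both producers are done.
consumer.stdin.close()
consumer.wait()
```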
|
1.0
|
Piping w/ spawn is broken in Node 11 / 12 - * **Version**: 11.13
* **Platform**: OSX
* **Subsystem**: child_process
Ref #18016, ping @elibarzilay and @gireeshpunathil
It seems that Node 11's behavior changed compared to Node 10, and pipes are now automatically closed after being used as output stream from a process. I think this is a bug, because it makes it impossible to use the same pipe as output from two different processes (which would be the case if I was to implement `(foo; bar) | cat` - both `foo` and `bar` would write into the `cat` process).
Additionally, it causes previously working code to "randomly" throw internal exceptions. The random part is likely caused by a race condition, since `perl` throws consistently while `rev` doesn't cause problems. The exception is as such:
```js
const {spawn} = require(`child_process`);
const p2 = spawn(`perl`, [`-ne`, `print uc`], {
stdio: [`pipe`, process.stdout, process.stderr],
});
const p1 = spawn(`node`, [`-p`, `"hello world"`], {
stdio: [process.stdin, p2.stdin, process.stderr],
});
p1.on(`exit`, code => {
p2.stdin.end();
});
```
```
❯ [mael-mbp?] /Users/mael ❯ node test
HELLO WORLD
internal/validators.js:130
throw new ERR_INVALID_ARG_TYPE(name, 'number', value);
^
TypeError [ERR_INVALID_ARG_TYPE]: The "err" argument must be of type number. Received type undefined
at validateNumber (internal/validators.js:130:11)
at Object.getSystemErrorName (util.js:231:3)
at errnoException (internal/errors.js:383:21)
at Socket._final (net.js:371:25)
at callFinal (_stream_writable.js:617:10)
at processTicksAndRejections (internal/process/task_queues.js:81:17)
```
As you can see the code executed fine (`HELLO WORLD` got printed), but during cleanup an internal assertion failed and Node crashed.
|
process
|
piping w spawn is broken in node version platform osx subsystem child process ref ping elibarzilay and gireeshpunathil it seems that node s behavior changed compared to node and pipes are now automatically closed after being used as output stream from a process i think this is a bug because it makes it impossible to use the same pipe as output from two different processes which would be the case if i was to implement foo bar cat both foo and bar would write into the cat process additionally it causes previously working code to randomly throw internal exceptions the random part is likely caused by a race condition since perl throws consistently while rev doesn t cause problems the exception is as such js const spawn require child process const spawn perl stdio const spawn node stdio on exit code stdin end ❯ users mael ❯ node test hello world internal validators js throw new err invalid arg type name number value typeerror the err argument must be of type number received type undefined at validatenumber internal validators js at object getsystemerrorname util js at errnoexception internal errors js at socket final net js at callfinal stream writable js at processticksandrejections internal process task queues js as you can see the code executed fine hello world got printed but during cleanup an internal assertion failed and node crashed
| 1
|
821,982
| 30,845,999,108
|
IssuesEvent
|
2023-08-02 13:46:17
|
dataesr/works-finder
|
https://api.github.com/repos/dataesr/works-finder
|
closed
|
Normalize OpenAlex types of works
|
priority:2
|
Map everything to the barometre type
(use an aggregate on the bso-publications + bso-datacite indices to get all possible values)
e.g. article from OpenAlex ==> journal-article
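A minimal Python sketch of the mapping step (only the article pair comes from this issue; the other entries and the fallback are assumptions):

```python
# Hypothetical OpenAlex -> barometre type mapping; the real values would come
# from the aggregation over the bso-publications and bso-datacite indices.
OPENALEX_TO_BAROMETRE = {
    "article": "journal-article",
    "book-chapter": "book-chapter",
    "dataset": "dataset",
}

def normalize_type(openalex_type: str) -> str:
    # Fall back to "other" for values the aggregation has not covered yet.
    return OPENALEX_TO_BAROMETRE.get(openalex_type, "other")

assert normalize_type("article") == "journal-article"
```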
|
1.0
|
Normalize OpenAlex types of works - Map everything to the barometre type
(use an aggregate on the bso-publications + bso-datacite indices to get all possible values)
e.g. article from OpenAlex ==> journal-article
|
non_process
|
normalize openalex types of works map everything to barometre type use an aggregate on bso publications bso datacite indices to gel all possibles values e g article from openalex journal article
| 0
|
7,200
| 10,337,020,638
|
IssuesEvent
|
2019-09-03 14:06:22
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Using onlytopic.in.map parameter breaks link
|
bug preprocess
|
## Expected Behavior
I have a map that only references one topic:
`<topicref href="justthis.dita"/>`
That one topic has an up-and-over link:
`<xref href="../peer/upandover.dita"/>`
When I build HTML5 with no other options, this works. I would expect the same result when `--onlytopic.in.map=true` is used.
## Actual Behavior
When I build HTML5 with `--onlytopic.in.map=true`, the up-and-over link is changed in the `debug-filter` step -- the original `../peer/upandover.dita` gains an extra level up, and the reference in the temp dir becomes `../../peer/upandover.dita`. This also results in several new errors:
```
C:\dcs\test\jira119\content>call \dita-ot\dita-ot-3.3.2\bin\dita -i jira119.ditamap -f html5 -o ditaot332 -temp temp -Dclean.temp=no -Donlytopic.in.map=true
[filter] Failed to create output directory C:\dcs\test\jira119\content\temp\C:\dcs\test\jira119\peer
[topicpull] I/O error reported by XML parser processing file:/C:/dcs/test/jira119/peer/upandover.dita: C:\dcs\test\jira119\peer\topic.dtd (The system cannot find the file specified.)
[topicpull] file:/C:/dcs/test/jira119/content/justthis.dita:6:71: [DOTX056W][WARN]: The file '../../peer/upandover.dita' is not available to resolve link information.
C:\dcs\test\jira119\content>
```
## Possible Solution
Not sure, but uplevels is frequently a problem and I think this might be related to some of the big difficulty I've had with finalizing https://github.com/dita-ot/dita-ot/pull/3316 -- the code for figuring out a resolved relative path, when the linked file doesn't exist in the temp directory, seems to have a lot of holes.
## Steps to Reproduce
Attaching sample files here; building `content/jira119.ditamap` to HTML5 with `onlytopic.in.map=true` will show the problem.
[jira119.zip](https://github.com/dita-ot/dita-ot/files/3340925/jira119.zip)
## Copy of the error message, log file or stack trace
Shown above
## Environment
<!-- Include relevant details about the environment you experienced this in. -->
* DITA-OT version: 3.3.2
* Operating system and version: Windows, also reproduced broken links on Linux
* How did you run DITA-OT? `dita` command
* Transformation type: HTML5
|
1.0
|
Using onlytopic.in.map parameter breaks link - ## Expected Behavior
I have a map that only references one topic:
`<topicref href="justthis.dita"/>`
That one topic has an up-and-over link:
`<xref href="../peer/upandover.dita"/>`
When I build HTML5 with no other options, this works. I would expect the same result when `--onlytopic.in.map=true` is used.
## Actual Behavior
When I build HTML5 with `--onlytopic.in.map=true`, the up-and-over link is changed in the `debug-filter` step -- the original `../peer/upandover.dita` gains an extra level up, and the reference in the temp dir becomes `../../peer/upandover.dita`. This also results in several new errors:
```
C:\dcs\test\jira119\content>call \dita-ot\dita-ot-3.3.2\bin\dita -i jira119.ditamap -f html5 -o ditaot332 -temp temp -Dclean.temp=no -Donlytopic.in.map=true
[filter] Failed to create output directory C:\dcs\test\jira119\content\temp\C:\dcs\test\jira119\peer
[topicpull] I/O error reported by XML parser processing file:/C:/dcs/test/jira119/peer/upandover.dita: C:\dcs\test\jira119\peer\topic.dtd (The system cannot find the file specified.)
[topicpull] file:/C:/dcs/test/jira119/content/justthis.dita:6:71: [DOTX056W][WARN]: The file '../../peer/upandover.dita' is not available to resolve link information.
C:\dcs\test\jira119\content>
```
## Possible Solution
Not sure, but uplevels is frequently a problem and I think this might be related to some of the big difficulty I've had with finalizing https://github.com/dita-ot/dita-ot/pull/3316 -- the code for figuring out a resolved relative path, when the linked file doesn't exist in the temp directory, seems to have a lot of holes.
## Steps to Reproduce
Attaching sample files here; building `content/jira119.ditamap` to HTML5 with `onlytopic.in.map=true` will show the problem.
[jira119.zip](https://github.com/dita-ot/dita-ot/files/3340925/jira119.zip)
## Copy of the error message, log file or stack trace
Shown above
## Environment
<!-- Include relevant details about the environment you experienced this in. -->
* DITA-OT version: 3.3.2
* Operating system and version: Windows, also reproduced broken links on Linux
* How did you run DITA-OT? `dita` command
* Transformation type: HTML5
|
process
|
using onlytopic in map parameter breaks link expected behavior i have a map that only references one topic that one topic has an up and over link when i build with no other options this works i would expect the same result when onlytopic in map true is used actual behavior when i build with onlytopic in map true the up and over link is changed in the debug filter step from the original peer upandover dita adds an extra level up and the reference in the temp dir becomes peer upandover dita this also results in several new errors c dcs test content call dita ot dita ot bin dita i ditamap f o temp temp dclean temp no donlytopic in map true failed to create output directory c dcs test content temp c dcs test peer i o error reported by xml parser processing file c dcs test peer upandover dita c dcs test peer topic dtd the system cannot find the file specified file c dcs test content justthis dita the file peer upandover dita is not available to resolve link information c dcs test content possible solution not sure but uplevels is frequently a problem and i think this might be related to some of the big difficulty i ve had with finalizing the code for figuring out a resolved relative path when the linked file doesn t exist in the temp directory seems to have a lot of holes steps to reproduce attaching sample files here building content ditamap to with onlytopic in map true will show the problem copy of the error message log file or stack trace shown above environment dita ot version operating system and version windows also reproduced broken links on linux how did you run dita ot dita command transformation type
| 1
|
13,370
| 15,833,849,602
|
IssuesEvent
|
2021-04-06 16:05:09
|
ZbayApp/zbay
|
https://api.github.com/repos/ZbayApp/zbay
|
closed
|
Automate building of Zecwallet on Windows (not cross compile)
|
dev process done
|
Sometimes zbay will fail on a Windows machine because the cross-compiled zecwallet does not work, so we need to compile zecwallet natively on Windows.
This means we need to do it by hand right now, for every release, because we need to update checkpoints.
We should automate this using GitHub Actions.
|
1.0
|
Automate building of Zecwallet on Windows (not cross compile) - Sometimes zbay will fail on a Windows machine because the cross-compiled zecwallet does not work, so we need to compile zecwallet natively on Windows.
This means we need to do it by hand right now, for every release, because we need to update checkpoints.
We should automate this using GitHub Actions.
|
process
|
automate building of zecwallet on windows not cross compile sometimes zbay will fail on a windows machine due to cross compile of zecwallet not working we need to compile zecwallet on windows this means we need to do it by hand right now for every release because we need to update checkpoints we should automate using github actions
| 1
|
13,809
| 3,776,799,149
|
IssuesEvent
|
2016-03-17 17:48:41
|
Sylius/Sylius
|
https://api.github.com/repos/Sylius/Sylius
|
closed
|
Translations on Crowdin
|
Documentation
|
**Last export : 02/21/14**
Hi guys !
All Github translations are now on Crowdin with the yaml format. I'm sure we lost a few Crowdin translations during the migration, but nothing really important IMO.
Documentation will be updated soon to explain how to translate Sylius. Short story here :
* **translators** : please contribute on Crowdin here https://crowdin.net/project/sylius.
* **developers** : only add or edit English translations keys in your PR.
**From now on, no PR about translations (except English files of course) will be merged.**
The **synchronization** between Github and Crowdin is done manually at the moment :
* if you need the latest Crowdin translations, please ping us here, we'll update them on Github as soon as possible
* if you have added new English keys (or updated existing keys) in your translations, please remind it to us in your PR. We'll upload your modifications on Crowdin as soon as possible.
|
1.0
|
Translations on Crowdin - **Last export : 02/21/14**
Hi guys !
All Github translations are now on Crowdin with the yaml format. I'm sure we lost a few Crowdin translations during the migration, but nothing really important IMO.
Documentation will be updated soon to explain how to translate Sylius. Short story here :
* **translators** : please contribute on Crowdin here https://crowdin.net/project/sylius.
* **developers** : only add or edit English translations keys in your PR.
**From now on, no PR about translations (except English files of course) will be merged.**
The **synchronization** between Github and Crowdin is done manually at the moment :
* if you need the latest Crowdin translations, please ping us here, we'll update them on Github as soon as possible
* if you have added new English keys (or updated existing keys) in your translations, please remind it to us in your PR. We'll upload your modifications on Crowdin as soon as possible.
|
non_process
|
translations on crowdin last export hi guys all github translations are now on crowdin with the yaml format i m sure we are lost a few crowdin translations during the migration but nothing real important imo documentation will be updated soon to explain how to translate sylius short story here translators please contribute on crowdin here developers only add or edit english translations keys in your pr from now on no pr about translations except english files of course will be merged the synchronization between github and crowdin is done manually at the moment if you need lastest crowdin translations please ping us here we ll update them on github as soon as possible if you have added new english keys or updated existing keys in your translations please remind it to us in your pr we ll upload your modifications on crowdin as soon as possible
| 0
|
296
| 2,732,238,963
|
IssuesEvent
|
2015-04-17 03:16:08
|
mitchellh/packer
|
https://api.github.com/repos/mitchellh/packer
|
opened
|
Atlas Post-Processor: Interpolate in metadata block
|
bug post-processor/atlas
|
We need to be able to interpolate the template in the nested metadata block in the atlas post-processor:
```json
{
"type": "atlas",
"only": ["virtualbox-iso"],
"artifact": "pearkes/test",
"artifact_type": "vagrant.box",
"metadata": {
"provider": "virtualbox",
"version": "0.0.1",
"created_at": "{{timestamp}}"
}
}
```
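Conceptually the request is to run the same template engine over the nested block; a rough Python sketch of the idea (Packer itself is written in Go, so this is illustration only):

```python
import time

# Hypothetical variable table; Packer's real template function set is larger.
VARIABLES = {"timestamp": lambda: str(int(time.time()))}

def interpolate(value):
    # Recurse into nested dicts such as the metadata block above.
    if isinstance(value, dict):
        return {key: interpolate(val) for key, val in value.items()}
    if isinstance(value, str):
        for name, func in VARIABLES.items():
            value = value.replace("{{" + name + "}}", func())
        return value
    return value

metadata = {"provider": "virtualbox", "version": "0.0.1", "created_at": "{{timestamp}}"}
print(interpolate(metadata))  # created_at now carries the current Unix time
```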
|
1.0
|
Atlas Post-Processor: Interpolate in metadata block - We need to be able to interpolate the template in the nested metadata block in the atlas post-processor:
```json
{
"type": "atlas",
"only": ["virtualbox-iso"],
"artifact": "pearkes/test",
"artifact_type": "vagrant.box",
"metadata": {
"provider": "virtualbox",
"version": "0.0.1",
"created_at": "{{timestamp}}"
}
}
```
|
process
|
atlas post processor interpolate in metadata block we need to be able to interpolate the template in the nested metadata block in the atlas post processor json type atlas only artifact pearkes test artifact type vagrant box metadata provider virtualbox version created at timestamp
| 1
|
15,137
| 18,891,961,638
|
IssuesEvent
|
2021-11-15 14:10:11
|
googleapis/nodejs-cloud-rad
|
https://api.github.com/repos/googleapis/nodejs-cloud-rad
|
closed
|
Dependency Dashboard
|
type: process
|
This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
This repository currently has no open or pending branches.
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
This repository currently has no open or pending branches.
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue provides visibility into renovate updates and their statuses this repository currently has no open or pending branches check this box to trigger a request for renovate to run again on this repository
| 1
|
187,395
| 14,427,677,432
|
IssuesEvent
|
2020-12-06 05:31:39
|
apache/skywalking
|
https://api.github.com/repos/apache/skywalking
|
closed
|
[Test] Recheck E2E / Meter
|
test
|
This test has suddenly become very unstable; it fails repeatedly in https://github.com/apache/skywalking/pull/5946/checks?check_run_id=1499245325 and on the master branch in https://github.com/apache/skywalking/runs/1496049540
Neither change is related to this feature. We need to recheck locally and try to find out what is going on.
|
1.0
|
[Test] Recheck E2E / Meter - This test has suddenly become very unstable; it fails repeatedly in https://github.com/apache/skywalking/pull/5946/checks?check_run_id=1499245325 and on the master branch in https://github.com/apache/skywalking/runs/1496049540
Neither change is related to this feature. We need to recheck locally and try to find out what is going on.
|
non_process
|
recheck meter this test suddenly becomes very unstable fails repeated in and master branch both changes have no relationship with this feature we need to recheck locally and try to find out what is going on
| 0
|
242,035
| 18,510,317,440
|
IssuesEvent
|
2021-10-20 01:32:02
|
briandelmsft/SentinelAutomationModules
|
https://api.github.com/repos/briandelmsft/SentinelAutomationModules
|
opened
|
Related Alerts - Add return schema
|
documentation
|
Add schema for the returned JSON to make it easier to use with Parse JSON step in calling logic app
|
1.0
|
Related Alerts - Add return schema - Add schema for the returned JSON to make it easier to use with Parse JSON step in calling logic app
|
non_process
|
related alerts add return schema add schema for the returned json to make it easier to use with parse json step in calling logic app
| 0
|
7,279
| 10,431,869,090
|
IssuesEvent
|
2019-09-17 09:59:24
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
waiting for approval in document
|
2.0.7 Fixed Internal Test Process bug
|
1. Go to documents.
2. Open a new document.
3. Click on "waiting for approval" on the right side.
4. Go to status "by all".

Result: the entity that was waiting for approval changed to "new" status.
|
1.0
|
waiting for approval in document - 1. Go to documents.
2. Open a new document.
3. Click on "waiting for approval" on the right side.
4. Go to status "by all".

Result: the entity that was waiting for approval changed to "new" status.
|
process
|
waiting for approval in document go to documents open new documents click on waiting for approval in right side go to status by all result the entity that was waiting for approval became a new status go to documents open new documents click on waiting for approval in right side result the entity that was waiting for approval became a new status
| 1
|
65,284
| 6,954,326,090
|
IssuesEvent
|
2017-12-07 00:47:59
|
equella/Equella
|
https://api.github.com/repos/equella/Equella
|
closed
|
6.5 Beta contribution wizard - fileAttachment size restriction.
|
bug Ready for 6.5 GA Testing
|
Once turned on, it restricts files of all sizes, not just those over the size limit. Tested with both the new drag-and-drop and the file select.
|
1.0
|
6.5 Beta contribution wizard - fileAttachment size restriction. - Once turned on, it restricts files of all sizes, not just those over the size limit. Tested with both the new drag-and-drop and the file select.
|
non_process
|
beta contribution wizard fileattachment size restriction once turned on it restricts all file sizes not just the size limit tested both with the new drag and drop and file select
| 0
|
66,706
| 7,010,877,805
|
IssuesEvent
|
2017-12-20 01:59:00
|
chainer/chainermn
|
https://api.github.com/repos/chainer/chainermn
|
closed
|
Refactor `tests` directory
|
test
|
The directory structure in `tests` should generally follow that of the source directory.
|
1.0
|
Refactor `tests` directory - The directory structure in `tests` should generally follow that of the source directory.
|
non_process
|
refactor tests directory the directory structure in tests should generally follow that of the source directory
| 0
|
10,785
| 13,608,984,259
|
IssuesEvent
|
2020-09-23 03:56:15
|
googleapis/java-resourcemanager
|
https://api.github.com/repos/googleapis/java-resourcemanager
|
closed
|
Dependency Dashboard
|
api: cloudresourcemanager type: process
|
This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-resourcemanager-0.x -->chore(deps): update dependency com.google.cloud:google-cloud-resourcemanager to v0.118.0-alpha
- [ ] <!-- rebase-branch=renovate/com.google.apis-google-api-services-cloudresourcemanager-1.x -->deps: update dependency com.google.apis:google-api-services-cloudresourcemanager to v1-rev20200907-1.30.10
- [ ] <!-- rebase-branch=renovate/com.google.errorprone-error_prone_annotations-2.x -->deps: update dependency com.google.errorprone:error_prone_annotations to v2.4.0
- [ ] <!-- rebase-branch=renovate/com.google.apis-google-api-services-cloudresourcemanager-2.x -->deps: update dependency com.google.apis:google-api-services-cloudresourcemanager to v2
- [ ] <!-- rebase-branch=renovate/org.easymock-easymock-4.x -->deps: update dependency org.easymock:easymock to v4
- [ ] <!-- rebase-branch=renovate/org.objenesis-objenesis-3.x -->deps: update dependency org.objenesis:objenesis to v3
- [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-resourcemanager-0.x -->chore(deps): update dependency com.google.cloud:google-cloud-resourcemanager to v0.118.0-alpha
- [ ] <!-- rebase-branch=renovate/com.google.apis-google-api-services-cloudresourcemanager-1.x -->deps: update dependency com.google.apis:google-api-services-cloudresourcemanager to v1-rev20200907-1.30.10
- [ ] <!-- rebase-branch=renovate/com.google.errorprone-error_prone_annotations-2.x -->deps: update dependency com.google.errorprone:error_prone_annotations to v2.4.0
- [ ] <!-- rebase-branch=renovate/com.google.apis-google-api-services-cloudresourcemanager-2.x -->deps: update dependency com.google.apis:google-api-services-cloudresourcemanager to v2
- [ ] <!-- rebase-branch=renovate/org.easymock-easymock-4.x -->deps: update dependency org.easymock:easymock to v4
- [ ] <!-- rebase-branch=renovate/org.objenesis-objenesis-3.x -->deps: update dependency org.objenesis:objenesis to v3
- [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any build deps update dependency org apache maven plugins maven project info reports plugin to chore deps update dependency com google cloud google cloud resourcemanager to alpha deps update dependency com google apis google api services cloudresourcemanager to deps update dependency com google errorprone error prone annotations to deps update dependency com google apis google api services cloudresourcemanager to deps update dependency org easymock easymock to deps update dependency org objenesis objenesis to check this option to rebase all the above open prs at once check this box to trigger a request for renovate to run again on this repository
| 1
|
17,992
| 24,010,944,708
|
IssuesEvent
|
2022-09-14 18:45:23
|
andyxcore/mlops_fraud_detection
|
https://api.github.com/repos/andyxcore/mlops_fraud_detection
|
opened
|
Populating the data store
|
preprocessing
|
Populating the store, including data preprocessing for model training
Deadline: 1 week
|
1.0
|
Populating the data store - Populating the store, including data preprocessing for model training
Deadline: 1 week
|
process
|
populating the data store populating the store including data preprocessing for model training deadline week
| 1
|
92
| 2,506,892,042
|
IssuesEvent
|
2015-01-12 14:45:51
|
cgeo/cgeo
|
https://api.github.com/repos/cgeo/cgeo
|
opened
|
Integrate retrolambda to our development tools
|
Build tools Feature Request
|
[Retrolambda](https://github.com/orfjackal/retrolambda) lets one use lambda expressions from Java 8 in Java 6, which is currently the target we use for Android.
Integrating Retrolambda into our compilation chains would let us have clearer, shorter code at many places. It would need to be done:
- [ ] for Gradle build (which itself is not polished at all and would need to get migrated to a more recent Gradle version)
- [ ] for ANT build (unless we deprecate it to use the Gradle build)
- [ ] for Eclipse
- [ ] for Intellij IDEA / Android Studio (although those could use the Gradle build)
|
1.0
|
Integrate retrolambda to our development tools - [Retrolambda](https://github.com/orfjackal/retrolambda) lets one use lambda expressions from Java 8 in Java 6, which is currently the target we use for Android.
Integrating Retrolambda into our compilation chains would let us have clearer, shorter code at many places. It would need to be done:
- [ ] for Gradle build (which itself is not polished at all and would need to get migrated to a more recent Gradle version)
- [ ] for ANT build (unless we deprecate it to use the Gradle build)
- [ ] for Eclipse
- [ ] for Intellij IDEA / Android Studio (although those could use the Gradle build)
|
non_process
|
integrate retrolambda to our development tools lets one use lambda expressions from java in java which is currently the target we use for android integrating retrolambda into our compilation chains would let us have clearer shorter code at many places it would need to be done for gradle build which itself is not polished at all and would need to get migrated to a more recent gradle version for ant build unless we deprecate it to use the gradle build for eclipse for intellij idea android studio although those could use the gradle build
| 0
|
831,990
| 32,068,242,828
|
IssuesEvent
|
2023-09-25 05:55:03
|
oceanbase/odc
|
https://api.github.com/repos/oceanbase/odc
|
opened
|
[Feature]: ODC support connect to oracle datasource
|
type-feature module-Data source management priority-high
|
### Is your feature request related to a problem?
no
### Describe the solution you'd like
ODC should support connecting to a native Oracle datasource
### Additional context
_No response_
|
1.0
|
[Feature]: ODC support connect to oracle datasource - ### Is your feature request related to a problem?
no
### Describe the solution you'd like
ODC should support connecting to a native Oracle datasource
### Additional context
_No response_
|
non_process
|
odc support connect to oracle datasource is your feature request related to a problem no describe the solution you d like odc support connect to native oracle datasource additional context no response
| 0
|
19,033
| 25,041,599,871
|
IssuesEvent
|
2022-11-04 21:29:25
|
apache/arrow-datafusion
|
https://api.github.com/repos/apache/arrow-datafusion
|
closed
|
DataFusion 14.0.0 Release
|
enhancement development-process
|
**Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
RC target date: November 4th, 2022
Issues: https://github.com/apache/arrow-datafusion/issues?q=is%3Aopen+is%3Aissue+milestone%3A14.0.0
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
1.0
|
DataFusion 14.0.0 Release - **Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
RC target date: November 4th, 2022
Issues: https://github.com/apache/arrow-datafusion/issues?q=is%3Aopen+is%3Aissue+milestone%3A14.0.0
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
process
|
datafusion release is your feature request related to a problem or challenge please describe what you are trying to do rc target date november issues describe the solution you d like a clear and concise description of what you want to happen describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here
| 1
|
10,415
| 13,208,456,107
|
IssuesEvent
|
2020-08-15 04:53:21
|
jyn514/saltwater
|
https://api.github.com/repos/jyn514/saltwater
|
opened
|
Can no longer run hello world :(
|
bug preprocessor
|
### Expected behavior
`cargo run tests/runner-tests/hello_world.c` works on linux platforms.
### Code
<!-- The code that was not interpreted correctly goes here.
This should also include the error message you got. -->
```c
#include<stdio.h>
int main() {
puts("Hello, world!");
}
```

The reported error:

```
bits/libc-header-start.h:56:4 error: invalid macro: trailing tokens in `#if` expression
#if __GLIBC_USE (IEC_60559_BFP_EXT) || __GLIBC_USE (ISOC2X)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
|
1.0
|
Can no longer run hello world :( - ### Expected behavior
`cargo run tests/runner-tests/hello_world.c` works on linux platforms.
### Code
<!-- The code that was not interpreted correctly goes here.
This should also include the error message you got. -->
```c
#include<stdio.h>
int main() {
puts("Hello, world!");
}
```

The reported error:

```
bits/libc-header-start.h:56:4 error: invalid macro: trailing tokens in `#if` expression
#if __GLIBC_USE (IEC_60559_BFP_EXT) || __GLIBC_USE (ISOC2X)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
|
process
|
can no longer run hello world expected behavior cargo run tests runner tests hello world c works on linux platforms code the code that was not interpreted correctly goes here this should also include the error message you got c include int main puts hello world bits libc header start h error invalid macro trailing tokens in if expression if glibc use iec bfp ext glibc use
| 1
|
4,541
| 7,374,704,145
|
IssuesEvent
|
2018-03-13 21:13:09
|
KantaraInitiative/wg-uma
|
https://api.github.com/repos/KantaraInitiative/wg-uma
|
closed
|
Update the Case Studies and potentially Use Cases pages
|
V2.0 process
|
These pages/documents are referenced from Core, so it's probably a good idea to ensure they have current content before final publication of UMA2 Recommendations.
|
1.0
|
Update the Case Studies and potentially Use Cases pages - These pages/documents are referenced from Core, so it's probably a good idea to ensure they have current content before final publication of UMA2 Recommendations.
|
process
|
update the case studies and potentially use cases pages these pages documents are referenced from core so it s probably a good idea to ensure they have current content before final publication of recommendations
| 1
|
425,279
| 12,338,148,767
|
IssuesEvent
|
2020-05-14 16:00:42
|
inspireui/support
|
https://api.github.com/repos/inspireui/support
|
closed
|
FluxstoreMV Latest Products & New Collections not showing on main page
|
FluxstoreMV priority !!!
|
FluxstoreMV
Latest Products & New Collections not showing on main page. Other pages working fine.
|
1.0
|
FluxstoreMV Latest Products & New Collections not showing on main page - FluxstoreMV
Latest Products & New Collections not showing on main page. Other pages working fine.
|
non_process
|
fluxstoremv latest products new collections not showing on main page fluxstoremv latest products new collections not showing on main page other pages working fine
| 0
|
150,633
| 5,782,941,012
|
IssuesEvent
|
2017-04-30 02:53:08
|
KellenWatt/hundis
|
https://api.github.com/repos/KellenWatt/hundis
|
closed
|
Problem Creation
|
enhancement high-priority
|
Allow admins to create and modify problems via the web interface. Also allow them to set up and modify problem tags and keywords.
|
1.0
|
Problem Creation - Allow admins to create and modify problems via the web interface. Also allow them to set up and modify problem tags and keywords.
|
non_process
|
problem creation allow admins to create and modify problems via the web interface also allow them to set up and modify problem tags and keywords
| 0
|
335,613
| 10,164,115,063
|
IssuesEvent
|
2019-08-07 10:51:15
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.wunderlist.com - see bug description
|
browser-firefox engine-gecko priority-normal status-needsinfo
|
<!-- @browser: Firefox 58.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:58.0) Gecko/20100101 Firefox/58.0 -->
<!-- @reported_with: addon-reporter-firefox -->
**URL**: https://www.wunderlist.com/webapp#/lists/inbox
**Browser / Version**: Firefox 58.0
**Operating System**: Mac OS X 10.13
**Tested Another Browser**: Unknown
**Problem type**: Something else
**Description**: Right click on Wunderlist tasks misbehaves
**Steps to Reproduce**:
* Visit wunderlist.com and sign-in
* Do a right click (a dropdown of tasks shows up and disappears)
Expected results:
* The dropdown stays visible
Actual results:
* The dropdown immediately disappears
Workaround:
* Hold the right click and move the mouse over the dropdown menu. The contextual menu will stay even after releasing the right click menu
Did you test it on other browsers? Yes. It works as expected on Chrome.
This affects Nightly, Developer edition and Release on Mac.
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.wunderlist.com - see bug description - <!-- @browser: Firefox 58.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:58.0) Gecko/20100101 Firefox/58.0 -->
<!-- @reported_with: addon-reporter-firefox -->
**URL**: https://www.wunderlist.com/webapp#/lists/inbox
**Browser / Version**: Firefox 58.0
**Operating System**: Mac OS X 10.13
**Tested Another Browser**: Unknown
**Problem type**: Something else
**Description**: Right click on Wunderlist tasks misbehaves
**Steps to Reproduce**:
* Visit wunderlist.com and sign-in
* Do a right click (a dropdown of tasks shows up and disappears)
Expected results:
* The dropdown stays visible
Actual results:
* The dropdown immediately disappears
Workaround:
* Hold the right click and move the mouse over the dropdown menu. The contextual menu will stay even after releasing the right click menu
Did you test it on other browsers? Yes. It works as expected on Chrome.
This affects Nightly, Developer edition and Release on Mac.
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
see bug description url browser version firefox operating system mac os x tested another browser unknown problem type something else description right click on wunderlist tasks misbehaves steps to reproduce visit wunderlist com and sign in do a right click a dropdown of tasks shows up and dissapears expected results the dropdown stays visible actual results the dropdown immediately dissapears workaround hold the right click and move the mouse over the dropdown menu the contextual menu will stay even after releasing the right click menu did you tested it on other browsers yes it works as expected on chrome this affects nightly developer edition and release on mac from with ❤️
| 0
|
371,941
| 25,974,216,897
|
IssuesEvent
|
2022-12-19 13:41:44
|
chutney-testing/chutney-testing.github.io
|
https://api.github.com/repos/chutney-testing/chutney-testing.github.io
|
closed
|
🙋 | Improve documentation consistency
|
documentation enhancement
|
### Ask your question or suggest your idea !
As we talked about it,
there is a lot of overlap or unclear use cases between Getting Started, Installation and Integration.
First idea would be to explicit use cases such as :
- Quick start with local development
- Installing a Chutney server in production
- Daily use syncing local development and shared server
### Example
**Left menu**
* Installation
* Local Development
* On-premise
* Introduction
* Minimal configuration
* Maven configuration
* Logback
* Minimal Application.yml
* Further details
* Database
* Logs
* Server (TLS/SSL)
* etc.
* advanced topics
* liquibase
* produced metrics
* LDAP authentication
* compression
* session management
* actuator
* specific values
* CI/CD integration
* synchronize
|
1.0
|
🙋 | Improve documentation consistency - ### Ask your question or suggest your idea !
As we talked about it,
there is a lot of overlap or unclear use cases between Getting Started, Installation and Integration.
First idea would be to explicit use cases such as :
- Quick start with local development
- Installing a Chutney server in production
- Daily use syncing local development and shared server
### Example
**Left menu**
* Installation
* Local Development
* On-premise
* Introduction
* Minimal configuration
* Maven configuration
* Logback
* Minimal Application.yml
* Further details
* Database
* Logs
* Server (TLS/SSL)
* etc.
* advanced topics
* liquibase
* produced metrics
* LDAP authentication
* compression
* session management
* actuator
* specific values
* CI/CD integration
* synchronize
|
non_process
|
🙋 improve documentation consistency ask your question or suggest your idea as we talked about it there is a lot of overlap or unclear use cases between getting started installation and integration first idea would be to explicit use cases such as quick start with local development installing a chutney server in production daily use syncing local development and shared server example left menu installation local development on premise introduction minimal configuration maven confguration logback minimal application yml further details database logs server tls ssl etc advance topic liquibase metrics produite authentification ldap compression session management actuator specifics value ci cd integration synchronize
| 0
|
24,018
| 2,665,517,095
|
IssuesEvent
|
2015-03-20 21:03:35
|
iFixit/iFixitAndroid
|
https://api.github.com/repos/iFixit/iFixitAndroid
|
closed
|
Add preferred image capture method on step edit
|
feature request low priority r-All someday
|
It sure would be nice to have a preferred option for image capture. I'd guess that 95+% of image capture on step edit is from the camera so it would be nice to have one-click camera open functionality from the step edit page. Right now the user is forced to click on the empty thumbnail then select "Camera" every time.
Ideally we would convert our current dialog to be as close to the default Android ["Complete action using" dialog](http://developer.android.com/training/basics/intents/sending.html#StartActivity) complete with preferred selection. The only sticking points are how to store the preference (per user or per device?) and allowing the user to select a different option. I suppose long clicking on the thumbnail could override the preferred selection if one exists.
|
1.0
|
Add preferred image capture method on step edit - It sure would be nice to have a preferred option for image capture. I'd guess that 95+% of image capture on step edit is from the camera so it would be nice to have one-click camera open functionality from the step edit page. Right now the user is forced to click on the empty thumbnail then select "Camera" every time.
Ideally we would convert our current dialog to be as close to the default Android ["Complete action using" dialog](http://developer.android.com/training/basics/intents/sending.html#StartActivity) complete with preferred selection. The only sticking points are how to store the preference (per user or per device?) and allowing the user to select a different option. I suppose long clicking on the thumbnail could override the preferred selection if one exists.
|
non_process
|
add preferred image capture method on step edit it sure would be nice to have a preferred option for image capture i d guess that of image capture on step edit is from the camera so it would be nice to have one click camera open functionality from the step edit page right now the user is forced to click on the empty thumbnail then select camera every time ideally we would convert our current dialog to be as close to the default android complete with preferred selection the only sticking points are how to store the preference per user or per device and allowing the user to select a different option i suppose long clicking on the thumbnail could override the preferred selection if one exists
| 0
|
81,473
| 3,591,381,595
|
IssuesEvent
|
2016-02-01 11:28:28
|
FStarLang/FStar
|
https://api.github.com/repos/FStarLang/FStar
|
opened
|
Collisions from different namespaces
|
bug Priority 2 question
|
```
module Bug
open FStar.Strange
open Bob.Strange
let test = Strange.my_valueX
```
If two namespaces contain a module with the same name and the same method, it is unclear which one will be used...
```
$ fstar.exe foo.fst
Verifying module: FStar.Strange
Verifying module: Bob.Strange
Verifying module: Bug
All verification conditions discharged successfully
```
I believe there are two problems here:
1 - `Strange.my_valueX` should not go through as neither `open FStar` nor `open Bob` are present
2 - In the case both are open, F* should fail, saying there is a possible clash (which is not the case even if you remove `Strange.` on top of the )
I think we should enforce the two rules, but the first one should be discussed.
WDYT ? ;)
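As a hedged aside, the behaviour asked for in point 2 is what C++ does for ambiguous unqualified lookup; the sketch below is only an analogue (not F* semantics), with invented namespace and value names:
```cpp
#include <iostream>

// Two namespaces exporting the same name, mirroring the two Strange modules.
namespace FStarStrange { int my_value = 1; }
namespace BobStrange   { int my_value = 2; }

using namespace FStarStrange;
using namespace BobStrange;

int main() {
    // std::cout << my_value;  // would not compile: 'my_value' is ambiguous
    // Qualified access resolves the clash explicitly:
    std::cout << FStarStrange::my_value + BobStrange::my_value << '\n';
    return 0;
}
```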
|
1.0
|
Collisions from different namespaces - ```
module Bug
open FStar.Strange
open Bob.Strange
let test = Strange.my_valueX
```
If two namespaces contain a module with the same name and the same method, it is unclear which one will be used...
```
$ fstar.exe foo.fst
Verifying module: FStar.Strange
Verifying module: Bob.Strange
Verifying module: Bug
All verification conditions discharged successfully
```
I believe there are two problems here:
1 - `Strange.my_valueX` should not go through as neither `open FStar` nor `open Bob` are present
2 - In the case both are open, F* should fail, saying there is a possible clash (which is not the case even if you remove `Strange.` on top of the )
I think we should enforce the two rules, but the first one should be discussed.
WDYT ? ;)
|
non_process
|
collisions from different namespaces module bug open fstar strange open bob strange let test strange my valuex if two namespaces contain a module with the same name and the same method it is unclear which one will be used fstar exe foo fst verifying module fstar strange verifying module bob strange verifying module bug all verification conditions discharged successfully i believe there is two problems here strange my valuex should not go through as neither open fstar nor open bob are present in the case both are open f should fail saying there is a possible clash which is not the case even if you remove strange on top of the i think we should enforce the two rules but the first one should be discussed wdyt
| 0
|
52,560
| 13,002,394,871
|
IssuesEvent
|
2020-07-24 03:08:45
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Maven dependency resolving info message given when running a module
|
Area/BuildTools Type/Bug
|
I tried running an existing module in a project with a build from the current master (slp3-snapshot) and I got the following output on the console:
```
$ ballerina run fizz
Resolving maven dependencies...
Downloading dependencies into /home/pubudu/b7a-projects/22930/target/platform-libs
Compiling source
pubudu/fizz:0.1.0
Creating balos
target/balo/fizz-2020r2-java8-0.1.0.balo
target/bin/fizz.jar
Running executables
typedesc int
one=10 two=20
```
A couple of issues with the first two lines in the above output. Why does it print `Resolving maven dependencies...`? I have the following Ballerina.toml file:
```
[project]
org-name= "pubudu"
version= "0.1.0"
[dependencies]
"pubudu/bar" = { path = "/home/pubudu/b7a-projects/test/target/balo/bar-2020r1-java8-0.1.0.balo" }
[platform]
target = "java8"
[[platform.libraries]]
path = "./javalibs/hello-1.0-SNAPSHOT.jar"
modules = ["foo", "baz", "fizz"]
```
Also, why does it print `Downloading dependencies into /home/pubudu/b7a-projects/22930/target/platform-libs`? When I checked the target directory, there wasn't a directory named `platform-libs`.
And then, a couple of issues with the output messages themselves:
- There's an empty newline between the prompt and `Resolving maven dependencies...`. We haven't kept a new line in other cases. e.g.,
```
$ ballerina run ../test2.bal
Compiling source
test2.bal
error: .::test2.bal:12:5: undefined symbol 'x'
error: .::test2.bal:12:9: undefined symbol 'a'
error: .::test2.bal:12:9: invalid remote method call: expected a client object, but found 'other'
error: .::test2.bal:12:20: undefined symbol 'c'
```
- Typo in the same message: maven -> Maven
|
1.0
|
Maven dependency resolving info message given when running a module - I tried running an existing module in a project with a build from the current master (slp3-snapshot) and I got the following output on the console:
```
$ ballerina run fizz
Resolving maven dependencies...
Downloading dependencies into /home/pubudu/b7a-projects/22930/target/platform-libs
Compiling source
pubudu/fizz:0.1.0
Creating balos
target/balo/fizz-2020r2-java8-0.1.0.balo
target/bin/fizz.jar
Running executables
typedesc int
one=10 two=20
```
A couple of issues with the first two lines in the above output. Why does it print `Resolving maven dependencies...`? I have the following Ballerina.toml file:
```
[project]
org-name= "pubudu"
version= "0.1.0"
[dependencies]
"pubudu/bar" = { path = "/home/pubudu/b7a-projects/test/target/balo/bar-2020r1-java8-0.1.0.balo" }
[platform]
target = "java8"
[[platform.libraries]]
path = "./javalibs/hello-1.0-SNAPSHOT.jar"
modules = ["foo", "baz", "fizz"]
```
Also, why does it print `Downloading dependencies into /home/pubudu/b7a-projects/22930/target/platform-libs`? When I checked the target directory, there wasn't a directory named `platform-libs`.
And then, a couple of issues with the output messages themselves:
- There's an empty newline between the prompt and `Resolving maven dependencies...`. We haven't kept a new line in other cases. e.g.,
```
$ ballerina run ../test2.bal
Compiling source
test2.bal
error: .::test2.bal:12:5: undefined symbol 'x'
error: .::test2.bal:12:9: undefined symbol 'a'
error: .::test2.bal:12:9: invalid remote method call: expected a client object, but found 'other'
error: .::test2.bal:12:20: undefined symbol 'c'
```
- Typo in the same message: maven -> Maven
|
non_process
|
maven dependency resolving info message given when running a module i tried running an existing module in a project with a build from the current master snapshot and i got the following output on the console ballerina run fizz resolving maven dependencies downloading dependencies into home pubudu projects target platform libs compiling source pubudu fizz creating balos target balo fizz balo target bin fizz jar running executables typedesc int one two a couple of issues with the first two lines in the above output why does it print resolving maven dependencies i have the following ballerina toml file org name pubudu version pubudu bar path home pubudu projects test target balo bar balo target path javalibs hello snapshot jar modules also why does it print downloading dependencies into home pubudu projects target platform libs when i checked the target directory there wasn t a directory named platform libs and then a couple of issues with the output messages themselves there s an empty newline between the prompt and resolving maven dependencies we haven t kept a new line in other cases e g ballerina run bal compiling source bal error bal undefined symbol x error bal undefined symbol a error bal invalid remote method call expected a client object but found other error bal undefined symbol c typo in the same message maven maven
| 0
|
9,005
| 12,121,136,782
|
IssuesEvent
|
2020-04-22 08:49:44
|
googleapis/python-bigquery
|
https://api.github.com/repos/googleapis/python-bigquery
|
closed
|
chore: replace Artman with bazel for synthesizing code
|
api: bigquery type: process
|
The synthtool should start using bazel instead of Artman.
|
1.0
|
chore: replace Artman with bazel for synthesizing code - The synthtool should start using bazel instead of Artman.
|
process
|
chore replace artman with bazel for synthesizing code the synthtool should start using bazel instead of artman
| 1
|
166,356
| 12,951,041,248
|
IssuesEvent
|
2020-07-19 15:21:48
|
measurement-kit/measurement-kit
|
https://api.github.com/repos/measurement-kit/measurement-kit
|
closed
|
IPv6 Testing
|
new test priority/low
|
Currently, MK tests still have some IPv4 assumptions. It would be great to extend the testing to include a comparison with v6 connectivity to determine how access to sites differs when using a v6 connection.
This involves:
* [ ] AAAA dns lookups.
* [ ] Explicit connection attempts to v6 hosts.
* [ ] HTTP requests over both v4 and v6 connections
|
1.0
|
IPv6 Testing - Currently, MK tests still have some IPv4 assumptions. It would be great to extend the testing to include a comparison with v6 connectivity to determine how access to sites differs when using a v6 connection.
This involves:
* [ ] AAAA dns lookups.
* [ ] Explicit connection attempts to v6 hosts.
* [ ] HTTP requests over both v4 and v6 connections
|
non_process
|
testing currently mk tests are still have some assumptions it would be great to extend the testing to include a comparison with connectivity to determine how access to sites differ when using a connection this involves aaaa dns lookups explicit connection attempts to hosts http requests over both and connections
| 0
|
123,445
| 16,494,516,624
|
IssuesEvent
|
2021-05-25 08:51:07
|
nextcloud/server
|
https://api.github.com/repos/nextcloud/server
|
closed
|
Suggest servers to trust based on existing federated shares
|
1. to develop design enhancement feature: federation feature: settings
|
* from #2672
* in the federation section show suggestions for trusted servers based on received/sent shares sorted by usage stats
cc @schiessle @jancborchardt
|
1.0
|
Suggest servers to trust based on existing federated shares - * from #2672
* in the federation section show suggestions for trusted servers based on received/sent shares sorted by usage stats
cc @schiessle @jancborchardt
|
non_process
|
suggest servers to trust based on existing federated shares from in the federation section show suggestions for trusted servers based on received send shares sorted by usage stats cc schiessle jancborchardt
| 0
|
6,174
| 9,083,773,109
|
IssuesEvent
|
2019-02-17 23:10:06
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
Manually running clang-format is more strict than check-style.sh?
|
type: process
|
It seems that manually running clang-format produces more strict formatting (particularly the 80 column limit) than running it via `ci/check-style.sh`. Maybe I am just seeing things, but recording it in case it happens to others and we can detect a pattern if that is the case.
|
1.0
|
Manually running clang-format is more strict than check-style.sh? - It seems that manually running clang-format produces more strict formatting (particularly the 80 column limit) than running it via `ci/check-style.sh`. Maybe I am just seeing things, but recording it in case it happens to others and we can detect a pattern if that is the case.
|
process
|
manually running clang format is more strict that check style sh it seems that manually running clang format produces more strict formatting particularly the column limit than running it via ci check style sh maybe i am just seeing things but recording it in case it happens to others and we can detect a pattern if that is the case
| 1
|
16,102
| 20,323,800,853
|
IssuesEvent
|
2022-02-18 02:42:17
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
store settings for recently used algorithms
|
Feedback stale Processing Feature Request
|
Author Name: **Salvatore Larosa** (@slarosa)
Original Redmine Issue: [6160](https://issues.qgis.org/issues/6160)
Redmine category:processing/gui
Assignee: Victor Olaya
---
It would be useful to add support for storing settings for recently used algorithms, especially for the FieldPyculator algorithm and others in which code needs to be written, to speed up the operation!
|
1.0
|
store settings for recently used algorithms - Author Name: **Salvatore Larosa** (@slarosa)
Original Redmine Issue: [6160](https://issues.qgis.org/issues/6160)
Redmine category:processing/gui
Assignee: Victor Olaya
---
It would be useful to add support for storing settings for recently used algorithms, especially for the FieldPyculator algorithm and others in which code needs to be written, to speed up the operation!
|
process
|
store settings for recently used algorithms author name salvatore larosa slarosa original redmine issue redmine category processing gui assignee victor olaya would be useful add the support to store settings for recently used algs especially for fieldpyculator alg and other in which is need to write of the code so speed up the operation
| 1
|
172,277
| 14,355,478,260
|
IssuesEvent
|
2020-11-30 10:09:28
|
radicle-dev/radicle-upstream
|
https://api.github.com/repos/radicle-dev/radicle-upstream
|
closed
|
Announce releases on discourse
|
documentation infrastructure
|
When cutting a release, we should announce the following information with the help of a template:
* summary of changes
* links to binaries
* where to get support
* where to file bugs
|
1.0
|
Announce releases on discourse - When cutting a release, we should announce the following information with the help of a template:
* summary of changes
* links to binaries
* where to get support
* where to file bugs
|
non_process
|
announce releases on discourse when cutting a release we should with the help of a template announce the following information summary of changes links to binaries where to get support where to file bugs
| 0
|
17,137
| 22,675,474,582
|
IssuesEvent
|
2022-07-04 03:47:01
|
pingcap/tiflash
|
https://api.github.com/repos/pingcap/tiflash
|
closed
|
The order of the executor summary is wrong for the executor list
|
type/bug severity/moderate component/coprocessor component/compute
|
## Bug Report
In https://github.com/pingcap/tiflash/pull/3937, The type of `profile_streams_map` is changed from `std::map<String, ProfileStreamsInfo>` to `std::unordered_map<String, BlockInputStreams>`. https://github.com/pingcap/tiflash/pull/3937/files#diff-6bc200816961f4a8ad0c24cfaf2d0975ad614e78d153fda6566c44c931908646R263
For executor list, executor_id is in the form of `${i}_${type}`, so the order of `map<executor_id, profile_stream>` is the same as the order of executor list. And the executor summary depends on the order of the executor list.
https://github.com/pingcap/tiflash/blob/f84d7e37e7c850891048ec3efb2cf80e5a32adb3/dbms/src/Flash/Coprocessor/DAGQueryBlock.cpp#L172-L232
https://github.com/pingcap/tiflash/blob/f84d7e37e7c850891048ec3efb2cf80e5a32adb3/dbms/src/Flash/Coprocessor/DAGResponseWriter.cpp#L25
Fortunately, executor list is currently only used in TiSpark :)
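As a hedged illustration of the ordering point (TiFlash itself is C++): `std::map` iterates its keys in sorted order, so ids of the form `${i}_${type}` come back in executor-list order, while `std::unordered_map` gives no ordering guarantee at all. The executor ids below are invented for the example:
```cpp
#include <iostream>
#include <map>
#include <string>
#include <unordered_map>

int main() {
    // Keys shaped like "${i}_${type}" (ids made up for illustration).
    const std::string ids[] = {"0_TableScan", "1_Selection", "2_Aggregation"};

    std::map<std::string, int> ordered;              // iterates in sorted key order
    std::unordered_map<std::string, int> unordered;  // iteration order unspecified
    for (int i = 0; i < 3; ++i) { ordered[ids[i]] = i; unordered[ids[i]] = i; }

    for (const auto& kv : ordered)   std::cout << kv.first << ' ';  // 0_ 1_ 2_
    std::cout << '\n';
    for (const auto& kv : unordered) std::cout << kv.first << ' ';  // any order
    std::cout << '\n';
    return 0;
}
```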
|
1.0
|
The order of the executor summary is wrong for the executor list - ## Bug Report
In https://github.com/pingcap/tiflash/pull/3937, The type of `profile_streams_map` is changed from `std::map<String, ProfileStreamsInfo>` to `std::unordered_map<String, BlockInputStreams>`. https://github.com/pingcap/tiflash/pull/3937/files#diff-6bc200816961f4a8ad0c24cfaf2d0975ad614e78d153fda6566c44c931908646R263
For executor list, executor_id is in the form of `${i}_${type}`, so the order of `map<executor_id, profile_stream>` is the same as the order of executor list. And the executor summary depends on the order of the executor list.
https://github.com/pingcap/tiflash/blob/f84d7e37e7c850891048ec3efb2cf80e5a32adb3/dbms/src/Flash/Coprocessor/DAGQueryBlock.cpp#L172-L232
https://github.com/pingcap/tiflash/blob/f84d7e37e7c850891048ec3efb2cf80e5a32adb3/dbms/src/Flash/Coprocessor/DAGResponseWriter.cpp#L25
Fortunately, executor list is currently only used in TiSpark :)
|
process
|
the order of the executor summary is wrong for the executor list bug report in the type of profile streams map is changed from std map to std unordered map for executor list executor id is in the form of i type so the order of map is the same as the order of executor list and the executor summary depends on the order of the executor list fortunately executor list is currently only used in tispark
| 1
|
18,330
| 24,446,076,538
|
IssuesEvent
|
2022-10-06 18:05:30
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
closed
|
Memory layout of procedure locals
|
enhancement processor
|
The current issue is related to procedure locals. The following example illustrates the issue:
```
proc.outer.2
locaddr.0
add.1
locaddr.1
assert_eq
end
begin
exec.outer
end
```
Intuitively, the code above should work, but it raises an error because of the assertion. After discussing it with @bobbinth, it seems that the memory layout is not in increasing order but is decreasing. This means that the above code should work when substituting `add.1` with `sub.1`, and that is in fact the case.
I think, and @bobbinth too, that it is better to have the layout in an increasing order, unless there are any strong arguments for the current design. The required changes should be pretty much straightforward but it would be great to hear from everybody, and especially @itzmeanjan , if this might create any issues with some of our current implementations.
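To make the two layouts concrete, here is a minimal sketch (plain C++, not the Miden VM; the base address is invented for illustration): with an increasing layout `locaddr.i` behaves like `base + i`, so `locaddr.0` plus one equals `locaddr.1`; with the current decreasing layout it behaves like `base - i`, which is why only the `sub.1` variant passes.
```cpp
#include <cassert>
#include <cstdint>

// Illustrative address of procedure-local `index` under each layout.
uint64_t locaddr_increasing(uint64_t base, uint64_t index) { return base + index; }
uint64_t locaddr_decreasing(uint64_t base, uint64_t index) { return base - index; }

int main() {
    const uint64_t base = 1000;  // made-up base address
    // Increasing layout: the assertion from the snippet above would hold.
    assert(locaddr_increasing(base, 0) + 1 == locaddr_increasing(base, 1));
    // Decreasing (current) layout: only the sub.1 variant holds.
    assert(locaddr_decreasing(base, 0) - 1 == locaddr_decreasing(base, 1));
    return 0;
}
```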
|
1.0
|
Memory layout of procedure locals - The current issue is related to procedure locals. The following example illustrates the issue:
```
proc.outer.2
locaddr.0
add.1
locaddr.1
assert_eq
end
begin
exec.outer
end
```
Intuitively, the code above should work, but it raises an error because of the assertion. After discussing it with @bobbinth, it seems that the memory layout is not in increasing order but is decreasing. This means that the above code should work when substituting `add.1` with `sub.1`, and that is in fact the case.
I think, and @bobbinth too, that it is better to have the layout in an increasing order, unless there are any strong arguments for the current design. The required changes should be pretty much straightforward but it would be great to hear from everybody, and especially @itzmeanjan , if this might create any issues with some of our current implementations.
|
process
|
memory layout of procedure locals the current issue is related to procedure locals the following example illustrates the issue proc outer locaddr add locaddr assert eq end begin exec outer end intuitively the following code should work but it raises an error because of the assertion after discussing it with bobbinth it seems that the memory layout is not in increasing order but is decreasing this means that the above code should work when substituting add with sub and that is in fact the case i think and bobbinth too that it is better to have the layout in an increasing order unless there are any strong arguments for the current design the required changes should be pretty much straightforward but it would be great to hear from everybody and especially itzmeanjan if this might create any issues with some of our current implementations
| 1
|
31,518
| 11,949,661,597
|
IssuesEvent
|
2020-04-03 14:00:31
|
gcdevops/HRWhiteListing
|
https://api.github.com/repos/gcdevops/HRWhiteListing
|
closed
|
Dataflow/Network Diagram
|
security
|
A diagram outlining all the systems, how they communicate (port/protocol, type of data), users interacting with the system and actions taken.
Ideally this would be generated as much as possible from your actual infrastructure or your infrastructure as code.
This will be used as input into SA&A and ATO process. Will also be required for Privacy Checklist.
|
True
|
Dataflow/Network Diagram - A diagram outlining all the systems, how they communicate (port/protocol, type of data), users interacting with the system and actions taken.
Ideally this would be generated as much as possible from your actual infrastructure or your infrastructure as code.
This will be used as input into SA&A and ATO process. Will also be required for Privacy Checklist.
|
non_process
|
dataflow network diagram a diagram outlining all the systems how they communicate port protocol type of data users interacting with the system and actions taken ideally this would be generated as much as possible from your actual infrastructure or your infrastructure as code this will be used as input into sa a and ato process will also be required for privacy checklist
| 0
|
7,161
| 10,309,529,832
|
IssuesEvent
|
2019-08-29 13:28:42
|
zammad/zammad
|
https://api.github.com/repos/zammad/zammad
|
opened
|
mail processing impossible: invalid byte sequence in UTF-8
|
bug mail processing verified
|
<!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 3.1
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
### Expected behavior:
Zammad imports mails without trouble.
### Actual behavior:
Zammad is unable to import specific mails with throwing the following Traceback:
```
/home/xxx/zammad/app/models/channel/email_parser.rb:133:in `rescue in process': #<ArgumentError: invalid byte sequence in UTF-8> (RuntimeError)
/home/xxx/zammad/lib/core_ext/string.rb:289:in `gsub!'
/home/xxx/zammad/lib/core_ext/string.rb:289:in `html2text'
/home/xxx/zammad/app/models/channel/filter/follow_up_check.rb:34:in `block in run'
/home/xxx/zammad/app/models/channel/filter/follow_up_check.rb:31:in `each'
/home/xxx/zammad/app/models/channel/filter/follow_up_check.rb:31:in `run'
/home/xxx/zammad/app/models/channel/email_parser.rb:152:in `block in _process'
/home/xxx/zammad/app/models/channel/email_parser.rb:149:in `each'
/home/xxx/zammad/app/models/channel/email_parser.rb:149:in `_process'
/home/xxx/zammad/app/models/channel/email_parser.rb:115:in `block in process'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:93:in `block in timeout'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:33:in `block in catch'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:33:in `catch'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:33:in `catch'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:108:in `timeout'
/home/xxx/zammad/app/models/channel/email_parser.rb:114:in `process'
/home/xxx/zammad/app/models/channel/email_parser.rb:482:in `block in process_unprocessable_mails'
/home/xxx/zammad/app/models/channel/email_parser.rb:481:in `glob'
/home/xxx/zammad/app/models/channel/email_parser.rb:481:in `process_unprocessable_mails'
```
### Steps to reproduce the behavior:
* Have a somewhat broken mail
* Try to import it automatically, via Channel.fetch and via STDIN
-> fails
Yes I'm sure this is a bug and no feature request or a general question.
|
1.0
|
mail processing impossible: invalid byte sequence in UTF-8 - <!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 3.1
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
### Expected behavior:
Zammad imports mails without trouble.
### Actual behavior:
Zammad is unable to import specific mails with throwing the following Traceback:
```
/home/xxx/zammad/app/models/channel/email_parser.rb:133:in `rescue in process': #<ArgumentError: invalid byte sequence in UTF-8> (RuntimeError)
/home/xxx/zammad/lib/core_ext/string.rb:289:in `gsub!'
/home/xxx/zammad/lib/core_ext/string.rb:289:in `html2text'
/home/xxx/zammad/app/models/channel/filter/follow_up_check.rb:34:in `block in run'
/home/xxx/zammad/app/models/channel/filter/follow_up_check.rb:31:in `each'
/home/xxx/zammad/app/models/channel/filter/follow_up_check.rb:31:in `run'
/home/xxx/zammad/app/models/channel/email_parser.rb:152:in `block in _process'
/home/xxx/zammad/app/models/channel/email_parser.rb:149:in `each'
/home/xxx/zammad/app/models/channel/email_parser.rb:149:in `_process'
/home/xxx/zammad/app/models/channel/email_parser.rb:115:in `block in process'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:93:in `block in timeout'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:33:in `block in catch'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:33:in `catch'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:33:in `catch'
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/timeout.rb:108:in `timeout'
/home/xxx/zammad/app/models/channel/email_parser.rb:114:in `process'
/home/xxx/zammad/app/models/channel/email_parser.rb:482:in `block in process_unprocessable_mails'
/home/xxx/zammad/app/models/channel/email_parser.rb:481:in `glob'
/home/xxx/zammad/app/models/channel/email_parser.rb:481:in `process_unprocessable_mails'
```
### Steps to reproduce the behavior:
* Have a somewhat broken mail
* Try to import it automatically, via Channel.fetch and via STDIN
-> fails
Yes I'm sure this is a bug and no feature request or a general question.
|
process
|
mail processing impossible invalid byte sequence in utf hi there thanks for filing an issue please ensure the following things before creating an issue thank you 🤓 since november we handle all requests except real bugs at our community board full explanation please post feature requests development questions technical questions on the board if you think you hit a bug please continue search existing issues and the changelog md for your issue there might be a solution already make sure to use the latest version of zammad if possible add the log production log file from your system attention make sure no confidential data is in it please write the issue in english don t remove the template otherwise we will close the issue without further comments ask questions about zammad configuration and usage at our mailinglist see note we always do our best unfortunately sometimes there are too many requests and we can t handle everything at once if you want to prioritize escalate your issue you can do so by means of a support contract see the upper textblock will be removed automatically when you submit your issue infos used zammad version installation method source package any operating system any database version any elasticsearch version any browser version any expected behavior zammad importants mails without trouble actual behavior zammad is unable to import specific mails with throwing the following traceback home xxx zammad app models channel email parser rb in rescue in process runtimeerror home xxx zammad lib core ext string rb in gsub home xxx zammad lib core ext string rb in home xxx zammad app models channel filter follow up check rb in block in run home xxx zammad app models channel filter follow up check rb in each home xxx zammad app models channel filter follow up check rb in run home xxx zammad app models channel email parser rb in block in process home xxx zammad app models channel email parser rb in each home xxx zammad app models channel email parser rb in process home xxx zammad app models channel email parser rb in block in process usr local rvm rubies ruby lib ruby timeout rb in block in timeout usr local rvm rubies ruby lib ruby timeout rb in block in catch usr local rvm rubies ruby lib ruby timeout rb in catch usr local rvm rubies ruby lib ruby timeout rb in catch usr local rvm rubies ruby lib ruby timeout rb in timeout home xxx zammad app models channel email parser rb in process home xxx zammad app models channel email parser rb in block in process unprocessable mails home xxx zammad app models channel email parser rb in glob home xxx zammad app models channel email parser rb in process unprocessable mails steps to reproduce the behavior have a somewhat broken mail try to import it automatically via channel fetch and via std in fails yes i m sure this is a bug and no feature request or a general question
| 1
|
804,642
| 29,495,908,258
|
IssuesEvent
|
2023-06-02 16:57:14
|
apcountryman/picolibrary
|
https://api.github.com/repos/apcountryman/picolibrary
|
closed
|
Overhaul Microchip MCP23S08 communication controller automated tests
|
priority-normal status-awaiting_review type-refactoring
|
Overhaul Microchip MCP23S08 communication controller (`::picolibrary::Microchip::MCP23S08::Communication_Controller`) automated tests.
- [x] Replace runtime random number generation with pre-generated random numbers
- [x] Remove trivial tests where appropriate
- [x] Replace test loops with value parameterized tests
- [x] Split tests with blocks into multiple named tests
- [x] Overhaul test style/structure
|
1.0
|
Overhaul Microchip MCP23S08 communication controller automated tests - Overhaul Microchip MCP23S08 communication controller (`::picolibrary::Microchip::MCP23S08::Communication_Controller`) automated tests.
- [x] Replace runtime random number generation with pre-generated random numbers
- [x] Remove trivial tests where appropriate
- [x] Replace test loops with value parameterized tests
- [x] Split tests with blocks into multiple named tests
- [x] Overhaul test style/structure
|
non_process
|
overhaul microchip communication controller automated tests overhaul microchip communication controller picolibrary microchip communication controller automated tests replace runtime random number generation with pre generated random numbers remove trivial tests where appropriate replace test loops with value parameterized tests split tests with blocks into multiple named tests overhaul test style structure
| 0
|
368,610
| 25,800,178,258
|
IssuesEvent
|
2022-12-10 23:33:20
|
SteamDeckHomebrew/decky-loader
|
https://api.github.com/repos/SteamDeckHomebrew/decky-loader
|
closed
|
Repository - Issue and Pull Request templates
|
documentation enhancement help wanted
|
Provide issue templates for bug reporting and feature requests, and provide a pull request template for consistency. This would facilitate investigating bugs.
|
1.0
|
Repository - Issue and Pull Request templates - Provide issue templates for bug reporting and feature requests, and provide a pull request template for consistency. This would facilitate investigating bugs.
|
non_process
|
repository issue and pull request templates provide issue templates for bug reporting requesting a feature and provide a pull request template for consistency would facilitates investigating bugs
| 0
|
482,055
| 13,896,298,839
|
IssuesEvent
|
2020-10-19 16:59:41
|
yalelibrary/YUL-DC
|
https://api.github.com/repos/yalelibrary/YUL-DC
|
closed
|
Advanced Search will clear if I switch from one view to another (Bugherd) HIGH VALUE
|
bug high priority software engineering
|
See bugherd ticket: https://www.bugherd.com/t/lvDfEKWQx4Tan6RBXIxMtw
If I do an advanced search and then switch from gallery to list view, the search clears (and vice versa).
1. Go to advanced search and do a search (I'm partial to gen mss in call number)
2. on the screen, switch to whatever view you aren't on (so gallery -> list or list -> gallery)
3. You'll see your search is wiped
|
1.0
|
Advanced Search will clear if I switch from one view to another (Bugherd) HIGH VALUE - See bugherd ticket: https://www.bugherd.com/t/lvDfEKWQx4Tan6RBXIxMtw
If I do an advanced search and then switch from gallery to list view, the search clears (and vice versa).
1. Go to advanced search and do a search (I'm partial to gen mss in call number)
2. on the screen, switch to whatever view you aren't on (so gallery -> list or list -> gallery)
3. You'll see your search is wiped
|
non_process
|
advanced search will clear if i switch from one view to another bugherd high value see bugherd ticket if i did an advanced search and then switch from gallery to list view the search clears and vice versa go to advanced search and go a search i m partial to gen mss in call number on the screen switch to whatever view you aren t on so gallery list or list gallery you ll see your search is wiped
| 0
|
2,237
| 5,088,622,088
|
IssuesEvent
|
2016-12-31 23:26:18
|
sw4j-org/tool-jpa-processor
|
https://api.github.com/repos/sw4j-org/tool-jpa-processor
|
opened
|
Handle @JoinColumn Annotation
|
annotation processor task
|
Handle the `@JoinColumn` annotation for a property or field.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.25 JoinColumn Annotation
|
1.0
|
Handle @JoinColumn Annotation - Handle the `@JoinColumn` annotation for a property or field.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.25 JoinColumn Annotation
|
process
|
handle joincolumn annotation handle the joincolumn annotation for a property or field see joincolumn annotation
| 1
|
161,812
| 12,566,642,098
|
IssuesEvent
|
2020-06-08 11:35:41
|
zoe/covid-tracker-react-native
|
https://api.github.com/repos/zoe/covid-tracker-react-native
|
closed
|
Create Unit test for "Thank you.." page
|
test
|
"Thank you.." page
a) Share this app button
b) News feed link
c) Done button
|
1.0
|
Create Unit test for "Thank you.." page - "Thank you.." page
a) Share this app button
b) News feed link
c) Done button
|
non_process
|
create unit test for thank you page thank you page a share this app button b news feed link c done button
| 0
|
70,250
| 3,321,608,482
|
IssuesEvent
|
2015-11-09 09:57:57
|
CoderDojo/community-platform
|
https://api.github.com/repos/CoderDojo/community-platform
|
closed
|
Cannot update founder of Dojo
|
bug dojo admin dojo listings high priority
|
Getting a 400 bad request error when logged in as a CDF admin and trying to update the founder.
Request details sent to Mihai.
|
1.0
|
Cannot update founder of Dojo - Getting a 400 bad request error when logged in as a CDF admin and trying to update the founder.
Request details sent to Mihai.
|
non_process
|
cannot update founder of dojo getting a bad request error when logged in as a cdf admin and trying to update the founder request details sent to mihai
| 0
|
8,878
| 11,980,306,566
|
IssuesEvent
|
2020-04-07 09:07:11
|
prisma/prisma2-e2e-tests
|
https://api.github.com/repos/prisma/prisma2-e2e-tests
|
closed
|
Heroku: remove VCS committed generated code if possible
|
kind/improvement process/candidate
|
We currently commit generated code on Heroku:
https://github.com/prisma/prisma2-e2e-tests/tree/master/platforms/heroku/prisma/prisma-client-js
@divyenduz I'd like to undo this in the future if possible. If it's not possible, please write here why.
|
1.0
|
Heroku: remove VCS committed generated code if possible - We currently commit generated code on Heroku:
https://github.com/prisma/prisma2-e2e-tests/tree/master/platforms/heroku/prisma/prisma-client-js
@divyenduz I'd like to undo this in the future if possible. If it's not possible, please write here why.
|
process
|
heroku remove vcs committed generated code if possible we currently commit generated on heroku divyenduz i d like to undo this in the future if possible if it s not possible please write here why
| 1
|
19,970
| 26,450,731,002
|
IssuesEvent
|
2023-01-16 11:00:04
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
System.Diagnostics.Process.StandardInput.WriteLine does not work properly on macOS.
|
area-System.Diagnostics.Process untriaged
|
### Description
When I start zsh using `System.Diagnostics.Process.Start` on macOS and try to enter a command using `System.Diagnostics.Process.StandardInput.WriteLine` to zsh, it is not entered.
### Reproduction Steps
The following code can be easily checked:
```fsharp
open System.Diagnostics
open System.Text
[<Literal>]
let private zsh' = "/System/Applications/Utilities/Terminal.app/Contents/MacOS/Terminal"
let exec () =
use p =
ProcessStartInfo (zsh',
UseShellExecute = false,
RedirectStandardInput = true,
RedirectStandardOutput = true,
CreateNoWindow = true)
|> System.Diagnostics.Process.Start
let stdout = StringBuilder()
p.OutputDataReceived.Add (fun e -> if e.Data <> null then stdout.AppendLine(e.Data) |> ignore)
p.BeginOutputReadLine()
p.StandardInput.WriteLine "ls ./"
p.StandardInput.WriteLine "exit"
p.WaitForExit()
stdout.ToString()
```
It is reproduced for both net6.0 and net7.0.
### Expected behavior
Must be able to enter commands into zsh with `System.Diagnostics.Process.StandardInput.WriteLine` and get execution results.
### Actual behavior
`System.Diagnostics.Process.StandardInput.WriteLine` cannot be used to enter commands into zsh, so `exit` is not executed and `System.Diagnostics.Process.WaitForExit` waits indefinitely.
### Regression?
No idea.
### Known Workarounds
Nothing.
### Configuration
- Which version of .NET is the code running on?
→ net6.0 / net7.0
- What OS and version, and what distro if applicable?
→ macOS Venture 13.1
- What is the architecture (x64, x86, ARM, ARM64)?
→ ARM64
### Other information
_No response_
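For comparison, a hedged POSIX sketch of the same pattern outside .NET: feed commands to a shell's stdin through a pipe and wait for it to exit. It uses plain `popen`/`pclose`, assumes `zsh` is on `PATH`, and says nothing about where the .NET bug lies:
```cpp
#include <cstdio>

int main() {
    // Open zsh with its stdin connected to our write end of a pipe;
    // its stdout is inherited, so command output goes to our stdout.
    FILE* sh = popen("zsh", "w");
    if (!sh) return 1;
    fputs("ls ./\n", sh);  // the command from the repro
    fputs("exit\n", sh);   // make the shell terminate
    return pclose(sh) == -1 ? 1 : 0;  // pclose waits for the shell to exit
}
```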
|
1.0
|
System.Diagnostics.Process.StandardInput.WriteLine does not work properly on macOS. - ### Description
When I start zsh using `System.Diagnostics.Process.Start` on macOS and try to enter a command using `System.Diagnostics.Process.StandardInput.WriteLine` to zsh, it is not entered.
### Reproduction Steps
The following code can be easily checked:
```fsharp
open System.Diagnostics
open System.Text
[<Literal>]
let private zsh' = "/System/Applications/Utilities/Terminal.app/Contents/MacOS/Terminal"
let exec () =
use p =
ProcessStartInfo (zsh',
UseShellExecute = false,
RedirectStandardInput = true,
RedirectStandardOutput = true,
CreateNoWindow = true)
|> System.Diagnostics.Process.Start
let stdout = StringBuilder()
p.OutputDataReceived.Add (fun e -> if e.Data <> null then stdout.AppendLine(e.Data) |> ignore)
p.BeginOutputReadLine()
p.StandardInput.WriteLine "ls ./"
p.StandardInput.WriteLine "exit"
p.WaitForExit()
stdout.ToString()
```
It is reproduced for both net6.0 and net7.0.
### Expected behavior
Must be able to enter commands into zsh with `System.Diagnostics.Process.StandardInput.WriteLine` and get execution results.
### Actual behavior
`System.Diagnostics.Process.StandardInput.WriteLine` cannot be used to enter commands into zsh, so `exit` is not executed and `System.Diagnostics.Process.WaitForExit` waits indefinitely.
### Regression?
No idea.
### Known Workarounds
Nothing.
### Configuration
- Which version of .NET is the code running on?
→ net6.0 / net7.0
- What OS and version, and what distro if applicable?
→ macOS Venture 13.1
- What is the architecture (x64, x86, ARM, ARM64)?
→ ARM64
### Other information
_No response_
|
process
|
system diagnostics process standardinput writeline does not work properly on macos description when i start zsh using system diagnostics process start on macos and try to enter a command using system diagnostics process standardinput writeline to zsh it is not entered reproduction steps the following code can be easily checked fsharp open system diagnostics open system text let private zsh system applications utilities terminal app contents macos terminal let exec use p processstartinfo zsh useshellexecute false redirectstandardinput true redirectstandardoutput true createnowindow true system diagnostics process start let stdout stringbuilder p outputdatareceived add fun e if e data null then stdout appendline e data ignore p beginoutputreadline p standardinput writeline ls p standardinput writeline exit p waitforexit stdout tostring it is reproduced for both and expected behavior must be able to enter commands into zsh with system diagnostics process standardinput writeline and get execution results actual behavior system diagnostics process standardinput writeline cannot be used to enter commands into zsh so exit is not executed and system diagnostics process waitforexit waits indefinitely regression no idea known workarounds nothing configuration which version of net is the code running on → what os and version and what distro if applicable → macos venture what is the architecture arm → other information no response
| 1
|
184,237
| 6,711,157,169
|
IssuesEvent
|
2017-10-13 01:53:04
|
FreezingMoon/AncientBeast
|
https://api.github.com/repos/FreezingMoon/AncientBeast
|
closed
|
more obvious active unit
|
Coding Easy first-timers-only good first issue Hacktoberfest help wanted Priority Visuals
|
In order to make it more obvious which unit is currently active, all inactive units should be displayed with colored outlined hexagons instead of full ones. This has to do with the `src/utility/hexagons.js` file.
|
1.0
|
more obvious active unit - In order to make it more obvious which unit is currently active, all inactive units should be displayed with colored outlined hexagons instead of full ones. This has to do with the `src/utility/hexagons.js` file.
|
non_process
|
more obvious active unit in order to make it more obvious which unit is currently active all inactive units should be displayed with colored outlined hexagons instead of full ones this has to do with the src utility hexagons js file
| 0
|
267,198
| 23,287,131,141
|
IssuesEvent
|
2022-08-05 17:48:22
|
nucleus-security/Test-repo
|
https://api.github.com/repos/nucleus-security/Test-repo
|
opened
|
Nucleus - [Critical] - License-Banned
|
Test
|
Source: SONATYPE
Finding Description: A license was found that violated this policy. View output of each instance for details
Target(s): Asset name: sandbox-application
Path:org.mobicents.ussd.management ussd-ui-management : 3.0.0.2
Solution: Replace component with one that utilizes an acceptable license
Severity: Critical
Date Discovered: 2022-08-03 21:25:22
Nucleus Notification Rules Triggered: r2
Project Name: 4288-1
Please see Nucleus for more information on these vulnerabilities:https://192.168.56.101/nucleus/public/app/index.html#vuln/168000005/ZDk3NjRkYzgxOGJhNDE4NmI1MDkwZTU0YjE2OTVkNzQ-/U09OQVRZUEU-/VnVsbg--/false/MTY4MDAwMDA1/c3VtbWFyeQ--/false
|
1.0
|
Nucleus - [Critical] - License-Banned - Source: SONATYPE
Finding Description: A license was found that violated this policy. View output of each instance for details
Target(s): Asset name: sandbox-application
Path:org.mobicents.ussd.management ussd-ui-management : 3.0.0.2
Solution: Replace component with one that utilizes an acceptable license
Severity: Critical
Date Discovered: 2022-08-03 21:25:22
Nucleus Notification Rules Triggered: r2
Project Name: 4288-1
Please see Nucleus for more information on these vulnerabilities:https://192.168.56.101/nucleus/public/app/index.html#vuln/168000005/ZDk3NjRkYzgxOGJhNDE4NmI1MDkwZTU0YjE2OTVkNzQ-/U09OQVRZUEU-/VnVsbg--/false/MTY4MDAwMDA1/c3VtbWFyeQ--/false
|
non_process
|
nucleus license banned source sonatype finding description a license was found that violated this policy view output of each instance for details target s asset name sandbox application path org mobicents ussd management ussd ui management solution replace component with one that utilizes an acceptable license severity critical date discovered nucleus notification rules triggered project name please see nucleus for more information on these vulnerabilities
| 0
|
11,908
| 2,668,983,801
|
IssuesEvent
|
2015-03-23 13:01:03
|
contao/core-bundle
|
https://api.github.com/repos/contao/core-bundle
|
opened
|
Some routes do not work
|
defect
|
> [Issue](https://github.com/contao/contao/issues/64) by @leofeyer
Friday Mar 06, 2015 at 20:48 GMT
The following routes no longer work:
- `http://contao.local/app_dev.php/`
- `http://contao.local/app_dev.php/en`
@aschempp I guess that's why we had a `contao_root` route in the previous version of the front end loader?
|
1.0
|
Some routes do not work - > [Issue](https://github.com/contao/contao/issues/64) by @leofeyer
Friday Mar 06, 2015 at 20:48 GMT
The following routes no longer work:
- `http://contao.local/app_dev.php/`
- `http://contao.local/app_dev.php/en`
@aschempp I guess that's why we had a `contao_root` route in the previous version of the front end loader?
|
non_process
|
some routes do not work by leofeyer friday mar at gmt the following routes no longer work aschempp i guess that s why we had a contao root route in the previous version of the front end loader
| 0
|
218,863
| 17,027,509,347
|
IssuesEvent
|
2021-07-03 21:19:25
|
QubesOS/updates-status
|
https://api.github.com/repos/QubesOS/updates-status
|
closed
|
core-qrexec v4.1.14 (r4.1)
|
r4.1-archlinux-cur-test r4.1-bullseye-cur-test r4.1-buster-cur-test r4.1-centos-stream8-cur-test r4.1-dom0-cur-test r4.1-fc31-cur-test r4.1-fc32-cur-test r4.1-fc33-cur-test r4.1-fc34-cur-test
|
Update of core-qrexec to v4.1.14 for Qubes r4.1, see comments below for details.
Built from: https://github.com/QubesOS/qubes-core-qrexec/commit/bf76f79f50b8aee13f902e689dd73d441e4b02c8
[Changes since previous version](https://github.com/QubesOS/qubes-core-qrexec/compare/v4.1.11...v4.1.14):
QubesOS/qubes-core-qrexec@bf76f79 version 4.1.14
QubesOS/qubes-core-qrexec@ba0775b Merge remote-tracking branch 'origin/pr/76'
QubesOS/qubes-core-qrexec@85f67e5 Merge remote-tracking branch 'origin/pr/69'
QubesOS/qubes-core-qrexec@f7d510b Force color pytest output in gitlab
QubesOS/qubes-core-qrexec@4fc2578 Add a test for qrexec policy allowing an operation
QubesOS/qubes-core-qrexec@46f57e8 Parse the qrexec call metadata before untrusted data
QubesOS/qubes-core-qrexec@3ba64f3 Avoid calling get_system_info() twice
QubesOS/qubes-core-qrexec@4276035 Automatically install dependencies when possible
QubesOS/qubes-core-qrexec@831208f Handle partial reads from StreamReader.read
QubesOS/qubes-core-qrexec@0b2332f Set socket modes properly
QubesOS/qubes-core-qrexec@61d603c Shut up pylint
QubesOS/qubes-core-qrexec@dcff4f7 Lots of unit tests and some bug fixes
QubesOS/qubes-core-qrexec@405f5d7 Merge remote-tracking branch 'origin/pr/75'
QubesOS/qubes-core-qrexec@92d0331 Merge remote-tracking branch 'origin/pr/74'
QubesOS/qubes-core-qrexec@8086a79 Merge remote-tracking branch 'origin/pr/70'
QubesOS/qubes-core-qrexec@121aff1 Use separate sockets for different services
QubesOS/qubes-core-qrexec@c1d8dd9 Do not use the asynctest module
QubesOS/qubes-core-qrexec@93190cf Add policy.EvalGUI service
QubesOS/qubes-core-qrexec@587dac3 Create a DispVMTemplate instance when needed
QubesOS/qubes-core-qrexec@2f0d37a Use generic 'guivm' service to tell if running inside GUI VM
QubesOS/qubes-core-qrexec@0d79f74 Tell pylint not to whine about extra parentheses
QubesOS/qubes-core-qrexec@adb1301 Add unit tests for policy.EvalSimple
QubesOS/qubes-core-qrexec@e08a112 Add a policy.EvalSimple qrexec service
QubesOS/qubes-core-qrexec@cbab1fc Switch from __gcov_flush to __gcov_dump + __gcov_reset
QubesOS/qubes-core-qrexec@46dc9c1 Be stricter about command-line parsing
QubesOS/qubes-core-qrexec@c636c53 daemon: fix checking qrexec-policy-daemon response
QubesOS/qubes-core-qrexec@afea444 Adjust vchan_{send,recv} error checking
QubesOS/qubes-core-qrexec@fdd306f winusb: check if pam include file exists and set appropriate flags
QubesOS/qubes-core-qrexec@2ce1a34 winsub: fix broad exception
QubesOS/qubes-core-qrexec@4c5ba18 winusb: append LDLIBS
QubesOS/qubes-core-qrexec@dbb6b51 winusb: set guivm to None on unknown source
QubesOS/qubes-core-qrexec@ef92037 winusb: allow to build without pam
QubesOS/qubes-core-qrexec@7d75d31 version 4.1.13
QubesOS/qubes-core-qrexec@41e36f7 agent: do not interrupt established connections on restart
QubesOS/qubes-core-qrexec@ed34dc7 version 4.1.12
QubesOS/qubes-core-qrexec@7e4e562 debian: update compat
QubesOS/qubes-core-qrexec@7f35ef8 debian: update control
QubesOS/qubes-core-qrexec@835ea75 Merge branch 'ci'
QubesOS/qubes-core-qrexec@463ce10 pylint: temporarily disable unsubscriptable-object - buggy with py3.9
QubesOS/qubes-core-qrexec@863242c gitlab-ci: include custom jobs
QubesOS/qubes-core-qrexec@f748160 Allow to override vchan variant selection with BACKEND_VMM variable
QubesOS/qubes-core-qrexec@031a321 Add .gitlab-ci.yml
QubesOS/qubes-core-qrexec@c86360f Use pkg-config to get BACKEND_VMM
QubesOS/qubes-core-qrexec@c69202b Set default BACKEND_VMM value to xen
Referenced issues:
QubesOS/qubes-issues#4186
QubesOS/qubes-issues#1148
QubesOS/qubes-issues#6629
If you're release manager, you can issue GPG-inline signed command:
* `Upload core-qrexec bf76f79f50b8aee13f902e689dd73d441e4b02c8 r4.1 current repo` (available 7 days from now)
* `Upload core-qrexec bf76f79f50b8aee13f902e689dd73d441e4b02c8 r4.1 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now)
* `Upload core-qrexec bf76f79f50b8aee13f902e689dd73d441e4b02c8 r4.1 security-testing repo`
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
|
9.0
|
core-qrexec v4.1.14 (r4.1) - Update of core-qrexec to v4.1.14 for Qubes r4.1, see comments below for details.
Built from: https://github.com/QubesOS/qubes-core-qrexec/commit/bf76f79f50b8aee13f902e689dd73d441e4b02c8
[Changes since previous version](https://github.com/QubesOS/qubes-core-qrexec/compare/v4.1.11...v4.1.14):
QubesOS/qubes-core-qrexec@bf76f79 version 4.1.14
QubesOS/qubes-core-qrexec@ba0775b Merge remote-tracking branch 'origin/pr/76'
QubesOS/qubes-core-qrexec@85f67e5 Merge remote-tracking branch 'origin/pr/69'
QubesOS/qubes-core-qrexec@f7d510b Force color pytest output in gitlab
QubesOS/qubes-core-qrexec@4fc2578 Add a test for qrexec policy allowing an operation
QubesOS/qubes-core-qrexec@46f57e8 Parse the qrexec call metadata before untrusted data
QubesOS/qubes-core-qrexec@3ba64f3 Avoid calling get_system_info() twice
QubesOS/qubes-core-qrexec@4276035 Automatically install dependencies when possible
QubesOS/qubes-core-qrexec@831208f Handle partial reads from StreamReader.read
QubesOS/qubes-core-qrexec@0b2332f Set socket modes properly
QubesOS/qubes-core-qrexec@61d603c Shut up pylint
QubesOS/qubes-core-qrexec@dcff4f7 Lots of unit tests and some bug fixes
QubesOS/qubes-core-qrexec@405f5d7 Merge remote-tracking branch 'origin/pr/75'
QubesOS/qubes-core-qrexec@92d0331 Merge remote-tracking branch 'origin/pr/74'
QubesOS/qubes-core-qrexec@8086a79 Merge remote-tracking branch 'origin/pr/70'
QubesOS/qubes-core-qrexec@121aff1 Use separate sockets for different services
QubesOS/qubes-core-qrexec@c1d8dd9 Do not use the asynctest module
QubesOS/qubes-core-qrexec@93190cf Add policy.EvalGUI service
QubesOS/qubes-core-qrexec@587dac3 Create a DispVMTemplate instance when needed
QubesOS/qubes-core-qrexec@2f0d37a Use generic 'guivm' service to tell if running inside GUI VM
QubesOS/qubes-core-qrexec@0d79f74 Tell pylint not to whine about extra parentheses
QubesOS/qubes-core-qrexec@adb1301 Add unit tests for policy.EvalSimple
QubesOS/qubes-core-qrexec@e08a112 Add a policy.EvalSimple qrexec service
QubesOS/qubes-core-qrexec@cbab1fc Switch from __gcov_flush to __gcov_dump + __gcov_reset
QubesOS/qubes-core-qrexec@46dc9c1 Be stricter about command-line parsing
QubesOS/qubes-core-qrexec@c636c53 daemon: fix checking qrexec-policy-daemon response
QubesOS/qubes-core-qrexec@afea444 Adjust vchan_{send,recv} error checking
QubesOS/qubes-core-qrexec@fdd306f winusb: check if pam include file exists and set appropriate flags
QubesOS/qubes-core-qrexec@2ce1a34 winsub: fix broad exception
QubesOS/qubes-core-qrexec@4c5ba18 winusb: append LDLIBS
QubesOS/qubes-core-qrexec@dbb6b51 winusb: set guivm to None on unknown source
QubesOS/qubes-core-qrexec@ef92037 winusb: allow to build without pam
QubesOS/qubes-core-qrexec@7d75d31 version 4.1.13
QubesOS/qubes-core-qrexec@41e36f7 agent: do not interrupt established connections on restart
QubesOS/qubes-core-qrexec@ed34dc7 version 4.1.12
QubesOS/qubes-core-qrexec@7e4e562 debian: update compat
QubesOS/qubes-core-qrexec@7f35ef8 debian: update control
QubesOS/qubes-core-qrexec@835ea75 Merge branch 'ci'
QubesOS/qubes-core-qrexec@463ce10 pylint: temporarily disable unsubscriptable-object - buggy with py3.9
QubesOS/qubes-core-qrexec@863242c gitlab-ci: include custom jobs
QubesOS/qubes-core-qrexec@f748160 Allow to override vchan variant selection with BACKEND_VMM variable
QubesOS/qubes-core-qrexec@031a321 Add .gitlab-ci.yml
QubesOS/qubes-core-qrexec@c86360f Use pkg-config to get BACKEND_VMM
QubesOS/qubes-core-qrexec@c69202b Set default BACKEND_VMM value to xen
Referenced issues:
QubesOS/qubes-issues#4186
QubesOS/qubes-issues#1148
QubesOS/qubes-issues#6629
If you're the release manager, you can issue a GPG-inline signed command:
* `Upload core-qrexec bf76f79f50b8aee13f902e689dd73d441e4b02c8 r4.1 current repo` (available 7 days from now)
* `Upload core-qrexec bf76f79f50b8aee13f902e689dd73d441e4b02c8 r4.1 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now)
* `Upload core-qrexec bf76f79f50b8aee13f902e689dd73d441e4b02c8 r4.1 security-testing repo`
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
|
non_process
|
core qrexec update of core qrexec to for qubes see comments below for details built from qubesos qubes core qrexec version qubesos qubes core qrexec merge remote tracking branch origin pr qubesos qubes core qrexec merge remote tracking branch origin pr qubesos qubes core qrexec force color pytest output in gitlab qubesos qubes core qrexec add a test for qrexec policy allowing an operation qubesos qubes core qrexec parse the qrexec call metadata before untrusted data qubesos qubes core qrexec avoid calling get system info twice qubesos qubes core qrexec automatically install dependencies when possible qubesos qubes core qrexec handle partial reads from streamreader read qubesos qubes core qrexec set socket modes properly qubesos qubes core qrexec shut up pylint qubesos qubes core qrexec lots of unit tests and some bug fixes qubesos qubes core qrexec merge remote tracking branch origin pr qubesos qubes core qrexec merge remote tracking branch origin pr qubesos qubes core qrexec merge remote tracking branch origin pr qubesos qubes core qrexec use separate sockets for different services qubesos qubes core qrexec do not use the asynctest module qubesos qubes core qrexec add policy evalgui service qubesos qubes core qrexec create a dispvmtemplate instance when needed qubesos qubes core qrexec use generic guivm service to tell if running inside gui vm qubesos qubes core qrexec tell pylint not to whine about extra parentheses qubesos qubes core qrexec add unit tests for policy evalsimple qubesos qubes core qrexec add a policy evalsimple qrexec service qubesos qubes core qrexec switch from gcov flush to gcov dump gcov reset qubesos qubes core qrexec be stricter about command line parsing qubesos qubes core qrexec daemon fix checking qrexec policy daemon response qubesos qubes core qrexec adjust vchan send recv error checking qubesos qubes core qrexec winusb check if pam include file exists and set appropriate flags qubesos qubes core qrexec winsub fix broad exception qubesos qubes core qrexec winusb append ldlibs qubesos qubes core qrexec winusb set guivm to none on unknown source qubesos qubes core qrexec winusb allow to build without pam qubesos qubes core qrexec version qubesos qubes core qrexec agent do not interrupt established connections on restart qubesos qubes core qrexec version qubesos qubes core qrexec debian update compat qubesos qubes core qrexec debian update control qubesos qubes core qrexec merge branch ci qubesos qubes core qrexec pylint temporarily disable unsubscriptable object buggy with qubesos qubes core qrexec gitlab ci include custom jobs qubesos qubes core qrexec allow to override vchan variant selection with backend vmm variable qubesos qubes core qrexec add gitlab ci yml qubesos qubes core qrexec use pkg config to get backend vmm qubesos qubes core qrexec set default backend vmm value to xen referenced issues qubesos qubes issues qubesos qubes issues qubesos qubes issues if you re release manager you can issue gpg inline signed command upload core qrexec current repo available days from now upload core qrexec current dists repo you can choose subset of distributions like vm vm available days from now upload core qrexec security testing repo above commands will work only if packages in current testing repository were built from given commit i e no new version superseded it
| 0
|
473,361
| 13,641,128,665
|
IssuesEvent
|
2020-09-25 13:45:22
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
jsfiddle.net - see bug description
|
browser-firefox engine-gecko priority-normal
|
<!-- @browser: Firefox 79.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0 -->
<!-- @reported_with: unknown -->
**URL**: http://jsfiddle.net/1nt9xr64/3
**Browser / Version**: Firefox 79.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: Problem with CSS filtering as of version 79
**Steps to Reproduce**:
When adding a CSS filter to a canvas which is using WebGL, some computer configurations display a gray-ish screen and the image disappears. It worked fine until version 78.0.2.
It also works on a lot of other computer configurations, but for example it does not work on a computer with the following config:
NVIDIA Quadro P2000
Intel core i7-8850H
16gb RAM
windows 10 pro 64bit
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/9/de1cd7ca-324f-45c0-8518-36df6292f66f.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
jsfiddle.net - see bug description - <!-- @browser: Firefox 79.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0 -->
<!-- @reported_with: unknown -->
**URL**: http://jsfiddle.net/1nt9xr64/3
**Browser / Version**: Firefox 79.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: Problem with CSS filtering as of version 79
**Steps to Reproduce**:
When adding a CSS filter to a canvas which is using WebGL, some computer configurations display a gray-ish screen and the image disappears. It worked fine until version 78.0.2.
It also works on a lot of other computer configurations, but for example it does not work on a computer with the following config:
NVIDIA Quadro P2000
Intel core i7-8850H
16gb RAM
windows 10 pro 64bit
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/9/de1cd7ca-324f-45c0-8518-36df6292f66f.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
jsfiddle net see bug description url browser version firefox operating system windows tested another browser yes chrome problem type something else description problem with css filtering as of version steps to reproduce when adding a css filter to a canvas which is using webgl some computer configurations display a gray ish screen and the image disappears it worked fine until version it also works on a lot of other computer configurations but for example it does not work on a computer with the following config nvidia quadro intel core ram windows pro view the screenshot img alt screenshot src browser configuration none from with ❤️
| 0
|
411,595
| 12,026,341,070
|
IssuesEvent
|
2020-04-12 13:45:07
|
open-wa/wa-automate-nodejs
|
https://api.github.com/repos/open-wa/wa-automate-nodejs
|
closed
|
publishing status
|
PRIORITY Requires Investigation enhancement
|
**Feature request**
I tried to send a status.
This code helps to get all broadcasts and the user status among them:
```javascript
const contacts = await client.getAllContacts();
for (const contact of contacts) {
if (contact.id.server === 'broadcast' && contact.id._serialized !== 'status@broadcast') {
console.log(contact)
}
}
```
which means `status@broadcast` is also a broadcast
when sending a message to a broadcast everything works just great: text, media, location, etc.
we need to add the ability to send to 'status@broadcast' as:
`await client.sendText("status@broadcast", "test");`
and figure out how to do that.
let me help investigate it if you need some help
i assume we need to select a background color for text statuses, limit video length to 15 seconds, etc.
i don't see any option to do that in WhatsApp Web right now
|
1.0
|
publishing status - **Feature request**
I tried to send a status.
This code helps to get all broadcasts and the user status among them:
```javascript
const contacts = await client.getAllContacts();
for (const contact of contacts) {
if (contact.id.server === 'broadcast' && contact.id._serialized !== 'status@broadcast') {
console.log(contact)
}
}
```
which means `status@broadcast` is also a broadcast
when sending a message to a broadcast everything works just great: text, media, location, etc.
we need to add the ability to send to 'status@broadcast' as:
`await client.sendText("status@broadcast", "test");`
and figure out how to do that.
let me help investigate it if you need some help
i assume we need to select a background color for text statuses, limit video length to 15 seconds, etc.
i don't see any option to do that in WhatsApp Web right now
|
non_process
|
publishing status feature request i tried to send a status this code helps to get all broadcasts and the user status among them javascript const contacts await client getallcontacts for const contact of contacts if contact id server broadcast contact id serialized status broadcast console log contact which means status broadcast is also a broadcast when sending a message to a broadcast everything works just great text media location etc we need to add the ability to send to status broadcast as await client sendtext status broadcast test and figure out how to do that let me help investigate it if you need some help i assume we need to select a background color for text statuses limit video length to seconds etc i dont see any option to do that in whatsapp web right now
| 0
|
163,272
| 25,783,097,225
|
IssuesEvent
|
2022-12-09 17:40:46
|
kubernetes-sigs/kustomize
|
https://api.github.com/repos/kubernetes-sigs/kustomize
|
closed
|
`kustomize localize` repetitive base
|
kind/design needs-triage
|
**Describe the bug**
According to the way [`kustomize localize`](https://github.com/kubernetes-sigs/kustomize/blob/master/proposals/22-04-localize-command.md) is currently designed, multiple overlays that reference the same remote base will produce multiple copies of said base, which is against kustomize design principles.
**Files that can reproduce the issue**
Say we have the setup below:
```
#/overlay1/kustomization.yaml
resources:
- https://github.com/kubernetes-sigs/kustomize//examples/multibases/base?ref=v3.3.1
namePrefix:
woo
```
```
#/overlay2/kustomization.yaml
resources:
- https://github.com/kubernetes-sigs/kustomize//examples/multibases/base?ref=v3.3.1
namePrefix:
hoo
```
**Expected output**
After we run `kustomize localize`, we would like for the remote base to be downloaded once and used by both overlays. This would match the current layout `kustomize build` expects for a completely local setup.
**Actual output**
Instead, we get the following file structure:
```
├── overlay1
│ └── kustomization.yaml
│ └── localized-files
│ ... # many intermediate directories
│ └── base
└── overlay2
└── kustomization.yaml
└── localized-files
... # many intermediate directories
└── base
```
where there is a copy of the base for each overlay.
**Kustomize version**
`kustomize localize` has yet to be implemented in source code, but regardless, the master branch is on [this commit](https://github.com/kubernetes-sigs/kustomize/commit/1c5393216672c87d5bd553ba255d8dd3044bbf0c).
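Editorial aside: the deduplication being requested amounts to caching each remote base by its canonical reference and reusing the local copy. A minimal Python sketch of that idea (the function and cache names are hypothetical, not kustomize's actual Go implementation):
```python
import hashlib
import os

def localize_base(remote_ref: str, dest_root: str, downloaded: dict) -> str:
    """Return a local path for remote_ref, downloading it at most once."""
    if remote_ref in downloaded:
        return downloaded[remote_ref]  # reuse the already-localized copy
    key = hashlib.sha256(remote_ref.encode()).hexdigest()[:12]
    local_path = os.path.join(dest_root, "bases", key)
    # download_remote(remote_ref, local_path)  # placeholder for the real fetch
    downloaded[remote_ref] = local_path
    return local_path

cache: dict = {}
ref = "https://github.com/kubernetes-sigs/kustomize//examples/multibases/base?ref=v3.3.1"
# Both overlays resolve to the same shared directory instead of two copies:
assert localize_base(ref, "out", cache) == localize_base(ref, "out", cache)
```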
|
1.0
|
`kustomize localize` repetitive base - **Describe the bug**
According to the way [`kustomize localize`](https://github.com/kubernetes-sigs/kustomize/blob/master/proposals/22-04-localize-command.md) is currently designed, multiple overlays that reference the same remote base will produce multiple copies of said base, which is against kustomize design principles.
**Files that can reproduce the issue**
Say we have the setup below:
```
#/overlay1/kustomization.yaml
resources:
- https://github.com/kubernetes-sigs/kustomize//examples/multibases/base?ref=v3.3.1
namePrefix:
woo
```
```
#/overlay2/kustomization.yaml
resources:
- https://github.com/kubernetes-sigs/kustomize//examples/multibases/base?ref=v3.3.1
namePrefix:
hoo
```
**Expected output**
After we run `kustomize localize`, we would like for the remote base to be downloaded once and used by both overlays. This would match the current layout `kustomize build` expects for a completely local setup.
**Actual output**
Instead, we get the following file structure:
```
├── overlay1
│ └── kustomization.yaml
│ └── localized-files
│ ... # many intermediate directories
│ └── base
└── overlay2
└── kustomization.yaml
└── localized-files
... # many intermediate directories
└── base
```
where there is a copy of the base for each overlay.
**Kustomize version**
`kustomize localize` has yet to be implemented in source code, but regardless, the master branch is on [this commit](https://github.com/kubernetes-sigs/kustomize/commit/1c5393216672c87d5bd553ba255d8dd3044bbf0c).
|
non_process
|
kustomize localize repetitive base describe the bug according to the way is currently designed multiple overlays that reference the same remote base will produce multiple copies of said base which is against kustomize design principles files that can reproduce the issue say we have the setup below kustomization yaml resources nameprefix woo kustomization yaml resources nameprefix hoo expected output after we run kustomize localize we would like for the remote base to be downloaded once and used by both overlays this would match the current layout kustomize build expects for a completely local setup actual output instead we get the following file structure ├── │ └── kustomization yaml │ └── localized files │ many intermediate directories │ └── base └── └── kustomization yaml └── localized files many intermediate directories └── base where there is a copy of the base for each overlay kustomize version kustomize localize has yet to be implemented in source code but regardless the master branch is on
| 0
|
21,184
| 28,153,316,537
|
IssuesEvent
|
2023-04-03 04:42:30
|
ssytnt/papers
|
https://api.github.com/repos/ssytnt/papers
|
opened
|
A fast, scalable and reliable deghosting method for Extreme Exposure Fusion[Prabhakar+(IISc), ICCP2019]
|
ImageProcessing
|
## Overview
Generates an HDR image from multiple frames with extremely different exposure times.
## Background
DNN-based methods rely on the structural information of a reference image, so artifacts appear when exposure times differ widely. They also assume the number of input frames is known in advance, so they lack scalability.
## Method
- Align each frame to the reference image based on optical flow.
- Train the feature-extraction network with shared weights so that it does not depend on exposure time.
- For scalability, train the network to reconstruct the image from the mean and max of the per-frame feature maps.
## Results
Evaluated on a dataset of scenes with motion; achieves the best PSNR/SSIM performance. Runtime is also about 1/50 that of existing methods.

|
1.0
|
A fast, scalable and reliable deghosting method for Extreme Exposure Fusion[Prabhakar+(IISc), ICCP2019] - ## Overview
Generates an HDR image from multiple frames with extremely different exposure times.
## Background
DNN-based methods rely on the structural information of a reference image, so artifacts appear when exposure times differ widely. They also assume the number of input frames is known in advance, so they lack scalability.
## Method
- Align each frame to the reference image based on optical flow.
- Train the feature-extraction network with shared weights so that it does not depend on exposure time.
- For scalability, train the network to reconstruct the image from the mean and max of the per-frame feature maps.
## Results
Evaluated on a dataset of scenes with motion; achieves the best PSNR/SSIM performance. Runtime is also about 1/50 that of existing methods.

|
process
|
a fast scalable and reliable deghosting method for extreme exposure fusion overview generates an hdr image from multiple frames with extremely different exposure times background dnn based methods rely on the structural information of a reference image so artifacts appear when exposure times differ widely they also assume the number of input frames is known in advance so they lack scalability method align each frame to the reference image based on optical flow train the feature extraction network with shared weights so that it does not depend on exposure time for scalability train the network to reconstruct the image from the mean and max of the per frame feature maps results evaluated on a dataset of scenes with motion achieves the best psnr ssim performance runtime is also of existing methods
| 1
|
159
| 2,582,906,267
|
IssuesEvent
|
2015-02-15 19:44:37
|
dalehenrich/metacello-work
|
https://api.github.com/repos/dalehenrich/metacello-work
|
closed
|
Pharo-4.0 test failing ...
|
in process
|
from [test](https://travis-ci.org/dalehenrich/metacello-work/jobs/47979802):
```
**************************************************************************************
Results for #('BaselineOfMetacello') Test Suite
558 run, 557 passes, 0 skipped, 0 expected failures, 0 failures, 1 errors, 0 unexpected passes
**************************************************************************************
*** ERRORS *******************
MetacelloGoferFunctionalTest debug: #testCommitNewPackageSpec.
**************************************************************************************
```
|
1.0
|
Pharo-4.0 test failing ... - from [test](https://travis-ci.org/dalehenrich/metacello-work/jobs/47979802):
```
**************************************************************************************
Results for #('BaselineOfMetacello') Test Suite
558 run, 557 passes, 0 skipped, 0 expected failures, 0 failures, 1 errors, 0 unexpected passes
**************************************************************************************
*** ERRORS *******************
MetacelloGoferFunctionalTest debug: #testCommitNewPackageSpec.
**************************************************************************************
```
|
process
|
pharo test failing from results for baselineofmetacello test suite run passes skipped expected failures failures errors unexpected passes errors metacellogoferfunctionaltest debug testcommitnewpackagespec
| 1
|
9,944
| 8,271,271,614
|
IssuesEvent
|
2018-09-16 06:39:22
|
rust-lang-nursery/mdBook
|
https://api.github.com/repos/rust-lang-nursery/mdBook
|
closed
|
Make mdBook more contributor friendly
|
A-Infrastructure E-Easy T-Enhancement
|
I just watched [this excellent talk](https://www.youtube.com/watch?v=AHprJNUCgQ0) by @Manishearth and decided to apply some of his tips to make this repository more contributor friendly!
Here are the things we should do:
### Issues
- [x] Reorganize the labels in a very similar way as most big Rust projects (e.g. Rust, Servo, Clippy, ...). Labels are grouped: Area, Experience, Meta, Status and Type. Other groups can be made if need be.
- [x] Go through all the open issues, remove the old labels and apply the new ones
- [x] Remove the old labels
### Contribution File
- [x] Create a new `CONTRIBUTING.md` file
- [x] Explain that style fixes need to modify the Stylus files and rebuild the CSS
- [ ] Explain how to run Clippy and Rustfmt. They should not be a requirement, but it's nice if the code is constantly kept tidy
- [ ] Mention the different useful ways a beginner could help out: Documentation, Tests, Examples, Updating dependencies, E-Easy issues
|
1.0
|
Make mdBook more contributor friendly - I just watched [this excellent talk](https://www.youtube.com/watch?v=AHprJNUCgQ0) by @Manishearth and decided to apply some of his tips to make this repository more contributor friendly!
Here are the things we should do:
### Issues
- [x] Reorganize the labels in a very similar way as most big Rust projects (e.g. Rust, Servo, Clippy, ...). Labels are grouped: Area, Experience, Meta, Status and Type. Other groups can be made if need be.
- [x] Go through all the open issues, remove the old labels and apply the new ones
- [x] Remove the old labels
### Contribution File
- [x] Create a new `CONTRIBUTING.md` file
- [x] Explain that style fixes need to modify the Stylus files and rebuild the CSS
- [ ] Explain how to run Clippy and Rustfmt. They should not be a requirement, but it's nice if the code is constantly kept tidy
- [ ] Mention the different useful ways a beginner could help out: Documentation, Tests, Examples, Updating dependencies, E-Easy issues
|
non_process
|
make mdbook more contributor friendly i just watched by manishearth and decided to apply some of his tips to make this repository more contributor friendly here are the things we should do issues reorganize the labels in a very similar way as most big rust projects e g rust servo clippy labels are grouped area experience meta status and type other groups can be made if need be go through all the open issues remove the old labels and apply the new ones remove the old labels contribution file create a new contributing md file explain that style fixes need to modify the stylus files and rebuild the css explain how to run clippy and rustfmt they should not be a requirement but it s nice if the code is constantly kept tidy mention the different useful ways a beginner could help out documentation tests examples updating dependencies e easy issues
| 0
|
521,591
| 15,112,199,725
|
IssuesEvent
|
2021-02-08 21:29:50
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
dump-to-h2 with --dump-plaintext should check for presence of MB_ENCRYPTION_SECRET_KEY
|
Operation/H2 Priority:P1 Type:Bug
|
**Describe the bug**
When dumping a Metabase app db to H2 and decrypting it in the process, we should test for the existence of `MB_ENCRYPTION_SECRET_KEY` and exit with a non-zero exit code if it is not set or if we fail to decrypt values.
It appears that the exception is being swallowed somewhere, so the process exits with 0.
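Editorial aside: the requested pre-flight check could sit in a thin wrapper around the command. A minimal Python sketch under the assumption that a wrapper is acceptable (the jar path is a placeholder; the real fix belongs in the Clojure `dump-to-h2` command itself):
```python
import os
import subprocess
import sys

def dump_plaintext(h2_path: str) -> None:
    """Refuse a plaintext dump unless the decryption key is available."""
    if not os.environ.get("MB_ENCRYPTION_SECRET_KEY"):
        sys.exit("MB_ENCRYPTION_SECRET_KEY is not set; refusing --dump-plaintext")
    result = subprocess.run(
        ["java", "-jar", "metabase.jar",  # placeholder jar path
         "dump-to-h2", h2_path, "--dump-plaintext"])
    sys.exit(result.returncode)  # propagate failures as a non-zero exit code
```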
**Logs**
```text
21-01-29 19:35:33 pop-os DEBUG [submarine.snapshots:76] - run: exit code: 0
21-01-29 19:35:33 pop-os DEBUG [submarine.snapshots:77] - run: stdout: WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2021-01-29 11:35:15,353 INFO metabase.util :: Maximum memory available to JVM: 7.8 GB
2021-01-29 11:35:22,465 INFO util.encryption :: Saved credentials encryption is DISABLED for this Metabase instance. ?
For more information, see https://metabase.com/docs/latest/operations-guide/encrypting-database-details-at-rest.html
2021-01-29 11:35:25,913 INFO metabase.core ::
Metabase v0.38.0-rc3 (df26940 master)
Copyright © 2021 Metabase, Inc.
Metabase Enterprise Edition extensions are NOT PRESENT.
2021-01-29 11:35:25,919 WARN metabase.core :: WARNING: You have enabled namespace tracing, which could log sensitive information like db passwords.
Dumping from configured Metabase db to H2 file /tmp/b9c4cbfe-c860-4a5f-9b58-9b5a7c161d69.db
2021-01-29 11:35:25,947 INFO db.setup :: Verifying postgres Database Connection ...
2021-01-29 11:35:26,659 INFO db.setup :: Successfully verified PostgreSQL 11.7 application database connection. ?
2021-01-29 11:35:26,661 INFO db.setup :: Running Database Migrations...
2021-01-29 11:35:26,909 INFO db.setup :: Setting up Liquibase...
2021-01-29 11:35:27,044 INFO db.setup :: Liquibase is ready.
2021-01-29 11:35:27,045 INFO db.liquibase :: Checking if Database has unrun migrations...
2021-01-29 11:35:28,769 INFO db.liquibase :: Database has unrun migrations. Waiting for migration lock to be cleared...
2021-01-29 11:35:28,814 INFO db.liquibase :: Migration lock is cleared. Running migrations...
2021-01-29 11:35:28,969 INFO db.setup :: Database Migrations Current ... ?
2021-01-29 11:35:28,969 INFO db.data-migrations :: Running all necessary data migrations, this may take a minute.
2021-01-29 11:35:29,163 INFO db.data-migrations :: Finished running data migrations.
Set up postgres source database and run migrations... Database setup took 3.2 s
[OK]
2021-01-29 11:35:29,169 INFO db.setup :: Verifying h2 Database Connection ...
2021-01-29 11:35:29,378 INFO db.setup :: Successfully verified H2 1.4.197 (2018-03-18) application database connection. ?
2021-01-29 11:35:29,381 INFO db.setup :: Running Database Migrations...
2021-01-29 11:35:29,392 INFO db.setup :: Setting up Liquibase...
2021-01-29 11:35:29,417 INFO db.setup :: Liquibase is ready.
2021-01-29 11:35:29,417 INFO db.liquibase :: Checking if Database has unrun migrations...
2021-01-29 11:35:29,872 INFO db.liquibase :: Database has unrun migrations. Waiting for migration lock to be cleared...
2021-01-29 11:35:29,924 INFO db.liquibase :: Migration lock is cleared. Running migrations...
2021-01-29 11:35:31,742 INFO db.setup :: Database Migrations Current ... ?
Set up h2 target database and run migrations... Database setup took 2.6 s
[OK]
Testing if target h2 database is already populated... [OK]
Temporarily disabling DB constraints... [OK]
Copying instances of Database.... copied 1 instances.
Copying instances of User.... copied 1 instances.
Copying instances of Setting.... copied 6 instances.
Copying instances of Table.... copied 4 instances.
Copying instances of Field.... copied 36 instances.
Copying instances of FieldValues.... copied 5 instances.
Copying instances of Revision.... copied 5 instances.
Copying instances of ViewLog.... copied 4 instances.
Copying instances of Session.... copied 1 instances.
Copying instances of Collection.... copied 1 instances.
Copying instances of Dashboard.... copied 1 instances.
Copying instances of Card.... copied 1 instances.
Copying instances of DashboardCard.... copied 1 instances.
Copying instances of Activity.... copied 5 instances.
Copying instances of PermissionsGroup.... copied 3 instances.
Copying instances of PermissionsGroupMembership.... copied 2 instances.
Copying instances of Permissions.... copied 4 instances.
Copying instances of DataMigrations.... copied 13 instances.
Re-enabling DB constraints... [OK]
2021-01-29 11:35:33,394 ERROR models.interface :: Error parsing JSON
com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'kJY': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (StringReader); line: 1, column: 4]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840) ~[v0.38.0-rc3.jar:?]
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:722) ~[v0.38.0-rc3.jar:?]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._reportInvalidToken(ReaderBasedJsonParser.java:2868) ~[v0.38.0-rc3.jar:?]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._handleOddValue(ReaderBasedJsonParser.java:1914) ~[v0.38.0-rc3.jar:?]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:773) ~[v0.38.0-rc3.jar:?]
at cheshire.parse$parse.invokeStatic(parse.clj:90) ~[v0.38.0-rc3.jar:?]
at cheshire.parse$parse.invoke(parse.clj:88) ~[v0.38.0-rc3.jar:?]
at cheshire.core$parse_string.invokeStatic(core.clj:208) ~[v0.38.0-rc3.jar:?]
at cheshire.core$parse_string.invoke(core.clj:194) ~[v0.38.0-rc3.jar:?]
at cheshire.core$parse_string.invokeStatic(core.clj:205) ~[v0.38.0-rc3.jar:?]
at cheshire.core$parse_string.invoke(core.clj:194) ~[v0.38.0-rc3.jar:?]
at metabase.models.interface$json_out.invokeStatic(interface.clj:39) [v0.38.0-rc3.jar:?]
at metabase.models.interface$json_out.invoke(interface.clj:36) [v0.38.0-rc3.jar:?]
at metabase.models.interface$json_out_with_keywordization.invokeStatic(interface.clj:48) [v0.38.0-rc3.jar:?]
at metabase.models.interface$json_out_with_keywordization.invoke(interface.clj:45) [v0.38.0-rc3.jar:?]
at clojure.core$comp$fn__5807.invoke(core.clj:2569) [v0.38.0-rc3.jar:?]
at clojure.lang.AFn.applyToHelper(AFn.java:154) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:132) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invokeStatic(core.clj:665) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invoke(core.clj:660) [v0.38.0-rc3.jar:?]
at clojure.core.memoize$through_STAR_$fn__4818.invoke(memoize.clj:107) [v0.38.0-rc3.jar:?]
at clojure.core.cache$through$fn__4541.invoke(cache.clj:55) [v0.38.0-rc3.jar:?]
at clojure.core.memoize$through_STAR_$fn__4814$fn__4815.invoke(memoize.clj:106) [v0.38.0-rc3.jar:?]
at clojure.core.memoize.RetryingDelay.deref(memoize.clj:47) [v0.38.0-rc3.jar:?]
at clojure.core$deref.invokeStatic(core.clj:2320) [v0.38.0-rc3.jar:?]
at clojure.core$deref.invoke(core.clj:2306) [v0.38.0-rc3.jar:?]
at clojure.core.memoize$cached_function$fn__4882.doInvoke(memoize.clj:231) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:137) [v0.38.0-rc3.jar:?]
at clojure.lang.AFunction$1.doInvoke(AFunction.java:31) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.invoke(RestFn.java:408) [v0.38.0-rc3.jar:?]
at toucan.models$apply_type_fns$iter__25324__25328$fn__25329.invoke(models.clj:304) [v0.38.0-rc3.jar:?]
at clojure.lang.LazySeq.sval(LazySeq.java:42) [v0.38.0-rc3.jar:?]
at clojure.lang.LazySeq.seq(LazySeq.java:51) [v0.38.0-rc3.jar:?]
at clojure.lang.RT.seq(RT.java:535) [v0.38.0-rc3.jar:?]
at clojure.core$seq__5402.invokeStatic(core.clj:137) [v0.38.0-rc3.jar:?]
at clojure.core.protocols$seq_reduce.invokeStatic(protocols.clj:24) [v0.38.0-rc3.jar:?]
at clojure.core.protocols$fn__8146.invokeStatic(protocols.clj:75) [v0.38.0-rc3.jar:?]
at clojure.core.protocols$fn__8146.invoke(protocols.clj:75) [v0.38.0-rc3.jar:?]
at clojure.core.protocols$fn__8088$G__8083__8101.invoke(protocols.clj:13) [v0.38.0-rc3.jar:?]
at clojure.core$reduce.invokeStatic(core.clj:6828) [v0.38.0-rc3.jar:?]
at clojure.core$into.invokeStatic(core.clj:6895) [v0.38.0-rc3.jar:?]
at clojure.core$into.invoke(core.clj:6887) [v0.38.0-rc3.jar:?]
at toucan.models$apply_type_fns.invokeStatic(models.clj:302) [v0.38.0-rc3.jar:?]
at toucan.models$apply_type_fns.invoke(models.clj:299) [v0.38.0-rc3.jar:?]
at toucan.models$do_post_select.invokeStatic(models.clj:349) [v0.38.0-rc3.jar:?]
at toucan.models$do_post_select.invoke(models.clj:344) [v0.38.0-rc3.jar:?]
at toucan.db$do_post_select$iter__27965__27969$fn__27970.invoke(db.clj:373) [v0.38.0-rc3.jar:?]
at clojure.lang.LazySeq.sval(LazySeq.java:42) [v0.38.0-rc3.jar:?]
at clojure.lang.LazySeq.seq(LazySeq.java:51) [v0.38.0-rc3.jar:?]
at clojure.lang.RT.seq(RT.java:535) [v0.38.0-rc3.jar:?]
at clojure.lang.LazilyPersistentVector.create(LazilyPersistentVector.java:44) [v0.38.0-rc3.jar:?]
at clojure.core$vec.invokeStatic(core.clj:377) [v0.38.0-rc3.jar:?]
at clojure.core$vec.invoke(core.clj:367) [v0.38.0-rc3.jar:?]
at toucan.db$do_post_select.invokeStatic(db.clj:372) [v0.38.0-rc3.jar:?]
at toucan.db$do_post_select.invoke(db.clj:363) [v0.38.0-rc3.jar:?]
at toucan.db$simple_select.invokeStatic(db.clj:394) [v0.38.0-rc3.jar:?]
at toucan.db$simple_select.invoke(db.clj:383) [v0.38.0-rc3.jar:?]
at toucan.db$select.invokeStatic(db.clj:662) [v0.38.0-rc3.jar:?]
at toucan.db$select.doInvoke(db.clj:656) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.invoke(RestFn.java:410) [v0.38.0-rc3.jar:?]
at clojure.lang.AFn.applyToHelper(AFn.java:154) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:132) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invokeStatic(core.clj:667) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invoke(core.clj:660) [v0.38.0-rc3.jar:?]
at toucan.db$select_field__GT_field.invokeStatic(db.clj:704) [v0.38.0-rc3.jar:?]
at toucan.db$select_field__GT_field.doInvoke(db.clj:697) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.invoke(RestFn.java:445) [v0.38.0-rc3.jar:?]
at clojure.lang.AFn.applyToHelper(AFn.java:160) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:132) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invokeStatic(core.clj:671) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invoke(core.clj:660) [v0.38.0-rc3.jar:?]
at toucan.db$select_id__GT_field.invokeStatic(db.clj:723) [v0.38.0-rc3.jar:?]
at toucan.db$select_id__GT_field.doInvoke(db.clj:716) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.invoke(RestFn.java:425) [v0.38.0-rc3.jar:?]
at metabase.cmd.copy$fn__74585$overwrite_encrypted_fields_to_plaintext_BANG___74590$fn__74591.invoke(copy.clj:296) [v0.38.0-rc3.jar:?]
at metabase.cmd.copy$fn__74585$overwrite_encrypted_fields_to_plaintext_BANG___74590.invoke(copy.clj:285) [v0.38.0-rc3.jar:?]
at metabase.cmd.dump_to_h2$dump_to_h2_BANG_.invokeStatic(dump_to_h2.clj:38) [v0.38.0-rc3.jar:?]
at metabase.cmd.dump_to_h2$dump_to_h2_BANG_.invoke(dump_to_h2.clj:19) [v0.38.0-rc3.jar:?]
at clojure.lang.Var.invoke(Var.java:388) [v0.38.0-rc3.jar:?]
at metabase.cmd$dump_to_h2.invokeStatic(cmd.clj:54) [v0.38.0-rc3.jar:?]
at metabase.cmd$dump_to_h2.doInvoke(cmd.clj:46) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:139) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invokeStatic(core.clj:665) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invoke(core.clj:660) [v0.38.0-rc3.jar:?]
at metabase.cmd$run_cmd$fn__74121.invoke(cmd.clj:171) [v0.38.0-rc3.jar:?]
at metabase.cmd$run_cmd.invokeStatic(cmd.clj:171) [v0.38.0-rc3.jar:?]
at metabase.cmd$run_cmd.invoke(cmd.clj:167) [v0.38.0-rc3.jar:?]
at clojure.lang.Var.invoke(Var.java:388) [v0.38.0-rc3.jar:?]
at metabase.core$run_cmd.invokeStatic(core.clj:148) [v0.38.0-rc3.jar:?]
at metabase.core$run_cmd.invoke(core.clj:146) [v0.38.0-rc3.jar:?]
at metabase.core$_main.invokeStatic(core.clj:170) [v0.38.0-rc3.jar:?]
at metabase.core$_main.doInvoke(core.clj:165) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:137) [v0.38.0-rc3.jar:?]
at metabase.core.main(Unknown Source) [v0.38.0-rc3.jar:?]
Dump complete
```
|
1.0
|
dump-to-h2 with --dump-plaintext should check for presence of MB_ENCRYPTION_SECRET_KEY - **Describe the bug**
When dumping a Metabase app db to H2 and decrypting it in the process, we should test for the existence of `MB_ENCRYPTION_SECRET_KEY` and exit with a non-zero exit code if it is not set or if we fail to decrypt values.
It appears that the exception is being swallowed somewhere, so the process exits with 0.
**Logs**
```text
21-01-29 19:35:33 pop-os DEBUG [submarine.snapshots:76] - run: exit code: 0
21-01-29 19:35:33 pop-os DEBUG [submarine.snapshots:77] - run: stdout: WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2021-01-29 11:35:15,353 INFO metabase.util :: Maximum memory available to JVM: 7.8 GB
2021-01-29 11:35:22,465 INFO util.encryption :: Saved credentials encryption is DISABLED for this Metabase instance. ?
For more information, see https://metabase.com/docs/latest/operations-guide/encrypting-database-details-at-rest.html
2021-01-29 11:35:25,913 INFO metabase.core ::
Metabase v0.38.0-rc3 (df26940 master)
Copyright © 2021 Metabase, Inc.
Metabase Enterprise Edition extensions are NOT PRESENT.
2021-01-29 11:35:25,919 WARN metabase.core :: WARNING: You have enabled namespace tracing, which could log sensitive information like db passwords.
Dumping from configured Metabase db to H2 file /tmp/b9c4cbfe-c860-4a5f-9b58-9b5a7c161d69.db
2021-01-29 11:35:25,947 INFO db.setup :: Verifying postgres Database Connection ...
2021-01-29 11:35:26,659 INFO db.setup :: Successfully verified PostgreSQL 11.7 application database connection. ?
2021-01-29 11:35:26,661 INFO db.setup :: Running Database Migrations...
2021-01-29 11:35:26,909 INFO db.setup :: Setting up Liquibase...
2021-01-29 11:35:27,044 INFO db.setup :: Liquibase is ready.
2021-01-29 11:35:27,045 INFO db.liquibase :: Checking if Database has unrun migrations...
2021-01-29 11:35:28,769 INFO db.liquibase :: Database has unrun migrations. Waiting for migration lock to be cleared...
2021-01-29 11:35:28,814 INFO db.liquibase :: Migration lock is cleared. Running migrations...
2021-01-29 11:35:28,969 INFO db.setup :: Database Migrations Current ... ?
2021-01-29 11:35:28,969 INFO db.data-migrations :: Running all necessary data migrations, this may take a minute.
2021-01-29 11:35:29,163 INFO db.data-migrations :: Finished running data migrations.
Set up postgres source database and run migrations... Database setup took 3.2 s
[OK]
2021-01-29 11:35:29,169 INFO db.setup :: Verifying h2 Database Connection ...
2021-01-29 11:35:29,378 INFO db.setup :: Successfully verified H2 1.4.197 (2018-03-18) application database connection. ?
2021-01-29 11:35:29,381 INFO db.setup :: Running Database Migrations...
2021-01-29 11:35:29,392 INFO db.setup :: Setting up Liquibase...
2021-01-29 11:35:29,417 INFO db.setup :: Liquibase is ready.
2021-01-29 11:35:29,417 INFO db.liquibase :: Checking if Database has unrun migrations...
2021-01-29 11:35:29,872 INFO db.liquibase :: Database has unrun migrations. Waiting for migration lock to be cleared...
2021-01-29 11:35:29,924 INFO db.liquibase :: Migration lock is cleared. Running migrations...
2021-01-29 11:35:31,742 INFO db.setup :: Database Migrations Current ... ?
Set up h2 target database and run migrations... Database setup took 2.6 s
[OK]
Testing if target h2 database is already populated... [OK]
Temporarily disabling DB constraints... [OK]
Copying instances of Database.... copied 1 instances.
Copying instances of User.... copied 1 instances.
Copying instances of Setting.... copied 6 instances.
Copying instances of Table.... copied 4 instances.
Copying instances of Field.... copied 36 instances.
Copying instances of FieldValues.... copied 5 instances.
Copying instances of Revision.... copied 5 instances.
Copying instances of ViewLog.... copied 4 instances.
Copying instances of Session.... copied 1 instances.
Copying instances of Collection.... copied 1 instances.
Copying instances of Dashboard.... copied 1 instances.
Copying instances of Card.... copied 1 instances.
Copying instances of DashboardCard.... copied 1 instances.
Copying instances of Activity.... copied 5 instances.
Copying instances of PermissionsGroup.... copied 3 instances.
Copying instances of PermissionsGroupMembership.... copied 2 instances.
Copying instances of Permissions.... copied 4 instances.
Copying instances of DataMigrations.... copied 13 instances.
Re-enabling DB constraints... [OK]
2021-01-29 11:35:33,394 ERROR models.interface :: Error parsing JSON
com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'kJY': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (StringReader); line: 1, column: 4]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840) ~[v0.38.0-rc3.jar:?]
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:722) ~[v0.38.0-rc3.jar:?]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._reportInvalidToken(ReaderBasedJsonParser.java:2868) ~[v0.38.0-rc3.jar:?]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._handleOddValue(ReaderBasedJsonParser.java:1914) ~[v0.38.0-rc3.jar:?]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:773) ~[v0.38.0-rc3.jar:?]
at cheshire.parse$parse.invokeStatic(parse.clj:90) ~[v0.38.0-rc3.jar:?]
at cheshire.parse$parse.invoke(parse.clj:88) ~[v0.38.0-rc3.jar:?]
at cheshire.core$parse_string.invokeStatic(core.clj:208) ~[v0.38.0-rc3.jar:?]
at cheshire.core$parse_string.invoke(core.clj:194) ~[v0.38.0-rc3.jar:?]
at cheshire.core$parse_string.invokeStatic(core.clj:205) ~[v0.38.0-rc3.jar:?]
at cheshire.core$parse_string.invoke(core.clj:194) ~[v0.38.0-rc3.jar:?]
at metabase.models.interface$json_out.invokeStatic(interface.clj:39) [v0.38.0-rc3.jar:?]
at metabase.models.interface$json_out.invoke(interface.clj:36) [v0.38.0-rc3.jar:?]
at metabase.models.interface$json_out_with_keywordization.invokeStatic(interface.clj:48) [v0.38.0-rc3.jar:?]
at metabase.models.interface$json_out_with_keywordization.invoke(interface.clj:45) [v0.38.0-rc3.jar:?]
at clojure.core$comp$fn__5807.invoke(core.clj:2569) [v0.38.0-rc3.jar:?]
at clojure.lang.AFn.applyToHelper(AFn.java:154) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:132) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invokeStatic(core.clj:665) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invoke(core.clj:660) [v0.38.0-rc3.jar:?]
at clojure.core.memoize$through_STAR_$fn__4818.invoke(memoize.clj:107) [v0.38.0-rc3.jar:?]
at clojure.core.cache$through$fn__4541.invoke(cache.clj:55) [v0.38.0-rc3.jar:?]
at clojure.core.memoize$through_STAR_$fn__4814$fn__4815.invoke(memoize.clj:106) [v0.38.0-rc3.jar:?]
at clojure.core.memoize.RetryingDelay.deref(memoize.clj:47) [v0.38.0-rc3.jar:?]
at clojure.core$deref.invokeStatic(core.clj:2320) [v0.38.0-rc3.jar:?]
at clojure.core$deref.invoke(core.clj:2306) [v0.38.0-rc3.jar:?]
at clojure.core.memoize$cached_function$fn__4882.doInvoke(memoize.clj:231) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:137) [v0.38.0-rc3.jar:?]
at clojure.lang.AFunction$1.doInvoke(AFunction.java:31) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.invoke(RestFn.java:408) [v0.38.0-rc3.jar:?]
at toucan.models$apply_type_fns$iter__25324__25328$fn__25329.invoke(models.clj:304) [v0.38.0-rc3.jar:?]
at clojure.lang.LazySeq.sval(LazySeq.java:42) [v0.38.0-rc3.jar:?]
at clojure.lang.LazySeq.seq(LazySeq.java:51) [v0.38.0-rc3.jar:?]
at clojure.lang.RT.seq(RT.java:535) [v0.38.0-rc3.jar:?]
at clojure.core$seq__5402.invokeStatic(core.clj:137) [v0.38.0-rc3.jar:?]
at clojure.core.protocols$seq_reduce.invokeStatic(protocols.clj:24) [v0.38.0-rc3.jar:?]
at clojure.core.protocols$fn__8146.invokeStatic(protocols.clj:75) [v0.38.0-rc3.jar:?]
at clojure.core.protocols$fn__8146.invoke(protocols.clj:75) [v0.38.0-rc3.jar:?]
at clojure.core.protocols$fn__8088$G__8083__8101.invoke(protocols.clj:13) [v0.38.0-rc3.jar:?]
at clojure.core$reduce.invokeStatic(core.clj:6828) [v0.38.0-rc3.jar:?]
at clojure.core$into.invokeStatic(core.clj:6895) [v0.38.0-rc3.jar:?]
at clojure.core$into.invoke(core.clj:6887) [v0.38.0-rc3.jar:?]
at toucan.models$apply_type_fns.invokeStatic(models.clj:302) [v0.38.0-rc3.jar:?]
at toucan.models$apply_type_fns.invoke(models.clj:299) [v0.38.0-rc3.jar:?]
at toucan.models$do_post_select.invokeStatic(models.clj:349) [v0.38.0-rc3.jar:?]
at toucan.models$do_post_select.invoke(models.clj:344) [v0.38.0-rc3.jar:?]
at toucan.db$do_post_select$iter__27965__27969$fn__27970.invoke(db.clj:373) [v0.38.0-rc3.jar:?]
at clojure.lang.LazySeq.sval(LazySeq.java:42) [v0.38.0-rc3.jar:?]
at clojure.lang.LazySeq.seq(LazySeq.java:51) [v0.38.0-rc3.jar:?]
at clojure.lang.RT.seq(RT.java:535) [v0.38.0-rc3.jar:?]
at clojure.lang.LazilyPersistentVector.create(LazilyPersistentVector.java:44) [v0.38.0-rc3.jar:?]
at clojure.core$vec.invokeStatic(core.clj:377) [v0.38.0-rc3.jar:?]
at clojure.core$vec.invoke(core.clj:367) [v0.38.0-rc3.jar:?]
at toucan.db$do_post_select.invokeStatic(db.clj:372) [v0.38.0-rc3.jar:?]
at toucan.db$do_post_select.invoke(db.clj:363) [v0.38.0-rc3.jar:?]
at toucan.db$simple_select.invokeStatic(db.clj:394) [v0.38.0-rc3.jar:?]
at toucan.db$simple_select.invoke(db.clj:383) [v0.38.0-rc3.jar:?]
at toucan.db$select.invokeStatic(db.clj:662) [v0.38.0-rc3.jar:?]
at toucan.db$select.doInvoke(db.clj:656) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.invoke(RestFn.java:410) [v0.38.0-rc3.jar:?]
at clojure.lang.AFn.applyToHelper(AFn.java:154) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:132) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invokeStatic(core.clj:667) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invoke(core.clj:660) [v0.38.0-rc3.jar:?]
at toucan.db$select_field__GT_field.invokeStatic(db.clj:704) [v0.38.0-rc3.jar:?]
at toucan.db$select_field__GT_field.doInvoke(db.clj:697) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.invoke(RestFn.java:445) [v0.38.0-rc3.jar:?]
at clojure.lang.AFn.applyToHelper(AFn.java:160) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:132) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invokeStatic(core.clj:671) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invoke(core.clj:660) [v0.38.0-rc3.jar:?]
at toucan.db$select_id__GT_field.invokeStatic(db.clj:723) [v0.38.0-rc3.jar:?]
at toucan.db$select_id__GT_field.doInvoke(db.clj:716) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.invoke(RestFn.java:425) [v0.38.0-rc3.jar:?]
at metabase.cmd.copy$fn__74585$overwrite_encrypted_fields_to_plaintext_BANG___74590$fn__74591.invoke(copy.clj:296) [v0.38.0-rc3.jar:?]
at metabase.cmd.copy$fn__74585$overwrite_encrypted_fields_to_plaintext_BANG___74590.invoke(copy.clj:285) [v0.38.0-rc3.jar:?]
at metabase.cmd.dump_to_h2$dump_to_h2_BANG_.invokeStatic(dump_to_h2.clj:38) [v0.38.0-rc3.jar:?]
at metabase.cmd.dump_to_h2$dump_to_h2_BANG_.invoke(dump_to_h2.clj:19) [v0.38.0-rc3.jar:?]
at clojure.lang.Var.invoke(Var.java:388) [v0.38.0-rc3.jar:?]
at metabase.cmd$dump_to_h2.invokeStatic(cmd.clj:54) [v0.38.0-rc3.jar:?]
at metabase.cmd$dump_to_h2.doInvoke(cmd.clj:46) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:139) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invokeStatic(core.clj:665) [v0.38.0-rc3.jar:?]
at clojure.core$apply.invoke(core.clj:660) [v0.38.0-rc3.jar:?]
at metabase.cmd$run_cmd$fn__74121.invoke(cmd.clj:171) [v0.38.0-rc3.jar:?]
at metabase.cmd$run_cmd.invokeStatic(cmd.clj:171) [v0.38.0-rc3.jar:?]
at metabase.cmd$run_cmd.invoke(cmd.clj:167) [v0.38.0-rc3.jar:?]
at clojure.lang.Var.invoke(Var.java:388) [v0.38.0-rc3.jar:?]
at metabase.core$run_cmd.invokeStatic(core.clj:148) [v0.38.0-rc3.jar:?]
at metabase.core$run_cmd.invoke(core.clj:146) [v0.38.0-rc3.jar:?]
at metabase.core$_main.invokeStatic(core.clj:170) [v0.38.0-rc3.jar:?]
at metabase.core$_main.doInvoke(core.clj:165) [v0.38.0-rc3.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:137) [v0.38.0-rc3.jar:?]
at metabase.core.main(Unknown Source) [v0.38.0-rc3.jar:?]
Dump complete
```
|
non_process
|
dump to with dump plaintext should check for presence of mb encryption secret key describe the bug when dumping a metabase app db to and decrypting it in the process we should test for the existence of mb encryption secret key and exit with a non zero exit code if it is not set or we fail to decrypt values it appears that the exception is being swallowed somewhere so the process exits with logs text pop os debug run exit code pop os debug run stdout warning sun reflect reflection getcallerclass is not supported this will impact performance info metabase util maximum memory available to jvm gb info util encryption saved credentials encryption is disabled for this metabase instance for more information see info metabase core metabase master copyright metabase inc metabase enterprise edition extensions are not present warn metabase core warning you have enabled namespace tracing which could log sensitive information like db passwords dumping from configured metabase db to file tmp db info db setup verifying postgres database connection info db setup successfully verified postgresql application database connection info db setup running database migrations info db setup setting up liquibase info db setup liquibase is ready info db liquibase checking if database has unrun migrations info db liquibase database has unrun migrations waiting for migration lock to be cleared info db liquibase migration lock is cleared running migrations info db setup database migrations current info db data migrations running all necessary data migrations this may take a minute info db data migrations finished running data migrations set up postgres source database and run migrations database setup took s info db setup database connection info db setup successfully verified application database connection info db setup running database migrations info db setup setting up liquibase info db setup liquibase is ready info db liquibase checking if database has unrun migrations info db liquibase database has unrun migrations waiting for migration lock to be cleared info db liquibase migration lock is cleared running migrations info db setup database migrations current set up target database and run migrations database setup took s testing if target database is already populated temporarily disabling db constraints copying instances of database copied instances copying instances of user copied instances copying instances of setting copied instances copying instances of table copied instances copying instances of field copied instances copying instances of fieldvalues copied instances copying instances of revision copied instances copying instances of viewlog copied instances copying instances of session copied instances copying instances of collection copied instances copying instances of dashboard copied instances copying instances of card copied instances copying instances of dashboardcard copied instances copying instances of activity copied instances copying instances of permissionsgroup copied instances copying instances of permissionsgroupmembership copied instances copying instances of permissions copied instances copying instances of datamigrations copied instances re enabling db constraints error models interface error parsing json com fasterxml jackson core jsonparseexception unrecognized token kjy was expecting json string number array object or token null true or false at at com fasterxml jackson core jsonparser constructerror jsonparser java at com fasterxml jackson core base parserminimalbase reporterror 
parserminimalbase java at com fasterxml jackson core json readerbasedjsonparser reportinvalidtoken readerbasedjsonparser java at com fasterxml jackson core json readerbasedjsonparser handleoddvalue readerbasedjsonparser java at com fasterxml jackson core json readerbasedjsonparser nexttoken readerbasedjsonparser java at cheshire parse parse invokestatic parse clj at cheshire parse parse invoke parse clj at cheshire core parse string invokestatic core clj at cheshire core parse string invoke core clj at cheshire core parse string invokestatic core clj at cheshire core parse string invoke core clj at metabase models interface json out invokestatic interface clj at metabase models interface json out invoke interface clj at metabase models interface json out with keywordization invokestatic interface clj at metabase models interface json out with keywordization invoke interface clj at clojure core comp fn invoke core clj at clojure lang afn applytohelper afn java at clojure lang restfn applyto restfn java at clojure core apply invokestatic core clj at clojure core apply invoke core clj at clojure core memoize through star fn invoke memoize clj at clojure core cache through fn invoke cache clj at clojure core memoize through star fn fn invoke memoize clj at clojure core memoize retryingdelay deref memoize clj at clojure core deref invokestatic core clj at clojure core deref invoke core clj at clojure core memoize cached function fn doinvoke memoize clj at clojure lang restfn applyto restfn java at clojure lang afunction doinvoke afunction java at clojure lang restfn invoke restfn java at toucan models apply type fns iter fn invoke models clj at clojure lang lazyseq sval lazyseq java at clojure lang lazyseq seq lazyseq java at clojure lang rt seq rt java at clojure core seq invokestatic core clj at clojure core protocols seq reduce invokestatic protocols clj at clojure core protocols fn invokestatic protocols clj at clojure core protocols fn invoke protocols clj at clojure core protocols fn g invoke protocols clj at clojure core reduce invokestatic core clj at clojure core into invokestatic core clj at clojure core into invoke core clj at toucan models apply type fns invokestatic models clj at toucan models apply type fns invoke models clj at toucan models do post select invokestatic models clj at toucan models do post select invoke models clj at toucan db do post select iter fn invoke db clj at clojure lang lazyseq sval lazyseq java at clojure lang lazyseq seq lazyseq java at clojure lang rt seq rt java at clojure lang lazilypersistentvector create lazilypersistentvector java at clojure core vec invokestatic core clj at clojure core vec invoke core clj at toucan db do post select invokestatic db clj at toucan db do post select invoke db clj at toucan db simple select invokestatic db clj at toucan db simple select invoke db clj at toucan db select invokestatic db clj at toucan db select doinvoke db clj at clojure lang restfn invoke restfn java at clojure lang afn applytohelper afn java at clojure lang restfn applyto restfn java at clojure core apply invokestatic core clj at clojure core apply invoke core clj at toucan db select field gt field invokestatic db clj at toucan db select field gt field doinvoke db clj at clojure lang restfn invoke restfn java at clojure lang afn applytohelper afn java at clojure lang restfn applyto restfn java at clojure core apply invokestatic core clj at clojure core apply invoke core clj at toucan db select id gt field invokestatic db clj at toucan db select id gt 
field doinvoke db clj at clojure lang restfn invoke restfn java at metabase cmd copy fn overwrite encrypted fields to plaintext bang fn invoke copy clj at metabase cmd copy fn overwrite encrypted fields to plaintext bang invoke copy clj at metabase cmd dump to dump to bang invokestatic dump to clj at metabase cmd dump to dump to bang invoke dump to clj at clojure lang var invoke var java at metabase cmd dump to invokestatic cmd clj at metabase cmd dump to doinvoke cmd clj at clojure lang restfn applyto restfn java at clojure core apply invokestatic core clj at clojure core apply invoke core clj at metabase cmd run cmd fn invoke cmd clj at metabase cmd run cmd invokestatic cmd clj at metabase cmd run cmd invoke cmd clj at clojure lang var invoke var java at metabase core run cmd invokestatic core clj at metabase core run cmd invoke core clj at metabase core main invokestatic core clj at metabase core main doinvoke core clj at clojure lang restfn applyto restfn java at metabase core main unknown source dump complete
| 0
|
13,753
| 16,504,062,527
|
IssuesEvent
|
2021-05-25 17:03:18
|
GoogleCloudPlatform/anthos-samples
|
https://api.github.com/repos/GoogleCloudPlatform/anthos-samples
|
closed
|
Anthos Bare Metal Terraform Testing Framework and Release Design
|
priority: p2 samples type: process
|
ABM Terraform needs a testing & test framework design document. The setup needs to be tested at a few levels -
1. Write Test Cases and come up with a few examples for the design
2. Acceptance Tests (https://www.terraform.io/docs/extend/testing/acceptance-tests/index.html) and Unit Tests (https://www.terraform.io/docs/extend/testing/unit-testing.html)
3. Release Versions and Releases (GitHub branches)
|
1.0
|
Anthos Bare Metal Terraform Testing Framework and Release Design - ABM Terraform needs a testing & test framework design document. The setup needs to be tested at a few levels -
1. Write Test Cases and come up with a few examples for the design
2. Acceptance Tests (https://www.terraform.io/docs/extend/testing/acceptance-tests/index.html) and Unit Tests (https://www.terraform.io/docs/extend/testing/unit-testing.html)
3. Release Versions and Releases (GitHub branches)
|
process
|
anthos bare metal terraform testing framework and release design abm terraform needs a testing test framework design document the setup needs to be tested at a few levels write test cases and come up with a few examples for the design acceptance tests url and unit tests url release versions and releases github branches
| 1
|
344,555
| 24,818,548,183
|
IssuesEvent
|
2022-10-25 14:47:01
|
bc-compsci-club/club-connect
|
https://api.github.com/repos/bc-compsci-club/club-connect
|
opened
|
Modify the glossary into a table in the Contributing.md file
|
documentation good first issue
|
## Description
Convert the glossary in the [Contributing.md](https://github.com/bc-compsci-club/club-connect/blob/master/CONTRIBUTING.md#glossary) file into a table
## Desired Outcome
The table should have 2 columns with a header row. See example below
|Abbreviation|Term|
|---|---|
|cs|Computer Science|
|
1.0
|
Modify the glossary into a table in the Contributing.md file - ## Description
Convert the glossary in the [Contributing.md](https://github.com/bc-compsci-club/club-connect/blob/master/CONTRIBUTING.md#glossary) file into a table
## Desired Outcome
The table should have 2 columns with a header row. See example below
|Abbreviation|Term|
|---|---|
|cs|Computer Science|
|
non_process
|
modify the glossary into a table in the contributing md file description update the glossary in the file into a table desired outcome the table should have columns with a header row see example below abbreviation term cs computer science
| 0
|
19,251
| 25,445,170,993
|
IssuesEvent
|
2022-11-24 04:56:33
|
python/cpython
|
https://api.github.com/repos/python/cpython
|
reopened
|
asyncio: support multiprocessing (support fork)
|
type-feature expert-asyncio 3.12 expert-multiprocessing
|
BPO | [22087](https://bugs.python.org/issue22087)
--- | :---
Nosy | @gvanrossum, @pitrou, @1st1, @thehesiod, @miss-islington
PRs | <li>python/cpython#7208</li><li>python/cpython#7215</li><li>python/cpython#7218</li><li>python/cpython#7226</li><li>python/cpython#7232</li><li>python/cpython#7233</li>
Files | <li>[test_loop.py](https://bugs.python.org/file36117/test_loop.py "Uploaded as text/plain at 2014-07-26.18:01:38 by dan.oreilly"): Test script demonstrating the issue</li><li>[handle_mp_unix.diff](https://bugs.python.org/file36118/handle_mp_unix.diff "Uploaded as text/plain at 2014-07-26.18:20:15 by dan.oreilly"): Patch that makes _UnixDefaultEventLoopPolicy create a new loop object if get_event_loop is called in a forked mp child process</li><li>[handle-mp_unix2.patch](https://bugs.python.org/file36119/handle-mp_unix2.patch "Uploaded as text/plain at 2014-07-26.20:13:57 by dan.oreilly"): Use os.getpid() instead of multiprocessing. Store pid state in Policy instance rather than the Loop instance.</li><li>[handle_mp_unix_with_test.diff](https://bugs.python.org/file36134/handle_mp_unix_with_test.diff "Uploaded as text/plain at 2014-07-27.16:09:52 by dan.oreilly"): Adds a unit test to previous patch</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2014-07-26.18:01:10.150>
labels = ['type-bug', 'expert-asyncio']
title = 'asyncio: support multiprocessing (support fork)'
updated_at = <Date 2018-05-30.00:56:36.541>
user = 'https://bugs.python.org/danoreilly'
```
bugs.python.org fields:
```python
activity = <Date 2018-05-30.00:56:36.541>
actor = 'yselivanov'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['asyncio']
creation = <Date 2014-07-26.18:01:10.150>
creator = 'dan.oreilly'
dependencies = []
files = ['36117', '36118', '36119', '36134']
hgrepos = []
issue_num = 22087
keywords = ['patch']
message_count = 23.0
messages = ['224082', '224084', '224085', '224097', '224125', '224140', '224143', '224144', '224145', '226698', '235404', '235411', '288327', '297222', '297226', '297227', '297229', '318077', '318092', '318135', '318140', '318143', '318144']
nosy_count = 7.0
nosy_names = ['gvanrossum', 'pitrou', 'zmedico', 'yselivanov', 'thehesiod', 'dan.oreilly', 'miss-islington']
pr_nums = ['7208', '7215', '7218', '7226', '7232', '7233']
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue22087'
versions = ['Python 3.4', 'Python 3.5', 'Python 3.6']
```
</p></details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-99539
* gh-99745
<!-- /gh-linked-prs -->
|
1.0
|
asyncio: support multiprocessing (support fork) - BPO | [22087](https://bugs.python.org/issue22087)
--- | :---
Nosy | @gvanrossum, @pitrou, @1st1, @thehesiod, @miss-islington
PRs | <li>python/cpython#7208</li><li>python/cpython#7215</li><li>python/cpython#7218</li><li>python/cpython#7226</li><li>python/cpython#7232</li><li>python/cpython#7233</li>
Files | <li>[test_loop.py](https://bugs.python.org/file36117/test_loop.py "Uploaded as text/plain at 2014-07-26.18:01:38 by dan.oreilly"): Test script demonstrating the issue</li><li>[handle_mp_unix.diff](https://bugs.python.org/file36118/handle_mp_unix.diff "Uploaded as text/plain at 2014-07-26.18:20:15 by dan.oreilly"): Patch that makes _UnixDefaultEventLoopPolicy create a new loop object if get_event_loop is called in a forked mp child process</li><li>[handle-mp_unix2.patch](https://bugs.python.org/file36119/handle-mp_unix2.patch "Uploaded as text/plain at 2014-07-26.20:13:57 by dan.oreilly"): Use os.getpid() instead of multiprocessing. Store pid state in Policy instance rather than the Loop instance.</li><li>[handle_mp_unix_with_test.diff](https://bugs.python.org/file36134/handle_mp_unix_with_test.diff "Uploaded as text/plain at 2014-07-27.16:09:52 by dan.oreilly"): Adds a unit test to previous patch</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2014-07-26.18:01:10.150>
labels = ['type-bug', 'expert-asyncio']
title = 'asyncio: support multiprocessing (support fork)'
updated_at = <Date 2018-05-30.00:56:36.541>
user = 'https://bugs.python.org/danoreilly'
```
bugs.python.org fields:
```python
activity = <Date 2018-05-30.00:56:36.541>
actor = 'yselivanov'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['asyncio']
creation = <Date 2014-07-26.18:01:10.150>
creator = 'dan.oreilly'
dependencies = []
files = ['36117', '36118', '36119', '36134']
hgrepos = []
issue_num = 22087
keywords = ['patch']
message_count = 23.0
messages = ['224082', '224084', '224085', '224097', '224125', '224140', '224143', '224144', '224145', '226698', '235404', '235411', '288327', '297222', '297226', '297227', '297229', '318077', '318092', '318135', '318140', '318143', '318144']
nosy_count = 7.0
nosy_names = ['gvanrossum', 'pitrou', 'zmedico', 'yselivanov', 'thehesiod', 'dan.oreilly', 'miss-islington']
pr_nums = ['7208', '7215', '7218', '7226', '7232', '7233']
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue22087'
versions = ['Python 3.4', 'Python 3.5', 'Python 3.6']
```
</p></details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-99539
* gh-99745
<!-- /gh-linked-prs -->
|
process
|
asyncio support multiprocessing support fork bpo nosy gvanrossum pitrou thehesiod miss islington prs python cpython python cpython python cpython python cpython python cpython python cpython files uploaded as text plain at by dan oreilly test script demonstrating the issue uploaded as text plain at by dan oreilly patch that makes unixdefaulteventlooppolicy create a new loop object if get event loop is called in a forked mp child process uploaded as text plain at by dan oreilly use os getpid instead of multiprocessing store pid state in policy instance rather than the loop instance uploaded as text plain at by dan oreilly adds a unit test to previous patch note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee none closed at none created at labels title asyncio support multiprocessing support fork updated at user bugs python org fields python activity actor yselivanov assignee none closed false closed date none closer none components creation creator dan oreilly dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution none stage patch review status open superseder none type behavior url versions linked prs gh gh
| 1
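The patches referenced in this record revolve around one idea: store the creating process's pid in the event loop policy and hand forked children a fresh loop. A minimal sketch of that idea follows; it is an illustration, not CPython's committed patch.

```python
import asyncio
import os

class ForkAwarePolicy(asyncio.DefaultEventLoopPolicy):
    """Sketch of the pid-tracking approach from the patches above."""

    def __init__(self):
        super().__init__()
        self._pid = os.getpid()  # pid of the process that created this policy

    def get_event_loop(self):
        if os.getpid() != self._pid:
            # We are in a forked child: discard the inherited loop state
            # and hand back a loop that belongs to this process.
            self._pid = os.getpid()
            loop = self.new_event_loop()
            self.set_event_loop(loop)
            return loop
        return super().get_event_loop()

# Opt in with: asyncio.set_event_loop_policy(ForkAwarePolicy())
```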
|
1,575
| 4,167,473,691
|
IssuesEvent
|
2016-06-20 09:39:26
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
closed
|
Issuance of copies and extracts of mayoral directives and of decisions adopted by the city council and the executive committee - ID_Service - 4 - Izium - Kharkiv oblast.
|
In process of testing in work test
|
Anna Azatovna - a TsNAP (administrative services center) employee - wants to review everything on the test environment first; these are their first services
0506488880
admposl-izyum@ukr.net
|
1.0
|
Issuance of copies and extracts of mayoral directives and of decisions adopted by the city council and the executive committee - ID_Service - 4 - Izium - Kharkiv oblast. -
Anna Azatovna - a TsNAP (administrative services center) employee - wants to review everything on the test environment first; these are their first services
0506488880
admposl-izyum@ukr.net
|
process
|
issuance of copies and extracts of mayoral directives and of decisions adopted by the city council and the executive committee id service izium kharkiv oblast anna azatovna tsnap employee wants to review everything on the test environment first these are their first services admposl izyum ukr net
| 1
|
137,006
| 20,027,342,125
|
IssuesEvent
|
2022-02-01 23:05:36
|
urbit/bridge
|
https://api.github.com/repos/urbit/bridge
|
opened
|
Proxy visibility & set-ability is inconsistent in the wrong way
|
bug L2 design
|
When logged in as a galaxy owner or management proxy, it lets me see & set all of the proxies. This is correct.
When logged in as a galaxy spawn proxy, it lets me see & set both the spawn and voting proxy. The former is correct, the latter will fail if I attempt to change it.
When logged in as a galaxy voting proxy, the "ID" section doesn't show up, so I cannot change the voting proxy, even though I should be able to.
In all cases, I also might want to _see_ the current proxies and/or copy their addresses. This is currently not possible, because they might not show up.
|
1.0
|
Proxy visibility & set-ability is inconsistent in the wrong way - When logged in as a galaxy owner or management proxy, it lets me see & set all of the proxies. This is correct.
When logged in as a galaxy spawn proxy, it lets me see & set both the spawn and voting proxy. The former is correct, the latter will fail if I attempt to change it.
When logged in as a galaxy voting proxy, the "ID" section doesn't show up, so I cannot change the voting proxy, even though I should be able to.
In all cases, I also might want to _see_ the current proxies and/or copy their addresses. This is currently not possible, because they might not show up.
|
non_process
|
proxy visibility set ability is inconsistent in the wrong way when logged in as a galaxy owner or management proxy it lets me see set all of the proxies this is correct when logged in as a galaxy spawn proxy it lets me see set both the spawn and voting proxy the former is correct the latter will fail if i attempt to change it when logged in as a galaxy voting proxy the id section doesn t show up so i cannot change the voting proxy even though i should be able to in all cases i also might want to see the current proxies and or copy their addresses this is currently not possible because they might not show up
| 0
|
90,141
| 11,353,460,418
|
IssuesEvent
|
2020-01-24 15:37:21
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
Conduct Platform Design Review of "View Dependents" Mockup
|
design vsa vsa-ebenefits
|
## Goal
We need the View Dependents screen layout reviewed and approved by the Platform Design Team so we can proceed with frontend development of a system-compliant page.
## Tasks
- [x] Resolve IA questions on landing page
- [x] Schedule a Platform Design Team review of the "View Dependents" mockup
- [ ] Incorporate feedback/directives from the session
- [ ] Schedule a walkthrough of the changes to the updated mockups BE/FE/PM
## Acceptance Criteria
- [ ] Review scheduled with the Platform Design Team
## Next Steps
- [ ] After incorporating rough feedback, a team walkthrough conducted to discuss changes and next steps.
|
1.0
|
Conduct Platform Design Review of "View Dependents" Mockup - ## Goal
We need the View Dependents screen layout reviewed and approved by the Platform Design Team so we can proceed with frontend development of a system-compliant page.
## Tasks
- [x] Resolve IA questions on landing page
- [x] Schedule a Platform Design Team review of the "View Dependents" mockup
- [ ] Incorporate feedback/directives from the session
- [ ] Schedule a walkthrough of the changes to the updated mockups BE/FE/PM
## Acceptance Criteria
- [ ] Review scheduled with the Platform Design Team
## Next Steps
- [ ] After incorporating rough feedback, a team walkthrough conducted to discuss changes and next steps.
|
non_process
|
conduct platform design review of view dependents mockup goal we need the view dependents screen layout reviewed and approved by the platform design team so we can proceed with frontend development of a system compliant page tasks resolve ia questions on landing page schedule a platform design team review of the view dependents mockup incorporate feedback directives from the session schedule a walkthrough of the changes to the updated mockups be fe pm acceptance criteria review scheduled with the platform design team next steps after incorporating rough feedback a team walkthrough conducted to discuss changes and next steps
| 0
|
421,626
| 28,349,671,852
|
IssuesEvent
|
2023-04-12 01:04:07
|
MLRG-CEFET-RJ/atmoseer
|
https://api.github.com/repos/MLRG-CEFET-RJ/atmoseer
|
opened
|
Data Input: Creating a Mock
|
documentation enhancement
|
## Data Input: Creating a Mock
- The three main data sources feeding the weather-forecasting model are:
1. Radiosondes;
2. Numerical Models (differential equations);
3. Weather Stations;
- However, it will not be possible to use the Weather Station data for the current project;
- Therefore, it will fall to one of the contributors to create a Mock that serves as a substitute for the Weather Station data input, generating data that the model can read and process.
### Expected Delivery: 19/04
|
1.0
|
Data Input: Creating a Mock - ## Data Input: Creating a Mock
- The three main data sources feeding the weather-forecasting model are:
1. Radiosondes;
2. Numerical Models (differential equations);
3. Weather Stations;
- However, it will not be possible to use the Weather Station data for the current project;
- Therefore, it will fall to one of the contributors to create a Mock that serves as a substitute for the Weather Station data input, generating data that the model can read and process.
### Expected Delivery: 19/04
|
non_process
|
data input creating a mock data input creating a mock the three main data sources feeding the weather forecasting model are radiosondes numerical models differential equations weather stations however it will not be possible to use the weather station data for the current project therefore it will fall to one of the contributors to create a mock that serves as a substitute for the weather station data input generating data that the model can read and process expected delivery
| 0
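A minimal sketch of the mock this record asks for: a generator of synthetic hourly weather-station readings. The field names and value ranges are illustrative assumptions, not the project's actual station schema.

```python
import random
from datetime import datetime, timedelta

def mock_station_readings(n_hours=24, start=None):
    """Yield fake hourly readings standing in for weather-station input."""
    t = start or datetime(2023, 4, 1)
    for i in range(n_hours):
        yield {
            "timestamp": (t + timedelta(hours=i)).isoformat(),
            "temperature_c": round(random.uniform(18.0, 32.0), 1),
            "humidity_pct": round(random.uniform(40.0, 95.0), 1),
            "pressure_hpa": round(random.uniform(1000.0, 1025.0), 1),
            # Clamp at zero so dry hours report no rainfall.
            "rain_mm": round(max(0.0, random.gauss(0.5, 1.5)), 1),
        }
```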
|
8,062
| 11,223,717,170
|
IssuesEvent
|
2020-01-07 23:34:34
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
Run tests on new library versions
|
testing type: process
|
Although our library versions are pinned in the samples, each time there is a new library version, it'd be good to run the sample tests again to catch breaking changes that cause our samples to be out of date.
- Potentially first limiting to any minor versions, where a major version change would have an expected breaking change.
|
1.0
|
Run tests on new library versions - Although our library versions are pinned in the samples, each time there is a new library version, it'd be good to run the sample tests again to catch breaking changes that cause our samples to be out of date.
- Potentially first limiting to any minor versions, where a major version change would have an expected breaking change.
|
process
|
run tests on new library versions although our library versions are pinned in the samples each time there is a new library version it d be good to run the sample tests again to catch breaking changes that cause our samples to be out of date potentially first limiting to any minor versions where a major version change would have an expected breaking change
| 1
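One way the re-test trigger described in this record could detect new library versions is to compare each `==` pin in a requirements file against PyPI's public JSON API. This is a hedged sketch, not the repository's actual tooling.

```python
import json
import urllib.request

def latest_version(package):
    """Query PyPI's JSON API for the newest release of a package."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["info"]["version"]

def outdated_pins(requirements_path):
    """Yield (package, pinned, latest) for pins that lag behind PyPI."""
    with open(requirements_path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, pinned = line.split("==", 1)
            latest = latest_version(name)
            if latest != pinned:
                yield name, pinned, latest
```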
|
5,169
| 7,941,204,602
|
IssuesEvent
|
2018-07-10 03:06:40
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Defeating the Ddos blocks
|
libs-etherlib status-inprocess type-enhancement
|
I need to create two sets of blooms. The first, which is checked by default, should eliminate the DDos transactions from the bloom filter. There should be a separate set of bloom filters (called blooms_deep) that allow for scanning the entire list of addresses, but only when users ask for it.
Re-write blooms between devCon2 and end of state clear. Write the bloom without any transactions with more than 2000 traces (or better yet see if I can identify the sender of the offending transactions). Then when doing acctScrape, check only the non-Ddos blooms unless told to look through everything.
|
1.0
|
Defeating the Ddos blocks - I need to create two sets of blooms. The first, which is checked by default, should eliminate the DDos transactions from the bloom filter. There should be a separate set of bloom filters (called blooms_deep) that allow for scanning the entire list of addresses, but only when users ask for it.
Re-write blooms between devCon2 and end of state clear. Write the bloom without any transactions with more than 2000 traces (or better yet see if I can identify the sender of the offending transactions). Then when doing acctScrape, check only the non-Ddos blooms unless told to look through everything.
|
process
|
defeating the ddos blocks i need to create two sets of blooms the first which is checked by default should eliminate the ddos transactions from the bloom filter there should be a separate set of bloom filters called blooms deep that allow for scanning the entire list of addresses but only when users ask for it re write blooms between and end of state clear write the bloom without any transactions with more than traces or better yet see if i can identify the sender of the offending transactions then when doing acctscrape check only the non ddos blooms unless told to look through everything
| 1
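The two-tier lookup this record describes — a default bloom set built without DDoS-heavy transactions, plus an opt-in deep set — reduces to choosing which set to scan. The `possibly_contains` method below is an assumed illustrative interface, not quickBlocks' real C++ API.

```python
def find_candidate_blocks(address, blooms, deep_blooms, deep=False):
    """Return block numbers whose bloom filter may contain `address`.

    `blooms` maps block number -> filter built without DDoS-heavy
    transactions; `deep_blooms` covers every transaction and is
    scanned only when the caller explicitly asks for it.
    """
    chosen = deep_blooms if deep else blooms
    return [blk for blk, bloom in chosen.items()
            if bloom.possibly_contains(address)]
```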
|
17,674
| 23,506,231,237
|
IssuesEvent
|
2022-08-18 12:52:25
|
NVIDIA/aistore
|
https://api.github.com/repos/NVIDIA/aistore
|
closed
|
Sometimes failing to restart AIS in GKE: getting "lost or missing mountpath" fatal error
|
good first issue in process
|
Hi,
We're using AIStore in a GKE cluster using the AIS K8S operator.
We have multiple mounts specified in the current spec we're using
```yaml
mounts:
- path: "/ais1"
size: 1000Gi
- path: "/ais2"
size: 1000Gi
```
However over time we notice that target pods can crash and then fail to start up. The startup message indicates the following error:
`FATAL ERROR: t[rlNlzeeu]: [storage integrity error sie#50, for troubleshooting see https://github.com/NVIDIA/aistore/blob/master/docs/troubleshooting.md]: lost or missing mountpath "/ais1" ({Fs:/dev/sdb FsType:ext4 FsID:543639085,1533152499} vs {Path:/ais1 Fs:/dev/sdc FsType:ext4 FsID:543639085,1533152499 Ext:<nil> Enabled:true})
`
It seems that the 2 disks are getting mounted to different devices (`/dev/sdc` and `/dev/sdb`) after restarting which then fails the integrity check on the start up. I can resolve this by removing `.ais.vmd` files (unclear if this is causing data issues yet.)
Do you have any suggestions around this issue? Is there a way to enforce which mount gets mapped to which device?
|
1.0
|
Sometimes failing to restart AIS in GKE: getting "lost or missing mountpath" fatal error - Hi,
We're using AIStore in a GKE cluster using the AIS K8S operator.
We have multiple mounts specified in the current spec we're using
```yaml
mounts:
- path: "/ais1"
size: 1000Gi
- path: "/ais2"
size: 1000Gi
```
However over time we notice that target pods can crash and then fail to start up. The startup message indicates the following error:
`FATAL ERROR: t[rlNlzeeu]: [storage integrity error sie#50, for troubleshooting see https://github.com/NVIDIA/aistore/blob/master/docs/troubleshooting.md]: lost or missing mountpath "/ais1" ({Fs:/dev/sdb FsType:ext4 FsID:543639085,1533152499} vs {Path:/ais1 Fs:/dev/sdc FsType:ext4 FsID:543639085,1533152499 Ext:<nil> Enabled:true})
`
It seems that the 2 disks are getting mounted to different devices (`/dev/sdc` and `/dev/sdb`) after restarting which then fails the integrity check on the start up. I can resolve this by removing `.ais.vmd` files (unclear if this is causing data issues yet.)
Do you have any suggestions around this issue? Is there a way to enforce which mount gets mapped to which device?
|
process
|
sometimes failing to restart ais in gke getting lost or missing mountpath fatal error hi we re using aistore in a gke cluster using the ais operator we have multiple mounts specified in the current spec we re using yaml mounts path size path size however over time we notice that target pods can crash and then fail to start up the startup message indicates the following error fatal error t lost or missing mountpath fs dev sdb fstype fsid vs path fs dev sdc fstype fsid ext enabled true it seems that the disks are getting mounted to different devices dev sdc and dev sdb after restarting which then fails the integrity check on the start up i can resolve this by removing ais vmd files unclear if this is causing data issues yet do you have any suggestions around this issue is there a way to enforce which mount gets mapped to which device
| 1
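Linux does not keep `/dev/sdX` names stable across reboots, which is consistent with the drift this record reports. A small diagnostic sketch for confirming which device currently backs each mountpath:

```python
def device_for(path):
    """Return the block device currently mounted at `path` (Linux only)."""
    with open("/proc/mounts") as fh:
        # /proc/mounts lines look like: "/dev/sdb /ais1 ext4 rw,relatime 0 0"
        mounts = {fields[1]: fields[0]
                  for fields in (line.split() for line in fh)}
    return mounts.get(path)

# On one boot device_for("/ais1") may report /dev/sdb, on the next /dev/sdc,
# which is exactly the mismatch the startup integrity check flags. Mounting
# by filesystem UUID or by PersistentVolume, rather than by device name, is
# the usual way to pin the mapping.
```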
|
8,151
| 4,172,986,640
|
IssuesEvent
|
2016-06-21 08:54:25
|
qorelanguage/qore
|
https://api.github.com/repos/qorelanguage/qore
|
closed
|
build-break in Operator.cpp, gcc 5.3.1
|
bug build fixed
|
```
[ 22%] Building CXX object CMakeFiles/libqore.dir/lib/Operator.cpp.o
/home/kveton/src/qore/git/qore/lib/Operator.cpp: In function ‘int64 op_cmp_double(double, double, ExceptionSink*)’:
/home/kveton/src/qore/git/qore/lib/Operator.cpp:763:18: error: ‘isnan’ was not declared in this scope
if (isnan(left) || isnan(right)) {
^
/home/kveton/src/qore/git/qore/lib/Operator.cpp:763:18: note: suggested alternative:
In file included from /usr/include/c++/5/random:38:0,
from /usr/include/c++/5/bits/stl_algo.h:66,
from /usr/include/c++/5/algorithm:62,
from /home/kveton/src/qore/git/qore/include/qore/common.h:52,
from /home/kveton/src/qore/git/qore/include/qore/Qore.h:45,
from /home/kveton/src/qore/git/qore/lib/Operator.cpp:31:
/usr/include/c++/5/cmath:641:5: note: ‘std::isnan’
isnan(_Tp __x)
^
CMakeFiles/libqore.dir/build.make:654: recipe for target 'CMakeFiles/libqore.dir/lib/Operator.cpp.o' failed
make[2]: *** [CMakeFiles/libqore.dir/lib/Operator.cpp.o] Error 1
CMakeFiles/Makefile2:308: recipe for target 'CMakeFiles/libqore.dir/all' failed
make[1]: *** [CMakeFiles/libqore.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
```
(using mkdir build; cd build; cmake ..; make)
|
1.0
|
build-break in Operator.cpp, gcc 5.3.1 - ```
[ 22%] Building CXX object CMakeFiles/libqore.dir/lib/Operator.cpp.o
/home/kveton/src/qore/git/qore/lib/Operator.cpp: In function ‘int64 op_cmp_double(double, double, ExceptionSink*)’:
/home/kveton/src/qore/git/qore/lib/Operator.cpp:763:18: error: ‘isnan’ was not declared in this scope
if (isnan(left) || isnan(right)) {
^
/home/kveton/src/qore/git/qore/lib/Operator.cpp:763:18: note: suggested alternative:
In file included from /usr/include/c++/5/random:38:0,
from /usr/include/c++/5/bits/stl_algo.h:66,
from /usr/include/c++/5/algorithm:62,
from /home/kveton/src/qore/git/qore/include/qore/common.h:52,
from /home/kveton/src/qore/git/qore/include/qore/Qore.h:45,
from /home/kveton/src/qore/git/qore/lib/Operator.cpp:31:
/usr/include/c++/5/cmath:641:5: note: ‘std::isnan’
isnan(_Tp __x)
^
CMakeFiles/libqore.dir/build.make:654: recipe for target 'CMakeFiles/libqore.dir/lib/Operator.cpp.o' failed
make[2]: *** [CMakeFiles/libqore.dir/lib/Operator.cpp.o] Error 1
CMakeFiles/Makefile2:308: recipe for target 'CMakeFiles/libqore.dir/all' failed
make[1]: *** [CMakeFiles/libqore.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
```
(using mkdir build; cd build; cmake ..; make)
|
non_process
|
build break in operator cpp gcc building cxx object cmakefiles libqore dir lib operator cpp o home kveton src qore git qore lib operator cpp in function ‘ op cmp double double double exceptionsink ’ home kveton src qore git qore lib operator cpp error ‘isnan’ was not declared in this scope if isnan left isnan right home kveton src qore git qore lib operator cpp note suggested alternative in file included from usr include c random from usr include c bits stl algo h from usr include c algorithm from home kveton src qore git qore include qore common h from home kveton src qore git qore include qore qore h from home kveton src qore git qore lib operator cpp usr include c cmath note ‘std isnan’ isnan tp x cmakefiles libqore dir build make recipe for target cmakefiles libqore dir lib operator cpp o failed make error cmakefiles recipe for target cmakefiles libqore dir all failed make error makefile recipe for target all failed make error using mkdir build cd build cmake make
| 0
|
73,080
| 19,569,503,451
|
IssuesEvent
|
2022-01-04 08:03:47
|
envoyproxy/envoy
|
https://api.github.com/repos/envoyproxy/envoy
|
closed
|
Newer release available `rules_foreign_cc`: 0.7.0 (current: 6c0c2af)
|
area/build no stalebot dependencies
|
Package Name: rules_foreign_cc
Current Version: 6c0c2af@2021-09-22 00:35:40
Available Version: 0.7.0@2021-12-03 16:53:29
Upstream releases: https://github.com/bazelbuild/rules_foreign_cc/releases
|
1.0
|
Newer release available `rules_foreign_cc`: 0.7.0 (current: 6c0c2af) -
Package Name: rules_foreign_cc
Current Version: 6c0c2af@2021-09-22 00:35:40
Available Version: 0.7.0@2021-12-03 16:53:29
Upstream releases: https://github.com/bazelbuild/rules_foreign_cc/releases
|
non_process
|
newer release available rules foreign cc current package name rules foreign cc current version available version upstream releases
| 0
|
4,717
| 7,552,566,403
|
IssuesEvent
|
2018-04-19 01:04:28
|
UnbFeelings/unb-feelings-docs
|
https://api.github.com/repos/UnbFeelings/unb-feelings-docs
|
closed
|
Needs of the GQA Team
|
Processo
|
To carry out the audit as well as possible, we need to know a few things that the process does not make clear about how they will be done.
- [ ] Where will the products generated by the process be stored? Wiki, Drive?
- [ ] What will the structure of these output artifacts (products) be? For example: What is the structure of the _backlog_? What will that _backlog_ contain, and how? (A template that must be followed);
- [ ] We need the GQM to be defined as soon as possible;
- NOTE: For any questions, open an _issue_ in our repository.
|
1.0
|
Needs of the GQA Team - To carry out the audit as well as possible, we need to know a few things that the process does not make clear about how they will be done.
- [ ] Where will the products generated by the process be stored? Wiki, Drive?
- [ ] What will the structure of these output artifacts (products) be? For example: What is the structure of the _backlog_? What will that _backlog_ contain, and how? (A template that must be followed);
- [ ] We need the GQM to be defined as soon as possible;
- NOTE: For any questions, open an _issue_ in our repository.
|
process
|
needs of the gqa team to carry out the audit as well as possible we need to know a few things that the process does not make clear about how they will be done where will the products generated by the process be stored wiki drive what will the structure of these output artifacts products be for example what is the structure of the backlog what will that backlog contain and how a template that must be followed we need the gqm to be defined as soon as possible note for any questions open an issue in our repository
| 1
|