Dataset schema:

| Column | Dtype | Range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 – 19 |
| repo | string | lengths 7 – 112 |
| repo_url | string | lengths 36 – 141 |
| action | string | 3 classes |
| title | string | lengths 1 – 744 |
| labels | string | lengths 4 – 574 |
| body | string | lengths 9 – 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 – 211k |
| label | string | 2 classes |
| text | string | lengths 96 – 188k |
| binary_label | int64 | 0 – 1 |
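A minimal sketch for loading a dump like the records below and sanity-checking it against the schema above; the file name `issues.csv` and the use of pandas are assumptions, not part of the original dump.

```python
# A minimal sketch, assuming the records below were exported to "issues.csv"
# (the file name is hypothetical). Load the dump and compare with the schema.
import pandas as pd

df = pd.read_csv("issues.csv")
print(df.dtypes)                    # compare with the schema table above
print(df["label"].value_counts())  # two classes: "process" / "non_process"
print(df["binary_label"].unique()) # 0 / 1, mirroring `label`
```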
430
| 2,862,172,125
|
IssuesEvent
|
2015-06-04 01:42:44
|
rancherio/rancher
|
https://api.github.com/repos/rancherio/rancher
|
closed
|
Name is not set for containers deployed as part of a service.
|
area/service status/blocker status/to-test
|
Server version - rancher/server:v0.23.0-rc2
Steps to reproduce the problem:
Create a service, providing a name.
Activate the service.
When containers get created, they have no name set.
|
1.0
|
Name is not set for containers deployed as part of a service. - Server version - rancher/server:v0.23.0-rc2
Steps to reproduce the problem:
Create a service, providing a name.
Activate the service.
When containers get created, they have no name set.
|
non_process
|
name is not set for containers deployed as part of a service server version rancher server steps to reproduce the problem create a service providing name activate service when containers get created they have no name set
| 0
|
17,806
| 6,517,012,795
|
IssuesEvent
|
2017-08-27 17:24:19
|
mikeboers/PyAV
|
https://api.github.com/repos/mikeboers/PyAV
|
closed
|
Python3 on source scripts/activate.sh
|
bug build
|
```
[w495@localhost PyAV]$ python --version
Python 3.6.1 :: Continuum Analytics, Inc.
[w495@localhost PyAV]$
[w495@localhost PyAV]$ source scripts/activate.sh
File "<string>", line 1
import sys; print sys.prefix
^
SyntaxError: invalid syntax
No $PYAV_LIBRARY_NAME set; defaulting to ffmpeg
No $PYAV_LIBRARY_VERSION set; defaulting to 3.2
File "<string>", line 1
import sys; print "%d.%d" % sys.version_info[:2]
^
SyntaxError: invalid syntax
```
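The two one-liners above fail because `print` is a statement in Python 2 but a function in Python 3. A minimal sketch of version-agnostic equivalents (not necessarily the fix PyAV adopted):

```python
# Parenthesized print calls parse under both Python 2 and Python 3,
# so these are version-agnostic equivalents of the failing one-liners.
import sys

print(sys.prefix)
print("%d.%d" % sys.version_info[:2])
```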
|
1.0
|
Python3 on source scripts/activate.sh - ```
[w495@localhost PyAV]$ python --version
Python 3.6.1 :: Continuum Analytics, Inc.
[w495@localhost PyAV]$
[w495@localhost PyAV]$ source scripts/activate.sh
File "<string>", line 1
import sys; print sys.prefix
^
SyntaxError: invalid syntax
No $PYAV_LIBRARY_NAME set; defaulting to ffmpeg
No $PYAV_LIBRARY_VERSION set; defaulting to 3.2
File "<string>", line 1
import sys; print "%d.%d" % sys.version_info[:2]
^
SyntaxError: invalid syntax
```
|
non_process
|
on source scripts activate sh python version python continuum analytics inc source scripts activate sh file line import sys print sys prefix syntaxerror invalid syntax no pyav library name set defaulting to ffmpeg no pyav library version set defaulting to file line import sys print d d sys version info syntaxerror invalid syntax
| 0
|
784,433
| 27,570,590,688
|
IssuesEvent
|
2023-03-08 08:58:54
|
canonical/vanilla-framework
|
https://api.github.com/repos/canonical/vanilla-framework
|
closed
|
Side navigation drawer on small screens doesn't trap the focus
|
Priority: Medium Bug 🐛 Accessibility
|
via @petermakowski's comment: https://github.com/canonical/juju.is/pull/431#issuecomment-1231665391
> [...] the side navigation is missing a focus trap (if you keep pressing `Tab` you'll eventually start tabbing through elements on the page outside of it).
**Describe the bug**
The side navigation drawer "popup" on small screens doesn't trap the focus. Ideally, focus should move back up to the "Toggle side navigation" button at the top once you tab out of the last link in the side nav (but only on small screens, where the side nav is collapsible).
|
1.0
|
Side navigation drawer on small screens doesn't trap the focus - via @petermakowski's comment: https://github.com/canonical/juju.is/pull/431#issuecomment-1231665391
> [...] the side navigation is missing a focus trap (if you keep pressing `Tab` you'll eventually start tabbing through elements on the page outside of it).
**Describe the bug**
The side navigation drawer "popup" on small screens doesn't trap the focus. Ideally, focus should move back up to the "Toggle side navigation" button at the top once you tab out of the last link in the side nav (but only on small screens, where the side nav is collapsible).
|
non_process
|
side navigation drawer on small screens doesn t trap the focus via petermakowski comment via petermakowski comment the side navigation is missing a focus trap if you keep pressing tab you ll eventually start tabbing through elements on the page outside of it describe the bug the side navigation drawer popup on small screen doesn t trap the focus ideally it should move back up to toggle side navigation button on top once you focus out of last link in the side nav but only on small screens where side nav is collapsible
| 0
|
5,061
| 7,867,139,482
|
IssuesEvent
|
2018-06-23 04:08:57
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
Include Lscale as a new panel in the default plotgen plots (Trac #496)
|
Migrated from Trac bladornr@uwm.edu enhancement post_processing
|
We include a lot of variables in our plotgen plots, but we omit one important one, namely, Lscale, which is output in clubb's zt files. Let's output Lscale for all our CLUBB cases on the standard plotgen plots.
The variables to plot in plotgen are specified in the case files. For instance, for RICO, the case file is [http://carson.math.uwm.edu/trac/clubb/browser/trunk/postprocessing/plotgen/cases/clubb/rico.case here]. More information about plotgen and the case files is given in the TWiki.
Note: there is no Lscale output by SAM, WRF, COAMPS, etc., and so we can't include a thick red benchmark line.
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/496
```json
{
"status": "closed",
"changetime": "2012-06-28T21:55:53",
"description": "We include a lot of variables in our plotgen plots, but we omit one important one, namely, Lscale, which is output in clubb's zt files. Let's output Lscale for all our CLUBB cases on the standard plotgen plots. \n\nThe variables to plot in plotgen are specified in the case files. For instance, for RICO, the case file is [http://carson.math.uwm.edu/trac/clubb/browser/trunk/postprocessing/plotgen/cases/clubb/rico.case here]. More information about plotgen and the case files is given in the TWiki.\n\nNote: there is no Lscale output by SAM, WRF, COAMPS, etc., and so we can't include a thick red benchmark line.\n\n\n\n",
"reporter": "vlarson@uwm.edu",
"cc": "vlarson@uwm.edu",
"resolution": "fixed",
"_ts": "1340920553955391",
"component": "post_processing",
"summary": "Include Lscale as a new panel in the default plotgen plots",
"priority": "minor",
"keywords": "",
"time": "2012-02-07T20:54:59",
"milestone": "",
"owner": "bladornr@uwm.edu",
"type": "enhancement"
}
```
|
1.0
|
Include Lscale as a new panel in the default plotgen plots (Trac #496) - We include a lot of variables in our plotgen plots, but we omit one important one, namely, Lscale, which is output in clubb's zt files. Let's output Lscale for all our CLUBB cases on the standard plotgen plots.
The variables to plot in plotgen are specified in the case files. For instance, for RICO, the case file is [http://carson.math.uwm.edu/trac/clubb/browser/trunk/postprocessing/plotgen/cases/clubb/rico.case here]. More information about plotgen and the case files is given in the TWiki.
Note: there is no Lscale output by SAM, WRF, COAMPS, etc., and so we can't include a thick red benchmark line.
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/496
```json
{
"status": "closed",
"changetime": "2012-06-28T21:55:53",
"description": "We include a lot of variables in our plotgen plots, but we omit one important one, namely, Lscale, which is output in clubb's zt files. Let's output Lscale for all our CLUBB cases on the standard plotgen plots. \n\nThe variables to plot in plotgen are specified in the case files. For instance, for RICO, the case file is [http://carson.math.uwm.edu/trac/clubb/browser/trunk/postprocessing/plotgen/cases/clubb/rico.case here]. More information about plotgen and the case files is given in the TWiki.\n\nNote: there is no Lscale output by SAM, WRF, COAMPS, etc., and so we can't include a thick red benchmark line.\n\n\n\n",
"reporter": "vlarson@uwm.edu",
"cc": "vlarson@uwm.edu",
"resolution": "fixed",
"_ts": "1340920553955391",
"component": "post_processing",
"summary": "Include Lscale as a new panel in the default plotgen plots",
"priority": "minor",
"keywords": "",
"time": "2012-02-07T20:54:59",
"milestone": "",
"owner": "bladornr@uwm.edu",
"type": "enhancement"
}
```
|
process
|
include lscale as a new panel in the default plotgen plots trac we include a lot of variables in our plotgen plots but we omit one important one namely lscale which is output in clubb s zt files let s output lscale for all our clubb cases on the standard plotgen plots the variables to plot in plotgen are specified in the case files for instance for rico the case file is more information about plotgen and the case files is given in the twiki note there is no lscale output by sam wrf coamps etc and so we can t include a thick red benchmark line attachments migrated from json status closed changetime description we include a lot of variables in our plotgen plots but we omit one important one namely lscale which is output in clubb s zt files let s output lscale for all our clubb cases on the standard plotgen plots n nthe variables to plot in plotgen are specified in the case files for instance for rico the case file is more information about plotgen and the case files is given in the twiki n nnote there is no lscale output by sam wrf coamps etc and so we can t include a thick red benchmark line n n n n reporter vlarson uwm edu cc vlarson uwm edu resolution fixed ts component post processing summary include lscale as a new panel in the default plotgen plots priority minor keywords time milestone owner bladornr uwm edu type enhancement
| 1
|
547,320
| 16,041,177,771
|
IssuesEvent
|
2021-04-22 08:05:38
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
openvellum.ecollege.com - site is not usable
|
browser-firefox engine-gecko os-ios priority-normal
|
<!-- @browser: Firefox iOS 33.0 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU OS 14_4_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/33.0 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/71508 -->
**URL**: https://openvellum.ecollege.com/course.html?courseId=16507825
**Browser / Version**: Firefox iOS 33.0
**Operating System**: iOS 14.4.2
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
Doesn't allow mobile Firefox, which is disappointing.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
openvellum.ecollege.com - site is not usable - <!-- @browser: Firefox iOS 33.0 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU OS 14_4_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/33.0 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/71508 -->
**URL**: https://openvellum.ecollege.com/course.html?courseId=16507825
**Browser / Version**: Firefox iOS 33.0
**Operating System**: iOS 14.4.2
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
Doesn't allow mobile Firefox, which is disappointing.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
openvellum ecollege com site is not usable url browser version firefox ios operating system ios tested another browser no problem type site is not usable description browser unsupported steps to reproduce doesn’t allow mobile firefox which is disappointing browser configuration none from with ❤️
| 0
|
1,343
| 3,901,647,139
|
IssuesEvent
|
2016-04-18 11:48:48
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
mRNA stability/turnover/degradation
|
auto-migrated PomBase RNA processes
|
1\.
regulation of mRNA stability \(add related synonym mRNA turnover\)?
Any process that modulates the propensity of mRNA molecules to degradation. Includes processes that both stabilize and destabilize mRNAs.
has no relationship to
mRNA catabolic process
The chemical reactions and pathways resulting in the breakdown of mRNA, messenger RNA, which is responsible for carrying the coded genetic 'message', transcribed from DNA, to sites of protein assembly at the ribosomes.
2\. deadenylated and/or decapped mRNAs are subject to degradation \(5' 3' degradation by the exosome\), so should mRNA deadenylation/mRNA decapping be a child of
regulation of mRNA stability or regulation of catabolism, or should this be captured by concurrent annotations?
Reported by: ValWood
Original Ticket: [geneontology/ontology-requests/6697](https://sourceforge.net/p/geneontology/ontology-requests/6697)
|
1.0
|
mRNA stability/turnover/degradation - 1\.
regulation of mRNA stability \(add related synonym mRNA turnover\)?
Any process that modulates the propensity of mRNA molecules to degradation. Includes processes that both stabilize and destabilize mRNAs.
has no relationship to
mRNA catabolic process
The chemical reactions and pathways resulting in the breakdown of mRNA, messenger RNA, which is responsible for carrying the coded genetic 'message', transcribed from DNA, to sites of protein assembly at the ribosomes.
2\. deadenylated and/or decapped mRNAs are subject to degradation \(5' 3' degradation by the exosome\), so should mRNA deadenylation/mRNA decapping be a child of
regulation of mRNA stability or regulation of catabolism, or should this be captured by concurrent annotations?
Reported by: ValWood
Original Ticket: [geneontology/ontology-requests/6697](https://sourceforge.net/p/geneontology/ontology-requests/6697)
|
process
|
mrna stability turnover degradation regulation of mrna stability add related synonym mrna turnover any process that modulates the propensity of mrna molecules to degradation includes processes that both stabilize and destabilize mrnas has no relationship to mrna catabolic process the chemical reactions and pathways resulting in the breakdown of mrna messenger rna which is responsible for carrying the coded genetic message transcribed from dna to sites of protein assembly at the ribosomes deadenylated and or decapped mrnas are subject to degradation degradation by the exosome so should mrna deadenylation mrna decapping be a child of regulation of mrna stability or regulation of catabolism or should this be captured by concurrent annotations reported by valwood original ticket
| 1
|
8,531
| 11,705,511,057
|
IssuesEvent
|
2020-03-07 16:20:43
|
Jeffail/benthos
|
https://api.github.com/repos/Jeffail/benthos
|
closed
|
Add flatten as a json operator
|
enhancement processors
|
```json
{
"a": "b",
"c": { "d": "e", "f": "g" },
"h": [
{ "i": "j" },
{ "k": "l" }
]
}
```
becomes
```
{
"a": "b",
"c.d": "e",
"c.f": "g",
"h.0.i": "j",
"h.1.k": "l"
}
```
Not sure of the best representation of arrays, but a feature that flattened the JSON would be valuable, e.g. for outputting a fully nested JSON struct to one Kafka topic while at the same time having a separate topic with a flat JSON output.
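A minimal Python sketch of the flattening described above (Benthos itself is written in Go; this illustrates the mapping, not its implementation). Array elements get their index as a key segment:

```python
# Flatten nested dicts/lists into dot-separated keys, matching the example
# above; an illustration only, not the Benthos implementation.
def flatten(obj, prefix=""):
    flat = {}
    items = enumerate(obj) if isinstance(obj, list) else obj.items()
    for key, value in items:
        path = f"{prefix}.{key}" if prefix else str(key)
        if isinstance(value, (dict, list)):
            flat.update(flatten(value, path))
        else:
            flat[path] = value
    return flat

doc = {"a": "b", "c": {"d": "e", "f": "g"}, "h": [{"i": "j"}, {"k": "l"}]}
assert flatten(doc) == {"a": "b", "c.d": "e", "c.f": "g",
                        "h.0.i": "j", "h.1.k": "l"}
```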
|
1.0
|
Add flatten as a json operator - ```json
{
"a": "b",
"c": { "d": "e", "f": "g" },
"h": [
{ "i": "j" },
{ "k": "l" }
]
}
```
becomes
```
{
"a": "b",
"c.d": "e",
"c.f": "g",
"h.0.i": "j",
"h.1.k": "l"
}
```
Not sure of the best representation of arrays, but a feature that flattened the JSON would be valuable, e.g. for outputting a fully nested JSON struct to one Kafka topic while at the same time having a separate topic with a flat JSON output.
|
process
|
add flatten as a json operator json a b c d e f g h i j k l becomes a b c d e c f g h i j h k l not sure on the best representation of arrays but a feature that flattened the json would be valuable for outputting a fully nested json struct to a kafka e g and at the same time have a separate topic with a flat json output
| 1
|
10,927
| 13,726,946,577
|
IssuesEvent
|
2020-10-04 03:14:00
|
SpencerTSterling/AdvancedWebsite
|
https://api.github.com/repos/SpencerTSterling/AdvancedWebsite
|
closed
|
Set up Continuous Integration with GitHub Actions
|
dev process
|
GitHub Actions should be used to build the project on each commit
|
1.0
|
Set up Continuous Integration with GitHub Actions - GitHub Actions should be used to build the project on each commit
|
process
|
set up continuous integration with github actions github actions should be used to build the project on each commit
| 1
|
237,342
| 19,617,870,523
|
IssuesEvent
|
2022-01-07 00:07:51
|
kubernetes/test-infra
|
https://api.github.com/repos/kubernetes/test-infra
|
closed
|
Feature request: ability to search through component logs from CI runs
|
area/prow sig/testing kind/feature lifecycle/rotten
|
<!-- Please only use this template for submitting enhancement requests -->
/area prow
**What would you like to be added**:
Search capability for CI logs so I can search for jobs that match certain failure text. For example, I would really like to be able to search for `Observed a panic` in component logs and find all matching jobs and files.
Gubernator (https://storage.googleapis.com/k8s-gubernator/triage/index.html) is great but doesn't allow me to search node or component logs, only the test output.
**Why is this needed**:
Right now it is impossible to proactively check for failures in CI artifacts; someone has to notice the problem. There is no aggregate way to search node logs.
|
1.0
|
Feature request: ability to search through component logs from CI runs - <!-- Please only use this template for submitting enhancement requests -->
/area prow
**What would you like to be added**:
Search capability for CI logs so I can search for jobs that match certain failure text. For example, I would really like to be able to search for `Observed a panic` in component logs and find all matching jobs and files.
Gubernator (https://storage.googleapis.com/k8s-gubernator/triage/index.html) is great but doesn't allow me to search node or component logs, only the test output.
**Why is this needed**:
Right now it is impossible to proactively check for failures in CI artifacts; someone has to notice the problem. There is no aggregate way to search node logs.
|
non_process
|
feature request ability to search through component logs from ci runs area prow what would you like to be added search capability for ci logs so i can search for jobs that match certain failure text for example i would really like to be able to search for observed a panic in component logs and find all matching jobs and files guberator is great but doesn t allow me to search node or component logs only the test output why is this needed right now it is impossible to proactively check for failures in ci artifacts someone has to notice the problem there is no aggregate way to search node logs
| 0
|
12,514
| 14,963,807,346
|
IssuesEvent
|
2021-01-27 11:04:20
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] [Audit Logs] 'studyId', 'siteId' and 'appId' are incorrect for the events
|
Bug P2 Participant manager datastore Process: Fixed Process: Tested dev
|
**Issue 1: 'studyId', 'siteId' and 'appId' are incorrect for the events**
1. SITE_ADDED_FOR_STUDY
2. PARTICIPANT_EMAIL_ADDED
3. PARTICIPANTS_EMAIL_LIST_IMPORTED
4. PARTICIPANTS_EMAIL_LIST_IMPORT_FAILED
5. PARTICIPANTS_EMAIL_LIST_IMPORT_PARTIAL_FAILED
6. SITE_DECOMMISSIONED_FOR_STUDY
7. SITE_ACTIVATED_FOR_STUDY
8. PARTICIPANT_INVITATION_DISABLED
9. CONSENT_DOCUMENT_DOWNLOADED
10. INVITATION_EMAIL_SENT
11. INVITATION_EMAIL_FAILED
12. PARTICIPANT_INVITATION_ENABLED
13. ENROLLMENT_TARGET_UPDATED
14. SITE_PARTICIPANT_REGISTRY_VIEWED
**Issue 2: 'studyId' and 'appId' are incorrect for the event**
15. STUDY_PARTICIPANT_REGISTRY_VIEWED
**Issue 3: 'appId' is incorrect for the event**
16. APP_PARTICIPANT_REGISTRY_VIEWED
Actual: 'studyId', 'siteId' and 'appId' display the DB value
Expected: 'studyId', 'siteId' and 'appId' should be the custom IDs, i.e. the IDs entered by the user in the SB/PM web app
Sample snippet for the `SITE_ADDED_FOR_STUDY` event
```
{
"insertId": "epzd98g1b9f0co",
"jsonPayload": {
"correlationId": "2eb996c6-4d84-435b-b438-35caf65ae6b2",
"userAccessLevel": null,
"eventCode": "SITE_ADDED_FOR_STUDY",
"platformVersion": "1.0",
"source": "PARTICIPANT MANAGER",
"occurred": 1611313124346,
"mobilePlatform": "UNKNOWN",
"userId": "2c9180897689364401768a08f0060000",
"studyId": "2c91808876fa9409017706d12918002a",
"destinationApplicationVersion": "1.0",
"participantId": null,
"appVersion": "v0.1",
"studyVersion": null,
"siteId": "2c91808977290f8f017729bf13eb0006",
"sourceApplicationVersion": "1.0",
"destination": "PARTICIPANT USER DATASTORE",
"resourceServer": null,
"userIp": "117.211.20.33",
"description": "Site added to study (site ID- 2c91808977290f8f017729bf13eb0006).",
"appId": "2c91808876fa9409017706d11ea10028"
}
```
|
2.0
|
[PM] [Audit Logs] 'studyId', 'siteId' and 'appId' are incorrect for the events - **Issue 1: 'studyId', 'siteId' and 'appId' are incorrect for the events**
1. SITE_ADDED_FOR_STUDY
2. PARTICIPANT_EMAIL_ADDED
3. PARTICIPANTS_EMAIL_LIST_IMPORTED
4. PARTICIPANTS_EMAIL_LIST_IMPORT_FAILED
5. PARTICIPANTS_EMAIL_LIST_IMPORT_PARTIAL_FAILED
6. SITE_DECOMMISSIONED_FOR_STUDY
7. SITE_ACTIVATED_FOR_STUDY
8. PARTICIPANT_INVITATION_DISABLED
9. CONSENT_DOCUMENT_DOWNLOADED
10. INVITATION_EMAIL_SENT
11. INVITATION_EMAIL_FAILED
12. PARTICIPANT_INVITATION_ENABLED
13. ENROLLMENT_TARGET_UPDATED
14. SITE_PARTICIPANT_REGISTRY_VIEWED
**Issue 2: 'studyId' and 'appId' are incorrect for the event**
15. STUDY_PARTICIPANT_REGISTRY_VIEWED
**Issue 3: 'appId' is incorrect for the event**
16. APP_PARTICIPANT_REGISTRY_VIEWED
Actual: 'studyId', 'siteId' and 'appId' display the DB value
Expected: 'studyId', 'siteId' and 'appId' should be the custom IDs, i.e. the IDs entered by the user in the SB/PM web app
Sample snippet for the `SITE_ADDED_FOR_STUDY` event
```
{
"insertId": "epzd98g1b9f0co",
"jsonPayload": {
"correlationId": "2eb996c6-4d84-435b-b438-35caf65ae6b2",
"userAccessLevel": null,
"eventCode": "SITE_ADDED_FOR_STUDY",
"platformVersion": "1.0",
"source": "PARTICIPANT MANAGER",
"occurred": 1611313124346,
"mobilePlatform": "UNKNOWN",
"userId": "2c9180897689364401768a08f0060000",
"studyId": "2c91808876fa9409017706d12918002a",
"destinationApplicationVersion": "1.0",
"participantId": null,
"appVersion": "v0.1",
"studyVersion": null,
"siteId": "2c91808977290f8f017729bf13eb0006",
"sourceApplicationVersion": "1.0",
"destination": "PARTICIPANT USER DATASTORE",
"resourceServer": null,
"userIp": "117.211.20.33",
"description": "Site added to study (site ID- 2c91808977290f8f017729bf13eb0006).",
"appId": "2c91808876fa9409017706d11ea10028"
}
```
|
process
|
studyid siteid and appid is incorrect for the events issue studyid siteid and appid is incorrect for the events site added for study participant email added participants email list imported participants email list import failed participants email list import partial failed site decommissioned for study site activated for study participant invitation disabled consent document downloaded invitation email sent invitation email failed participant invitation enabled enrollment target updated site participant registry viewed issue studyid and appid is incorrect for the event study participant registry viewed issue appid is incorrect for the event app participant registry viewed actual studyid siteid and appid displaying db value expected studyid siteid and appid should be custom ids i e ids entered from user in sb pm web app sample snippet for event site added for study event insertid jsonpayload correlationid useraccesslevel null eventcode site added for study platformversion source participant manager occurred mobileplatform unknown userid studyid destinationapplicationversion participantid null appversion studyversion null siteid sourceapplicationversion destination participant user datastore resourceserver null userip description site added to study site id appid
| 1
|
68,129
| 8,216,809,553
|
IssuesEvent
|
2018-09-05 10:18:42
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
closed
|
Transform "Fix Toolbar to Top" into "Focus Mode"
|
Needs Design Feedback [Type] Enhancement
|
Inspired by the "Focus" mode that @mtias introduced in #8920, I'd like to propose we try a different kind of Focus Mode.
## Problem
A key feedback point we hear is that Gutenberg’s interface can be a little overwhelming. This often comes from users who more commonly focus on "writing" versus "building" their posts. They find the contextual block controls and block hover states to be distracting: When they're focused on writing, they don't necessarily want to think about blocks — they just want to write.
Oftentimes, this subset of users also miss the common "formatting toolbar at the top of the page" paradigm that's present in Google Docs, Microsoft Word, and the Classic Editor.
I think we can introduce an alternate editing mode that addresses both these concerns for them.
## Suggested Solution
We already have a "Fix Toolbar to Top" option that moves the contextual block toolbar to the top of the page. For the user I described above, this is already a step towards the interface they're used to. It's also a good first step to decluttering the writing interface — relocating heavy UI to a less-distracting area of the screen.
I suggest we take that option further, and adapt it into a more complete "Focus Mode":

This new editing mode would consist of a collection of UI updates aimed at decluttering the interface so that the user can focus on writing their content.
▶️ **Video demo:** https://cloudup.com/cMr22auRtXC
## Details
Focus Mode would be activated via the "More" menu. To accommodate this new mode, I propose renaming the "Fix Toolbar to Top" option to "Focus Mode" and including this as a new "Writing" option:

_(Since this would leave "Show Tips" all alone under "Settings", I suggest moving it into the "Tools" section at the bottom)_
When users have this new mode active, the editor would include the following UI updates:
### 1. The block toolbar would be pinned to the top of the screen.
(This is an existing feature.)

### 2. The editor would be full screen.
This is one of the highest impact changes, and would be the default for this mode. Users could exit out of full screen mode — and retain all other features of Focus Mode — via a new toggle in the upper left of the toolbar.

### 3. Block outlines would be removed for both hover and selected states.
I initially thought this change would be confusing, but (as a power user myself) I find it quite usable. Since this is an optional mode, and this is a high-impact change in terms of eliminating distractions, I'm all for it.

### 4. The block label would appear on a delay, and be toned down visually.
This label is less essential in this mode, but including it will help with wayfinding. (The delay aspect of this change is already in progress in #9197)

### 5. Block mover + block options would also appear on a delay.
(For non-selected blocks). When a block is selected, they'll appear just as quickly as they usually do.
_Non-selected Blocks_

_Selected Blocks (This is the same as our current behavior)_

---
In case you missed it above, here's a short video demo to convey how these changes work in practice:
▶️ https://cloudup.com/cMr22auRtXC
## FAQ
I foresee a few likely questions to this approach, so I'll try to address them in advance:
- **How does this relate to the "Focus Mode" in #8920?** These can work together. The pattern of focusing in on a single block is compatible with both writing modes, so we could either allow it to be available in both modes, or we could make it an add-on feature for this Focus mode.
- **There are accessibility issues with not showing block borders.** That's very likely true. I'd reinforce the fact that this is a non-default, opt-in mode. If block borders and contextual controls are important to your use of Gutenberg, the default option will still be available. This direction would _not_ take that away.
- **Nested blocks will still require borders in this mode**. Yes, maybe they will! We'll need to address this, and I think it's reasonable to include some sort of minimal border in this case.
- **What about mobile?** We currently don't offer the "Fix Toolbar to Top" option on small screens, due to a [Safari issue noted here](https://github.com/WordPress/gutenberg/issues/7479#issuecomment-410988762). That will still be a problem for us. The other enhancements in this mode (full screen, hover state changes, etc.) would have little to no effect on mobile. For those reasons I suggest we limit this to larger screens only at this point.
Looking forward to thoughts and reactions. 🙂
|
1.0
|
Transform "Fix Toolbar to Top" into "Focus Mode" - Inspired by the "Focus" mode that @mtias introduced in #8920, I'd like to propose we try a different kind of Focus Mode.
## Problem
A key feedback point we hear is that Gutenberg’s interface can be a little overwhelming. This often comes from users who more commonly focus on "writing" versus "building" their posts. They find the contextual block controls and block hover states to be distracting: When they're focused on writing, they don't necessarily want to think about blocks — they just want to write.
Oftentimes, this subset of users also miss the common "formatting toolbar at the top of the page" paradigm that's present in Google Docs, Microsoft Word, and the Classic Editor.
I think we can introduce an alternate editing mode that addresses both these concerns for them.
## Suggested Solution
We already have a "Fix Toolbar to Top" option that moves the contextual block toolbar to the top of the page. For the user I described above, this is already a step towards the interface they're used to. It's also a good first step to decluttering the writing interface — relocating heavy UI to a less-distracting area of the screen.
I suggest we take that option further, and adapt it into a more complete "Focus Mode":

This new editing mode would consist of a collection of UI updates aimed at decluttering the interface so that the user can focus on writing their content.
▶️ **Video demo:** https://cloudup.com/cMr22auRtXC
## Details
Focus Mode would be activated via the "More" menu. To accommodate this new mode, I propose renaming the "Fix Toolbar to Top" option to "Focus Mode" and including this as a new "Writing" option:

_(Since this would leave "Show Tips" all alone under "Settings", I suggest moving it into the "Tools" section at the bottom)_
When users have this new mode active, the editor would include the following UI updates:
### 1. The block toolbar would be pinned to the top of the screen.
(This is an existing feature.)

### 2. The editor would be full screen.
This is one of the highest impact changes, and would be the default for this mode. Users could exit out of full screen mode — and retain all other features of Focus Mode — via a new toggle in the upper left of the toolbar.

### 3. Block outlines would be removed for both hover and selected states.
I initially thought this change would be confusing, but (as a power user myself) I find it quite usable. Since this is an optional mode, and this is a high-impact change in terms of eliminating distractions, I'm all for it.

### 4. The block label would appear on a delay, and be toned down visually.
This label is less essential in this mode, but including it will help with wayfinding. (The delay aspect of this change is already in progress in #9197)

### 5. Block mover + block options would also appear on a delay.
(For non-selected blocks). When a block is selected, they'll appear just as quickly as they usually do.
_Non-selected Blocks_

_Selected Blocks (This is the same as our current behavior)_

---
In case you missed it above, here's a short video demo to convey how these changes work in practice:
▶️ https://cloudup.com/cMr22auRtXC
## FAQ
I foresee a few likely questions to this approach, so I'll try to address them in advance:
- **How does this relate to the "Focus Mode" in #8920?** These can work together. The pattern of focusing in on a single block is compatible with both writing modes, so we could either allow it to be available in both modes, or we could make it an add-on feature for this Focus mode.
- **There are accessibility issues with not showing block borders.** That's very likely true. I'd reinforce the fact that this is a non-default, opt-in mode. If block borders and contextual controls are important to your use of Gutenberg, the default option will still be available. This direction would _not_ take that away.
- **Nested blocks will still require borders in this mode**. Yes, maybe they will! We'll need to address this, and I think it's reasonable to include some sort of minimal border in this case.
- **What about mobile?** We currently don't offer the "Fix Toolbar to Top" option on small screens, due to a [Safari issue noted here](https://github.com/WordPress/gutenberg/issues/7479#issuecomment-410988762). That will still be a problem for us. The other enhancements in this mode (full screen, hover state changes, etc.) would have little to no effect on mobile. For those reasons I suggest we limit this to larger screens only at this point.
Looking forward to thoughts and reactions. 🙂
|
non_process
|
transform fix toolbar to top into focus mode inspired by the focus mode that mtias introduced in i d like to propose we try a different kind of focus mode problem a key feedback point we hear is that gutenberg’s interface can be a little overwhelming this often comes from users who more commonly focus on writing versus building their posts they find the contextual block controls and block hover states to be distracting when they re focused on writing they don t necessarily want to think about blocks — they just want to write oftentimes this subset of users also miss the common formatting toolbar at the top of the page paradigm that s present in google docs microsoft word and the classic editor i think we can introduce an alternate editing mode that addresses both these concerns for them suggested solution we already have a fix toolbar to top option that moves the contextual block toolbar to the top of the page for the user i described above this is already a step towards the interface they re used to it s also a good first step to decluttering the writing interface — relocating heavy ui to a less disctracting area of the screen i suggest we take that option further and adapt it into a more complete focus mode this new editing mode would consist of a collection of ui updates aimed at decluttering the interface so that the user can focus on writing their content ▶️ video demo details focus mode would be activated via the more menu to accomodate this new mode i propose renaming the fix toolbar to top option to focus mode and including this as a new writing option since this would leave show tips all alone under settings i suggest moving it into the tools section at the bottom when users have this new mode active the editor would include the following ui updates the block toolbar would be pinned to the top of the screen this is an existing feature the editor would be full screen this is one of the highest impact changes and would be the default for this mode users could exit out of full screen mode — and retain all other features of focus mode — via a new toggle in the upper left of the toolbar block outlines would be removed for both hover and selected states i initially thought this change would be confusing but as a power user myself i find it quite usable since this is an optional mode and this is a high impact change in terms of eliminating distractions i m all for it the block label would appear on a delay and be toned down visually this label is less essential in this mode but including it will help with wayfinding the delay aspect of this change is already in progress in block mover block options would also appear on a delay for non selected blocks when a block is selected they ll appear just as quickly as they usually do non selected blocks selected blocks this is the same as our current behavior in case you missed it above here s a short video demo to convey how these changes work in practice ▶️ faq i foresee a few likely questions to this approach so i ll try to address them in advance how does this relate to the focus mode in these can work together the pattern of focusing in on a single block is compatible with both writing modes so we could either allow it to be available in both modes or we could make it an add on feature for this focus mode there are acessibility issues with not showing block borders that s very likely true i d reinforce the fact that this is a non default opt in mode if block borders and contextual controls are important to your use of gutenberg the default 
option will still be available this direction would not take that away nested blocks will still require borders in this mode yes maybe they will we ll need to address this and i think it s reasonable to include some sort of minimal border in this case what about mobile we currently don t offer the fix toolbar to top option on small screens due to a that will still be a problem for us the other enhancements in this mode full screen hover state changes etc would have little to no effect on mobile for those reasons i suggest we limit this to larger screens only at this point looking forward to thoughts and reactions 🙂
| 0
|
7,615
| 4,020,461,369
|
IssuesEvent
|
2016-05-16 18:30:00
|
mitchellh/packer
|
https://api.github.com/repos/mitchellh/packer
|
opened
|
Azure: Where's the Best Error Message
|
builder/azure
|
A customer experienced an issue when the *capture_name_prefix* was set to an unacceptable value (#3535). The method signature for the API call returns an HTTP response and an error. The Azure builder code discards the HTTP response, and checks the error only. The error's message is not intuitive, and does not return any information to help to debug the issue. The HTTP response is more helpful, but it is discarded.
The fix is for the builder code to check and surface both. (Checking the error only is sufficient to indicate there is an error with the API call.) I am tracking the [issue](https://github.com/Azure/azure-sdk-for-go/issues/328) with the Azure SDK team too, to see what their recommendations are.
|
1.0
|
Azure: Where's the Best Error Message - A customer experienced an issue when the *capture_name_prefix* was set to an unacceptable value (#3535). The method signature for the API call returns an HTTP response and an error. The Azure builder code discards the HTTP response, and checks the error only. The error's message is not intuitive, and does not return any information to help to debug the issue. The HTTP response is more helpful, but it is discarded.
The fix is for the builder code to check and surface both. (Checking the error only is sufficient to indicate there is an error with the API call.) I am tracking the [issue](https://github.com/Azure/azure-sdk-for-go/issues/328) with the Azure SDK team too, to see what their recommendations are.
|
non_process
|
azure where s the best error message a customer experienced an issue when the capture name prefix was set to an unacceptable value the method signature for the api call returns an http response and an error the azure builder code discards the http response and checks the error only the error s message is not intuitive and does not return any information to help to debug the issue the http response is more helpful but it is discarded the fix is for the builder code to check and surface both checking the error only is sufficient to indicate there is an error with the api call i am tracking the with the azure sdk team too to see what their recommendations are
| 0
|
19,053
| 25,068,200,200
|
IssuesEvent
|
2022-11-07 10:01:30
|
ESMValGroup/ESMValCore
|
https://api.github.com/repos/ESMValGroup/ESMValCore
|
closed
|
Anomaly calculation for OBS got broken early March.
|
bug preprocessor
|
**Describe the bug**
Unfortunately, it seems like none of the tests flagged this (something to look into later, I would say!). But for several observational datasets the calculation of anomalies goes wrong, with non-physical values coming out of the preprocessor. I could track the problem down to 3-4 March 2020, with everything working fine on the 3rd of March (`git checkout 'master@{2020-03-03}'`) and wrong results from 4 March onwards (`git checkout 'master@{2020-03-04}'`). To create the plots and run the recipe, one needs a specific ESMValTool branch: `git checkout C3S_511_MPQB`. But reproducing it and simply inspecting NetCDF output files would work as well, of course. Since the changes in the `anomalies` preprocessor were authored by @jvegasbsc, my hope is that he can solve this issue. It would be a good additional check if someone can reproduce the error (@hb326 @BenMGeo or @mattiarighi). The error also occurred for `ERA5`, a dataset that is more widely used. Tagging @hirschim just to keep you updated.
Fine:

_Generated using Copernicus Climate Change Service information 2020_
Wrong:

_Generated using Copernicus Climate Change Service information 2020_
Excerpt from the differences of `ESMValCore` between the 3rd and 4th of March 2020:
```
diff --git a/esmvalcore/preprocessor/_time.py b/esmvalcore/preprocessor/_time.py
index 1e29395e3..c1cf2071b 100644
--- a/esmvalcore/preprocessor/_time.py
+++ b/esmvalcore/preprocessor/_time.py
@@ -462,16 +462,15 @@ def anomalies(cube, period, reference):
cube_time = cube.coord('time')
ref = {}
for ref_slice in reference.slices_over(ref_coord):
- ref[ref_slice.coord(ref_coord).points[0]] = da.ravel(
- ref_slice.core_data())
+ ref[ref_slice.coord(ref_coord).points[0]] = ref_slice.core_data()
+
cube_coord_dim = cube.coord_dims(cube_coord)[0]
+ slicer = [slice(None)] * len(data.shape)
+ new_data = []
for i in range(cube_time.shape[0]):
- time = cube_time.points[i]
- indexes = cube_time.points == time
- indexes = iris.util.broadcast_to_shape(indexes, data.shape,
- (cube_coord_dim, ))
- data[indexes] = data[indexes] - ref[cube_coord.points[i]]
-
+ slicer[cube_coord_dim] = i
+ new_data.append(data[tuple(slicer)] - ref[cube_coord.points[i]])
+ data = da.stack(new_data, axis=cube_coord_dim)
cube = cube.copy(data)
cube.remove_coord(cube_coord)
return cube
commit f61cc0e946dd4cd1de02fb738046f5273db16025
Author: Javier Vegas <javier.vegas@bsc.es>
Date: Tue Jan 21 16:31:42 2020 +0100
Remove print and extra coordinate
diff --git a/esmvalcore/preprocessor/_time.py b/esmvalcore/preprocessor/_time.py
index 612bfa47a..1e29395e3 100644
--- a/esmvalcore/preprocessor/_time.py
+++ b/esmvalcore/preprocessor/_time.py
@@ -411,7 +411,6 @@ def climate_statistics(cube, operator='mean', period='full'):
operator = get_iris_analysis_operation(operator)
clim_cube = cube.aggregated_by(clim_coord, operator)
clim_cube.remove_coord('time')
- print(clim_cube)
if clim_cube.coord(clim_coord.name()).is_monotonic():
iris.util.promote_aux_coord_to_dim_coord(clim_cube, clim_coord.name())
else:
@@ -474,6 +473,7 @@ def anomalies(cube, period, reference):
data[indexes] = data[indexes] - ref[cube_coord.points[i]]
cube = cube.copy(data)
+ cube.remove_coord(cube_coord)
return cube
```
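For reference, a minimal NumPy sketch of what the `anomalies` step is supposed to compute (ESMValCore's real implementation uses iris cubes and dask, as in the diff above; the function and variable names here are hypothetical):

```python
# Subtract a monthly reference climatology from each timestep, matching
# timesteps to reference slices by month -- the operation the diff modifies.
import numpy as np

def monthly_anomalies(data, months, ref_data, ref_months):
    """data: (time, ...) array; months: month number per timestep;
    ref_data/ref_months: reference climatology and its month coordinate."""
    ref = {m: ref_data[i] for i, m in enumerate(ref_months)}
    # One anomaly slice per timestep, stacked along time as in the fixed code.
    return np.stack([data[i] - ref[m] for i, m in enumerate(months)], axis=0)

# Example: 24 monthly timesteps on a 2x2 grid; reference = 12 monthly means.
data = np.random.rand(24, 2, 2)
months = np.tile(np.arange(1, 13), 2)
ref_months = np.arange(1, 13)
ref_data = np.stack([data[months == m].mean(axis=0) for m in ref_months])
anomalies = monthly_anomalies(data, months, ref_data, ref_months)
assert anomalies.shape == data.shape
```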
Recipe:
```
# ESMValTool
# recipe_anom_bug.yml
---
documentation:
description: |
Recipe for demonstrating a bug. To get wrong results run `git checkout 'master@{2020-03-04}'` in ESMValCore dir. To get good results run `git checkout 'master@{2020-03-03}'` in ESMValCore dir. Use branch `C3S_511_MPQB` from ESMValTool.
authors:
- crezee_bas
################################################
# Define some default parameters using anchors #
################################################
commongrid: &commongrid
regrid:
target_grid: 0.25x0.25
scheme: nearest
regrid_time: # this is needed for a fully homogeneous time coordinate
frequency: mon
icefreeland: &icefreeland
mask_landsea:
mask_out: sea
mask_glaciated:
mask_out: glaciated
commonmask: &commonmask # should be preceded by commongrid
mask_fillvalues:
threshold_fraction: 0.0 # keep all missing values
min_value: -1e20 # small enough not to alter the data
nonnegative: &nonnegative
clip:
minimum: 0.0
################################################
################################################
################################################
datasets_from_1992_2019: &datasets_from_1992_2019
additional_datasets:
- {dataset: CDS-SATELLITE-SOIL-MOISTURE, type: sat, project: OBS, mip: Lmon,
version: CUSTOM-TCDR-ICDR-20200602, tier: 3, start_year: 2015, end_year: 2018}
# - {dataset: CDS-SATELLITE-SOIL-MOISTURE, project: OBS, tier: 3, type: sat,
# version: CUSTOM-TCDR-ICDR-20200602, start_year: 1992, end_year: 2019, mip: Lmon}
# - {dataset: cds-era5-land-monthly, type: reanaly, project: OBS, mip: Lmon,
# version: 1, tier: 3, start_year: 1992, end_year: 2019}
# - {dataset: cds-era5-monthly, type: reanaly, project: OBS, mip: Lmon,
# version: 1, tier: 3, start_year: 1992, end_year: 2019}
# - {dataset: MERRA2, type: reanaly, project: OBS6, mip: Lmon,
# version: 5.12.4, tier: 3, start_year: 1992, end_year: 2019}
preprocessors:
pp_lineplots_ano:
custom_order: true
<<: *icefreeland
<<: *commongrid
<<: *commonmask
<<: *nonnegative
anomalies:
period: monthly
reference: [2015,2018]
standardize: false
area_statistics:
operator: mean
diagnostics:
lineplots_ano:
variables:
sm:
preprocessor: pp_lineplots_ano
mip: Lmon
scripts:
lineplot:
script: mpqb/mpqb_lineplot.py
<<: *datasets_from_1992_2019
```
|
1.0
|
Anomaly calculation for OBS got broken early March. - **Describe the bug**
Unfortunately, it seems like none of the tests flagged this (something to look into later, I would say!). But for several observational datasets the calculation of anomalies goes wrong, with non-physical values coming out of the preprocessor. I could track the problem down to 3-4 March 2020, with everything working fine on the 3rd of March (`git checkout 'master@{2020-03-03}'`) and wrong results from 4 March onwards (`git checkout 'master@{2020-03-04}'`). To create the plots and run the recipe, one needs a specific ESMValTool branch: `git checkout C3S_511_MPQB`. But reproducing it and simply inspecting NetCDF output files would work as well, of course. Since the changes in the `anomalies` preprocessor were authored by @jvegasbsc, my hope is that he can solve this issue. It would be a good additional check if someone can reproduce the error (@hb326 @BenMGeo or @mattiarighi). The error also occurred for `ERA5`, a dataset that is more widely used. Tagging @hirschim just to keep you updated.
Fine:

_Generated using Copernicus Climate Change Service information 2020_
Wrong:

_Generated using Copernicus Climate Change Service information 2020_
Excerpt from the differences of `ESMValCore` between the 3rd and 4th of March 2020:
```
diff --git a/esmvalcore/preprocessor/_time.py b/esmvalcore/preprocessor/_time.py
index 1e29395e3..c1cf2071b 100644
--- a/esmvalcore/preprocessor/_time.py
+++ b/esmvalcore/preprocessor/_time.py
@@ -462,16 +462,15 @@ def anomalies(cube, period, reference):
cube_time = cube.coord('time')
ref = {}
for ref_slice in reference.slices_over(ref_coord):
- ref[ref_slice.coord(ref_coord).points[0]] = da.ravel(
- ref_slice.core_data())
+ ref[ref_slice.coord(ref_coord).points[0]] = ref_slice.core_data()
+
cube_coord_dim = cube.coord_dims(cube_coord)[0]
+ slicer = [slice(None)] * len(data.shape)
+ new_data = []
for i in range(cube_time.shape[0]):
- time = cube_time.points[i]
- indexes = cube_time.points == time
- indexes = iris.util.broadcast_to_shape(indexes, data.shape,
- (cube_coord_dim, ))
- data[indexes] = data[indexes] - ref[cube_coord.points[i]]
-
+ slicer[cube_coord_dim] = i
+ new_data.append(data[tuple(slicer)] - ref[cube_coord.points[i]])
+ data = da.stack(new_data, axis=cube_coord_dim)
cube = cube.copy(data)
cube.remove_coord(cube_coord)
return cube
commit f61cc0e946dd4cd1de02fb738046f5273db16025
Author: Javier Vegas <javier.vegas@bsc.es>
Date: Tue Jan 21 16:31:42 2020 +0100
Remove print and extra coordinate
diff --git a/esmvalcore/preprocessor/_time.py b/esmvalcore/preprocessor/_time.py
index 612bfa47a..1e29395e3 100644
--- a/esmvalcore/preprocessor/_time.py
+++ b/esmvalcore/preprocessor/_time.py
@@ -411,7 +411,6 @@ def climate_statistics(cube, operator='mean', period='full'):
operator = get_iris_analysis_operation(operator)
clim_cube = cube.aggregated_by(clim_coord, operator)
clim_cube.remove_coord('time')
- print(clim_cube)
if clim_cube.coord(clim_coord.name()).is_monotonic():
iris.util.promote_aux_coord_to_dim_coord(clim_cube, clim_coord.name())
else:
@@ -474,6 +473,7 @@ def anomalies(cube, period, reference):
data[indexes] = data[indexes] - ref[cube_coord.points[i]]
cube = cube.copy(data)
+ cube.remove_coord(cube_coord)
return cube
```
Recipe:
```
# ESMValTool
# recipe_anom_bug.yml
---
documentation:
description: |
Recipe for demonstrating a bug. To get wrong results run `git checkout 'master@{2020-03-04}'` in ESMValCore dir. To get good results run `git checkout 'master@{2020-03-03}'` in ESMValCore dir. Use branch `C3S_511_MPQB` from ESMValTool.
authors:
- crezee_bas
################################################
# Define some default parameters using anchors #
################################################
commongrid: &commongrid
regrid:
target_grid: 0.25x0.25
scheme: nearest
regrid_time: # this is needed for a fully homogeneous time coordinate
frequency: mon
icefreeland: &icefreeland
mask_landsea:
mask_out: sea
mask_glaciated:
mask_out: glaciated
commonmask: &commonmask # should be preceded by commongrid
mask_fillvalues:
threshold_fraction: 0.0 # keep all missing values
min_value: -1e20 # small enough not to alter the data
nonnegative: &nonnegative
clip:
minimum: 0.0
################################################
################################################
################################################
datasets_from_1992_2019: &datasets_from_1992_2019
additional_datasets:
- {dataset: CDS-SATELLITE-SOIL-MOISTURE, type: sat, project: OBS, mip: Lmon,
version: CUSTOM-TCDR-ICDR-20200602, tier: 3, start_year: 2015, end_year: 2018}
# - {dataset: CDS-SATELLITE-SOIL-MOISTURE, project: OBS, tier: 3, type: sat,
# version: CUSTOM-TCDR-ICDR-20200602, start_year: 1992, end_year: 2019, mip: Lmon}
# - {dataset: cds-era5-land-monthly, type: reanaly, project: OBS, mip: Lmon,
# version: 1, tier: 3, start_year: 1992, end_year: 2019}
# - {dataset: cds-era5-monthly, type: reanaly, project: OBS, mip: Lmon,
# version: 1, tier: 3, start_year: 1992, end_year: 2019}
# - {dataset: MERRA2, type: reanaly, project: OBS6, mip: Lmon,
# version: 5.12.4, tier: 3, start_year: 1992, end_year: 2019}
preprocessors:
pp_lineplots_ano:
custom_order: true
<<: *icefreeland
<<: *commongrid
<<: *commonmask
<<: *nonnegative
anomalies:
period: monthly
reference: [2015,2018]
standardize: false
area_statistics:
operator: mean
diagnostics:
lineplots_ano:
variables:
sm:
preprocessor: pp_lineplots_ano
mip: Lmon
scripts:
lineplot:
script: mpqb/mpqb_lineplot.py
<<: *datasets_from_1992_2019
```
|
process
|
anomaly calculation for obs got broken early march describe the bug unfortunately it seems like none of the tests has flagged something to look into later i would say but for several observational datasets the calculation of anomalies goes wrong with non physical values coming out of the preprocessor i could track down the problems to march with everything working fine on the of march git checkout master and wrong results from march onwards git checkout master to create the plots and run the recipe one needs a specific esmvaltool branch git checkout mpqb but reproducing it and simply inspecting netcdf output files would work as well of course since the changes in the anomalies preprocessor were authored by jvegasbsc my hope is that he can solve this issue it would be a good additional check if someone can reproduce the error benmgeo or mattiarighi the error also occurred for a dataset that is more widely used tagging hirschim just to keep you updated fine generated using copernicus climate change service information wrong generated using copernicus climate change service information excerpt from the differences of esmvalcore between and of march diff git a esmvalcore preprocessor time py b esmvalcore preprocessor time py index a esmvalcore preprocessor time py b esmvalcore preprocessor time py def anomalies cube period reference cube time cube coord time ref for ref slice in reference slices over ref coord ref da ravel ref slice core data ref ref slice core data cube coord dim cube coord dims cube coord slicer len data shape new data for i in range cube time shape time cube time points indexes cube time points time indexes iris util broadcast to shape indexes data shape cube coord dim data data ref slicer i new data append data ref data da stack new data axis cube coord dim cube cube copy data cube remove coord cube coord return cube commit author javier vegas date tue jan remove print and extra coordinate diff git a esmvalcore preprocessor time py b esmvalcore preprocessor time py index a esmvalcore preprocessor time py b esmvalcore preprocessor time py def climate statistics cube operator mean period full operator get iris analysis operation operator clim cube cube aggregated by clim coord operator clim cube remove coord time print clim cube if clim cube coord clim coord name is monotonic iris util promote aux coord to dim coord clim cube clim coord name else def anomalies cube period reference data data ref cube cube copy data cube remove coord cube coord return cube recipe esmvaltool recipe anom bug yml documentation description recipe for demonstrating a bug to get wrong results run git checkout master in esmvalcore dir to get good results run git checkout master in esmvalcore dir use branch mpqb from esmvaltool authors crezee bas define some default parameters using anchors commongrid commongrid regrid target grid scheme nearest regrid time this is needed for a fully homogeneous time coordinate frequency mon icefreeland icefreeland mask landsea mask out sea mask glaciated mask out glaciated commonmask commonmask should be preceded by commongrid mask fillvalues threshold fraction keep all missing values min value small enough not to alter the data nonnegative nonnegative clip minimum datasets from datasets from additional datasets dataset cds satellite soil moisture type sat project obs mip lmon version custom tcdr icdr tier start year end year dataset cds satellite soil moisture project obs tier type sat version custom tcdr icdr start year end year mip lmon dataset cds land monthly 
type reanaly project obs mip lmon version tier start year end year dataset cds monthly type reanaly project obs mip lmon version tier start year end year dataset type reanaly project mip lmon version tier start year end year preprocessors pp lineplots ano custom order true icefreeland commongrid commonmask nonnegative anomalies period monthly reference standardize false area statistics operator mean diagnostics lineplots ano variables sm preprocessor pp lineplots ano mip lmon scripts lineplot script mpqb mpqb lineplot py datasets from
| 1
|
20,781
| 27,518,422,464
|
IssuesEvent
|
2023-03-06 13:34:04
|
oda-hub/dispatcher-app
|
https://api.github.com/repos/oda-hub/dispatcher-app
|
opened
|
if we have several frontends, each requesting the same dispatcher, what is the proper value for product_url option?
|
multi-process
|
asked by @dsavchenko
https://github.com/oda-hub/oda_api/issues/189
|
1.0
|
if we have several frontends, each requesting the same dispatcher, what is the proper value for product_url option? - asked by @dsavchenko
https://github.com/oda-hub/oda_api/issues/189
|
process
|
if we have several frontends each requesting the same dispatcher what is the proper value for product url option asked by dsavchenko
| 1
|
22,690
| 15,378,073,333
|
IssuesEvent
|
2021-03-02 17:51:26
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
Salesforce-GIBFT Connection for vets-api in STAGING not working
|
backend external-request infrastructure operations security
|
## Description
Salesforce-GIBFT Connection for vets-api in STAGING is not working and VSP Operations DevOps engineers have been brought in to help debug and fix the connection. Users of the front end application are receiving an error when trying to submit data through the GI Bill Feedback Tool (https://staging.va.gov/education/submit-school-feedback/introduction)
## Background/context/resources
- rotating variable consumer key
- consumer key gets updated periodically (monthly?)
- STAGING to UAT environment was previous connection
- STAGING to REG environment is new connection
- [currently open pr/branch in devops with new changes](https://github.com/department-of-veterans-affairs/devops/pull/8449/files)
- [merged pr in vets-api to update environment naming scheme](https://github.com/department-of-veterans-affairs/vets-api/pull/5807)
- [knowledge dump left for us by Johnny Holton](https://github.com/department-of-veterans-affairs/va.gov-team/issues/14921)
## Technical notes
- @jbritt1 has worked w/ @lihanli, @LindseySaari, and @dginther to try and debug the connection
- Jeremy has been in touch with external teams to try and resolve this issue
- Consumer key is now up to date with exact value provided by Salesforce
- The [cert signing key](https://github.com/department-of-veterans-affairs/devops/blob/master/ansible/deployment/config/vets-api-server-vagov-staging.yml#L182) also had to be changed from the one we use in STAGING, to the one that we use for DEV (ref: https://dsva.slack.com/archives/CJYRZK2HH/p1611956508150000?thread_ts=1611872226.101200&cid=CJYRZK2HH)
- [Invalid header article found on stackexchange ](https://salesforce.stackexchange.com/questions/234316/messageinvalid-header-type-errorcodeinvalid-auth-header-received)
- [ Sentry errors found that may be related](http://sentry.vfs.va.gov/organizations/vsp/issues/5210/?environment=staging&query=is%3Aunresolved&statsPeriod=24h)
- This connection WORKS when using the rails console to trigger the action (forcing a query / POST from rails console)
- Example of past rails console commands that have worked:
```
Gibft::Service::CONSUMER_KEY = 'REDACTED'
Gibft::Service::SALESFORCE_USERNAME = 'vetsgov-devops-ci-feedback@listserv.gsa.gov.reg'
Gibft::Configuration::SALESFORCE_INSTANCE_URL = 'https://va--reg.my.salesforce.com/'
service = Gibft::Service.new
body = service.send(:request, :post, '', service.oauth_params).body
```
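For context on what the rotating consumer key and cert signing key feed into: this connection uses Salesforce's OAuth 2.0 JWT bearer flow. A minimal Python sketch of that token request is below; the login host, key path, and claim values are illustrative assumptions, not the project's actual configuration.
```python
# Sketch of the Salesforce OAuth 2.0 JWT bearer token request that the
# consumer key and cert signing key above are used for. All concrete
# values (host, username, key path) are hypothetical.
import time

import jwt       # PyJWT
import requests

claims = {
    "iss": "CONSUMER_KEY_HERE",                        # connected-app consumer key
    "sub": "vetsgov-devops-ci-feedback@listserv.gsa.gov.reg",
    "aud": "https://test.salesforce.com",              # sandbox login host
    "exp": int(time.time()) + 300,
}
assertion = jwt.encode(claims, open("signing_key.pem").read(), algorithm="RS256")

resp = requests.post(
    "https://test.salesforce.com/services/oauth2/token",
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": assertion,
    },
)
print(resp.status_code, resp.json())   # expect access_token + instance_url on success
```
Comparing the assertion built here with the one the service builds is a quick way to spot a mismatched signing key, consumer key, or audience.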
---
## Tasks
- [ ] External team wants to meet again to continue troubleshooting
## Definition of Done
- [ ] Salesforce-GIBFT connection for vets-api will be working in STAGING
---
### Reminders
- [X] Please attach your team label and any other appropriate label(s)
- [X] Please attach the needs grooming tag if needed
- [X] Please connect to an epic
|
1.0
|
Salesforce-GIBFT Connection for vets-api in STAGING not working - ## Description
Salesforce-GIBFT Connection for vets-api in STAGING is not working and VSP Operations DevOps engineers have been brought in to help debug and fix the connection. Users of the front end application are receiving an error when trying to submit data through the GI Bill Feedback Tool (https://staging.va.gov/education/submit-school-feedback/introduction)
## Background/context/resources
- rotating variable consumer key
- consumer key gets updated periodically (monthly?)
- STAGING to UAT environment was previous connection
- STAGING to REG environment is new connection
- [currently open pr/branch in devops with new changes](https://github.com/department-of-veterans-affairs/devops/pull/8449/files)
- [merged pr in vets-api to update environment naming scheme](https://github.com/department-of-veterans-affairs/vets-api/pull/5807)
- [knowledge dump left for us by Johnny Holton](https://github.com/department-of-veterans-affairs/va.gov-team/issues/14921)
## Technical notes
- @jbritt1 has worked w/ @lihanli, @LindseySaari, and @dginther to try and debug the connection
- Jeremy has been in touch with external teams to try and resolve this issue
- Consumer key is now up to date with exact value provided by Salesforce
- The [cert signing key](https://github.com/department-of-veterans-affairs/devops/blob/master/ansible/deployment/config/vets-api-server-vagov-staging.yml#L182) also had to be changed from the one we use in STAGING, to the one that we use for DEV (ref: https://dsva.slack.com/archives/CJYRZK2HH/p1611956508150000?thread_ts=1611872226.101200&cid=CJYRZK2HH)
- [Invalid header article found on stackexchange ](https://salesforce.stackexchange.com/questions/234316/messageinvalid-header-type-errorcodeinvalid-auth-header-received)
- [ Sentry errors found that may be related](http://sentry.vfs.va.gov/organizations/vsp/issues/5210/?environment=staging&query=is%3Aunresolved&statsPeriod=24h)
- This connection WORKS when using the rails console to trigger the action (forcing a query / POST from rails console)
- Example of past rails console commands that have worked:
```
Gibft::Service::CONSUMER_KEY = 'REDACTED'
Gibft::Service::SALESFORCE_USERNAME = 'vetsgov-devops-ci-feedback@listserv.gsa.gov.reg'
Gibft::Configuration::SALESFORCE_INSTANCE_URL = 'https://va--reg.my.salesforce.com/'
service = Gibft::Service.new
body = service.send(:request, :post, '', service.oauth_params).body
```
---
## Tasks
- [ ] External team wants to meet again to continue troubleshooting
## Definition of Done
- [ ] Salesforce-GIBFT connection for vets-api will be working in STAGING
---
### Reminders
- [X] Please attach your team label and any other appropriate label(s)
- [X] Please attach the needs grooming tag if needed
- [X] Please connect to an epic
|
non_process
|
salesforce gibft connection for vets api in staging not working description salesforce gibft connection for vets api in staging is not working and vsp operations devops engineers have been brought in to help debug and fix the connection users of the front end application are receiving an error when trying to submit data through the gi bill feedback tool background context resources rotating variable consumer key consumer key gets updated periodically monthly staging to uat environment was previous connection staging to reg environment is new connection technical notes has worked w lihanli lindseysaari and dginther to try and debug the connection jeremy has been in touch with external teams to try and resolve this issue consumer key is now up to date with exact value provided by salesforce the also had to be changed from the one we use in staging to the one that we use for dev ref this connection works when using the rails console to trigger the action forcing a query post from rails console example of past rails console commands that have worked gibft service consumer key redacted gibft service salesforce username vetsgov devops ci feedback listserv gsa gov reg gibft configuration salesforce instance url service gibft service new body service send request post service oauth params body tasks external team wants to meet again to continue troubleshooting definition of done salesforce gibft connection for vets api will be working in staging reminders please attach your team label and any other appropriate label s please attach the needs grooming tag if needed please connect to an epic
| 0
|
44,709
| 11,493,613,457
|
IssuesEvent
|
2020-02-11 23:27:30
|
ShabadOS/database
|
https://api.github.com/repos/ShabadOS/database
|
opened
|
Bot transformations on data via command line commit?
|
Priority: 2 Medium Scope: Build Status: In Research Type: Question
|
Let's say someone wants to make a mass change across all of the gurmukhi (renaming the field from "gurmukhi" to "bani" or such), or fixes a typo in the author's name. It affects like 4000 files, the transformation is applied locally, and then committed and diffed on GH.
Anyone looking at this might be confused on how/why, and we can't easily check that there wasn't a problem during this transformation, short of spotting something wrong with the data (which requires going through all the lines in the commit and this takes a very long time...)
So, can we design a process whereby someone instead commits what they'd like to happen, and let a machine do the transformation in a separate commit.
Then, the intent is clear, and you're able to focus on reviewing the transformer, instead of going through a tonne of lines individually
_Interpreted by @Harjot1Singh_
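A minimal sketch of the kind of machine-applied transformation being proposed — a declarative field rename over a tree of JSON files; the directory layout and field names are assumptions for illustration only.
```python
# Sketch: rename a field ("gurmukhi" -> "bani") across every JSON file in
# a directory tree, so reviewers audit this transformer instead of ~4000
# diffed lines. Paths and schema are hypothetical.
import json
from pathlib import Path

def rename_field(doc: dict, old: str, new: str) -> dict:
    """Move doc[old] to doc[new] if present, leaving other keys untouched."""
    if old in doc:
        doc[new] = doc.pop(old)
    return doc

for path in Path("data").rglob("*.json"):
    doc = json.loads(path.read_text(encoding="utf-8"))
    doc = rename_field(doc, "gurmukhi", "bani")
    path.write_text(json.dumps(doc, ensure_ascii=False, indent=2) + "\n",
                    encoding="utf-8")
```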
|
1.0
|
Bot transformations on data via command line commit? - Let's say someone wants to make a mass change across all of the gurmukhi (renaming the field from "gurmukhi" to "bani" or such), or fixes a typo in the author's name. It affects like 4000 files, the transformation is applied locally, and then committed and diffed on GH.
Anyone looking at this might be confused on how/why, and we can't easily check that there wasn't a problem during this transformation, short of spotting something wrong with the data (which requires going through all the lines in the commit and this takes a very long time...)
So, can we design a process whereby someone instead commits what they'd like to happen, and let a machine do the transformation in a separate commit.
Then, the intent is clear, and you're able to focus on reviewing the transformer, instead of going through a tonne of lines individually
_Interpreted by @Harjot1Singh_
|
non_process
|
bot transformations on data via command line commit let s say someone wants to make a mass change across all of the gurmukhi renaming the field from gurmukhi to bani or such or fixes a typo in the author s name it affects like files the transformation is applied locally and then committed and diffed on gh anyone looking at this might be confused on how why and we can t easily check that there wasn t a problem during this transformation short of spotting something wrong with the data which requires going through all the lines in the commit and this takes a very long time so can we design a process whereby someone instead commits what they d like to happen and let a machine do the transformation in a separate commit then the intent is clear and you re able to focus on reviewing the transformer instead of going through a tonne of lines individually interpreted by
| 0
|
9,790
| 12,805,793,127
|
IssuesEvent
|
2020-07-03 08:13:58
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Add new line checks to integration tests
|
kind/improvement process/candidate team/typescript topic: tests
|
Due to differences in operating systems / environment, I could not get the recent change of [adding trailing empty lines](https://github.com/prisma/prisma-engines/commit/ea035543e59571161e00ccd4063f5638283bfba7) working in the integration tests.
Instead for now I added a `.trim()` call to those tests to make them work.
We should look into it and find a way to get the tests running without `.trim()`
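One hedged direction (helper names invented, not the project's actual test utilities): normalize platform line endings in the comparison instead of trimming, so the trailing empty line stays part of the assertion.
```python
# Sketch: compare generated output against a fixture with CRLF collapsed
# to LF but trailing newlines preserved, so `.trim()` is no longer needed.
def normalize_eol(s: str) -> str:
    """Collapse Windows line endings to '\n' without touching anything else."""
    return s.replace("\r\n", "\n")

def assert_matches(actual: str, expected: str) -> None:
    assert normalize_eol(actual) == normalize_eol(expected)

# A trailing empty line is now part of the contract:
assert_matches("model User {}\n", "model User {}\r\n")
```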
|
1.0
|
Add new line checks to integration tests - Due to differences in operating systems / environment, I could not get the recent change of [adding trailing empty lines](https://github.com/prisma/prisma-engines/commit/ea035543e59571161e00ccd4063f5638283bfba7) working in the integration tests.
Instead for now I added a `.trim()` call to those tests to make them work.
We should look into it and find a way to get the tests running without `.trim()`
|
process
|
add new line checks to integration tests due to differences in operating systems environment i could not get the recent change of working in the integration tests instead for now i added a trim call to those tests to make them work we should look into it and find a way to get the tests running without trim
| 1
|
22,753
| 32,074,922,908
|
IssuesEvent
|
2023-09-25 10:21:31
|
equinor/flyt
|
https://api.github.com/repos/equinor/flyt
|
closed
|
Add Threat Modeling to issue templates
|
process improvement
|
Add threat modeling as a development checklist point in issue templates to make sure we continuously perform threat modeling.
|
1.0
|
Add Threat Modeling to issue templates - Add threat modeling as a development checklist point in issue templates to make sure we continuously perform threat modeling.
|
process
|
add threat modeling to issue templates add threat modeling as a development checklist point in issue templates to make sure we continuously perform threat modeling
| 1
|
3,189
| 6,259,501,777
|
IssuesEvent
|
2017-07-14 18:13:12
|
PeaceGeeksSociety/salesforce
|
https://api.github.com/repos/PeaceGeeksSociety/salesforce
|
opened
|
Publicize PeaceTalks, hackathons, volunteer/job opportunities to garner participants
|
Community Processes Recruitment Processes
|
We would like to publicize PeaceTalks, hackathons and volunteer/job opportunities with PG for the purpose of garnering participants.
Done when:
- PeaceTalks: filter and send mail merge emails to PeaceTalks marketers, including: relevant student associations, academic departments, relevant professors, community groups, non-profits (generally those we have partnered with in the past)
- Hackathons: filter and send mail merge emails to Hackathon marketers, including: tech partners, relevant student associations, academic departments (Computer Science), coding schools
- Volunteer/job opportunities: filter and send mail merge emails to volunteer/job opportunity marketers, including: university career centres, relevant (to the job) student associations and academic departments/schools
|
2.0
|
Publicize PeaceTalks, hackathons, volunteer/job opportunities to garner participants - We would like to publicize PeaceTalks, hackathons and volunteer/job opportunities with PG for the purpose of garnering participants.
Done when:
- PeaceTalks: filter and send mail merge emails to PeaceTalks marketers, including: relevant student associations, academic departments, relevant professors, community groups, non-profits (generally those we have partnered with in the past)
- Hackathons: filter and send mail merge emails to Hackathon marketers, including: tech partners, relevant student associations, academic departments (Computer Science), coding schools
- Volunteer/job opportunities: filter and send mail merge emails to volunteer/job opportunity marketers, including: university career centres, relevant (to the job) student associations and academic departments/schools
|
process
|
publicize peacetalks hackathons volunteer job opportunities to garner participants we would like to publicize peacetalks hackathons and volunteer job opportunities with pg for the purpose of garnering participants done when peacetalks filter and send mail merge emails to peacetalks marketers including relevant student associations academic departments relevant professors community groups non profits generally those we have partnered with in the past hackathons filter and send mail merge emails to hackathon marketers including tech partners relevant student associations academic departments computer science coding schools volunteer job opportunities filter and send mail merge emails to volunteer job opportunity marketers including university career centres relevant to the job student associations and academic departments schools
| 1
|
11,467
| 14,289,748,127
|
IssuesEvent
|
2020-11-23 19:45:44
|
Maximus5/ConEmu
|
https://api.github.com/repos/Maximus5/ConEmu
|
closed
|
"Failed to create/open registry key" popup
|
processes
|
### Versions
ConEmu build: 201101 x64
OS version: Windows 10 x64
### Problem description
I've manually denied permission "Create Subkey" for HKCU\Console.
The reason is that I don't want every single app to have its own console settings permanently changed and stored there when I change their console properties temporarily, e.g. to make the window bigger.
When I start ConEmu, it shows the following popup window:
```
---------------------------
ConEmu 201101 [64]
---------------------------
Failed to create/open registry key
[HKCU\Console\ConEmu]
LastError=0x00000005
Access is denied.
---------------------------
OK
---------------------------
```
After closing this popup it immediately shows another, almost identical:
```
---------------------------
ConEmu 201101 [64]
---------------------------
Failed to create/open registry key 'HKCU\Console\ConEmu'
LastError=0x00000005
Access is denied.
---------------------------
OK
---------------------------
```
Is there a real need to:
- create that particular key on program startup?
- bother the user with these popups, especially given that there was no attempt to modify & save settings or something like that?
### Steps to reproduce
1. Deny permission "Create Subkey" for HKCU\Console using Regedit.
2. Start ConEmu.
### Actual results
Two annoying blocking popups.
### Expected results
No popups.
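For reference, a minimal Python sketch (not part of the report) of the operation that fails once "Create Subkey" is denied; Windows-only, with the key path taken from the popup text.
```python
# Sketch: with "Create Subkey" denied on HKCU\Console, creating the
# ConEmu subkey fails with WinError 5, matching LastError=0x00000005.
# Windows-only; assumes the permission change described above.
import winreg

try:
    winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"Console\ConEmu")
except PermissionError as exc:
    print(f"create/open failed as reported: {exc}")
```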
|
1.0
|
"Failed to create/open registry key" popup - ### Versions
ConEmu build: 201101 x64
OS version: Windows 10 x64
### Problem description
I've manually denied permission "Create Subkey" for HKCU\Console.
The reason is that I don't want every single app to have its own console settings permanently changed and stored there when I change their console properties temporarily, e.g. to make the window bigger.
When I start ConEmu, it shows the following popup window:
```
---------------------------
ConEmu 201101 [64]
---------------------------
Failed to create/open registry key
[HKCU\Console\ConEmu]
LastError=0x00000005
Access is denied.
---------------------------
OK
---------------------------
```
After closing this popup it immediately shows another, almost identical:
```
---------------------------
ConEmu 201101 [64]
---------------------------
Failed to create/open registry key 'HKCU\Console\ConEmu'
LastError=0x00000005
Access is denied.
---------------------------
OK
---------------------------
```
Is there a real need to:
- create that particular key on program startup?
- bother the user with these popups, especially given that there was no attempt to modify & save settings or something like that?
### Steps to reproduce
1. Deny permission "Create Subkey" for HKCU\Console using Regedit.
2. Start ConEmu.
### Actual results
Two annoying blocking popups.
### Expected results
No popups.
|
process
|
failed to create open registry key popup versions conemu build os version windows problem description i ve manually denied permission create subkey for hkcu console the reason is that i don t want every single app to have its own console settings permanently changed and stored there when i change their console properties temporarily e g to make the window bigger when i start conemu it shows the following popup window conemu failed to create open registry key lasterror access is denied ok after closing this popup it immediately shows another almost identical conemu failed to create open registry key hkcu console conemu lasterror access is denied ok is there a real need to create that particular key on program startup bother the user with these popups especially given that there was no attempt to modify save settings or something like that steps to reproduce deny permission create subkey for hkcu console using regedit start conemu actual results two annoying blocking popups expected results no popups
| 1
|
10,490
| 13,257,910,830
|
IssuesEvent
|
2020-08-20 14:42:26
|
kubeflow/kubeflow
|
https://api.github.com/repos/kubeflow/kubeflow
|
closed
|
PodDefaults needs an OWNERS file
|
area/community kind/process priority/p1
|
/kind process
The directory for the PodDefaults controller is missing an OWNERS file
https://github.com/kubeflow/kubeflow/tree/master/components/admission-webhook
We should also consider renaming the directory.
@yanniszark mentioned in kubeflow/community#381 he might be willing to be an OWNER.
@discordianfish might also be interested.
|
1.0
|
PodDefaults needs an OWNERS file - /kind process
The directory for the PodDefaults controller is missing an OWNERS file
https://github.com/kubeflow/kubeflow/tree/master/components/admission-webhook
We should also consider renaming the directory.
@yanniszark mentioned in kubeflow/community#381 he might be willing to be an OWNER.
@discordianfish might also be interested.
|
process
|
poddefaults needs an owners file kind process the directory for the poddefaults controller is missing an owners file we should also consider renaming the directory yanniszark mentioned in kubeflow community he might be willing to be an owner discordianfish might also be interested
| 1
|
10,083
| 13,044,161,980
|
IssuesEvent
|
2020-07-29 03:47:28
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `SubTimeDurationNull` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `SubTimeDurationNull` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `SubTimeDurationNull` from TiDB -
## Description
Port the scalar function `SubTimeDurationNull` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function subtimedurationnull from tidb description port the scalar function subtimedurationnull from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
19,931
| 26,397,570,712
|
IssuesEvent
|
2023-01-12 20:58:45
|
GoogleCloudPlatform/spring-cloud-gcp
|
https://api.github.com/repos/GoogleCloudPlatform/spring-cloud-gcp
|
closed
|
Missing information about Spring Cloud GCP 3.4.1
|
priority: p1 type: process
|
I noticed that Spring Cloud GCP 3.4.1 was released to Maven Central last week: https://search.maven.org/artifact/com.google.cloud/spring-cloud-gcp-dependencies
However, there is no mention of version 3.4.1 on https://github.com/GoogleCloudPlatform/spring-cloud-gcp, which still lists 3.4.0 as the current version.
Version 3.4.1 is also not mentioned in the [releases](https://github.com/GoogleCloudPlatform/spring-cloud-gcp/releases), [tags](https://github.com/GoogleCloudPlatform/spring-cloud-gcp/tags) or [milestones](https://github.com/GoogleCloudPlatform/spring-cloud-gcp/milestones) overviews.
Where can I find more information about version 3.4.1?
|
1.0
|
Missing information about Spring Cloud GCP 3.4.1 - I noticed that Spring Cloud GCP 3.4.1 was released to Maven Central last week: https://search.maven.org/artifact/com.google.cloud/spring-cloud-gcp-dependencies
However, there is no mention of version 3.4.1 on https://github.com/GoogleCloudPlatform/spring-cloud-gcp, which still lists 3.4.0 as the current version.
Version 3.4.1 is also not mentioned in the [releases](https://github.com/GoogleCloudPlatform/spring-cloud-gcp/releases), [tags](https://github.com/GoogleCloudPlatform/spring-cloud-gcp/tags) or [milestones](https://github.com/GoogleCloudPlatform/spring-cloud-gcp/milestones) overviews.
Where can I find more information about version 3.4.1?
|
process
|
missing information about spring cloud gcp i noticed that spring cloud gcp was released to maven central last week however there is no mention of version on which still lists as the current version version is also not mentioned in the or overviews where can i find more information about version
| 1
|
447,795
| 31,722,711,891
|
IssuesEvent
|
2023-09-10 15:44:01
|
Reesfarrington/AGICyberSecTools-
|
https://api.github.com/repos/Reesfarrington/AGICyberSecTools-
|
opened
|
Lesson 1 GitHub Link on ipynb
|
documentation
|
So I'm new to the whole actual doings of this, but I would like a link to the GitHub (even though it's on GitHub) and the YouTube link as well.
|
1.0
|
Lesson 1 GitHub Link on ipynb - So I'm new to the whole actual doings of this, but I would like a link to the GitHub (even though it's on GitHub) and the YouTube link as well.
|
non_process
|
lesson github link on ipynb so i m new to the whole actual doings of this but i would like a link to the github even though it s on github and the youtube link as well
| 0
|
11,626
| 14,485,398,912
|
IssuesEvent
|
2020-12-10 17:32:15
|
Feryi/5a
|
https://api.github.com/repos/Feryi/5a
|
opened
|
fill_size_estimating_template
|
process_dashboard
|
- Fill in the estimate of lines of code in Process Dashboard and run the PROBE wizard
|
1.0
|
fill_size_estimating_template - - Fill in the estimate of lines of code in Process Dashboard and run the PROBE wizard
|
process
|
fill size estimating template fill in the estimate of lines of code in process dashboard and run the probe wizard
| 1
|
1,178
| 3,681,568,353
|
IssuesEvent
|
2016-02-24 04:11:52
|
18F/FEC
|
https://api.github.com/repos/18F/FEC
|
closed
|
Positive: Calendar features
|
processed
|
## What were you trying to do and how can we improve it?
I was checking out the cool new calendar features
## General feedback?
I like them!
## Tell us about yourself
I'm new to FEC.GOV
## Details
* URL: https://fec-proxy.18f.gov/calendar/
* User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:44.0) Gecko/20100101 Firefox/44.0
|
1.0
|
Positive: Calendar features -
## What were you trying to do and how can we improve it?
I was checking out the cool new calendar features
## General feedback?
I like them!
## Tell us about yourself
I'm new to FEC.GOV
## Details
* URL: https://fec-proxy.18f.gov/calendar/
* User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:44.0) Gecko/20100101 Firefox/44.0
|
process
|
positive calendar features what were you trying to do and how can we improve it i was checking out the cool new calendar features general feedback i like them tell us about yourself i m new to fec gov details url user agent mozilla macintosh intel mac os x rv gecko firefox
| 1
|
17,748
| 23,660,934,689
|
IssuesEvent
|
2022-08-26 15:30:08
|
carbon-design-system/ibm-cloud-cognitive
|
https://api.github.com/repos/carbon-design-system/ibm-cloud-cognitive
|
opened
|
Update release review guidelines and template
|
type: process improvement
|
Discussing with @matthewgallo, we found some areas where we could clarify or elaborate further in our release review process.
> The UI produced is accessible, responsive, translatable, cross-browser, and responds to the currently set Carbon theme.
This step requires several important checks but wraps them into a single bullet. These should be broken out to capture and set clearer expectations. e.g. _how do we check for translatability_? Accessibility may be broken into a separate review process as well.
> All significant DOM elements have meaningful classes.
Should add mention of following BEM as well.
|
1.0
|
Update release review guidelines and template - Discussing with @matthewgallo, we found some areas where we could clarify or elaborate further in our release review process.
> The UI produced is accessible, responsive, translatable, cross-browser, and responds to the currently set Carbon theme.
This step requires several important checks but wraps them into a single bullet. These should be broken out to capture and set clearer expectations. e.g. _how do we check for translatability_? Accessibility may be broken into a separate review process as well.
> All significant DOM elements have meaningful classes.
Should add mention of following BEM as well.
|
process
|
update release review guidelines and template discussing with matthewgallo we found some areas where we could clarify or elaborate further in our release review process the ui produced is accessible responsive translatable cross browser and responds to the currently set carbon theme this step requires several important checks but wraps them into a single bullet these should be broken out to capture and set clearer expectations e g how do we check for translatability accessibility may be broken into a separate review process as well all significant dom elements have meaningful classes should add mention of following bem as well
| 1
|
135,073
| 18,666,843,881
|
IssuesEvent
|
2021-10-30 01:08:43
|
capitalone/global-attribution-mapping
|
https://api.github.com/repos/capitalone/global-attribution-mapping
|
opened
|
CVE-2021-42343 (High) detected in dask-2021.2.0-py3-none-any.whl
|
security vulnerability
|
## CVE-2021-42343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dask-2021.2.0-py3-none-any.whl</b></p></summary>
<p>Parallel PyData with Task Scheduling</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/6c/1f/df19d049556ced7ea46e0cae397c18efd1664501b2042da321b8df81f2bf/dask-2021.2.0-py3-none-any.whl">https://files.pythonhosted.org/packages/6c/1f/df19d049556ced7ea46e0cae397c18efd1664501b2042da321b8df81f2bf/dask-2021.2.0-py3-none-any.whl</a></p>
<p>Path to dependency file: global-attribution-mapping</p>
<p>Path to vulnerable library: global-attribution-mapping</p>
<p>
Dependency Hierarchy:
- :x: **dask-2021.2.0-py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in Dask (aka python-dask) through 2021.09.1. Single machine Dask clusters started with dask.distributed.LocalCluster or dask.distributed.Client (which defaults to using LocalCluster) would mistakenly configure their respective Dask workers to listen on external interfaces (typically with a randomly selected high port) rather than only on localhost. A Dask cluster created using this method (when running on a machine that has an applicable port exposed) could be used by a sophisticated attacker to achieve remote code execution.
<p>Publish Date: 2021-10-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-42343>CVE-2021-42343</a></p>
</p>
</details>
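As a hedged illustration of the mitigation direction (not taken from the advisory itself): pin a single-machine cluster to loopback. `host` is a documented `LocalCluster` parameter, but verify the exact kwargs against the installed distributed version.
```python
# Sketch: bind a single-machine Dask cluster to localhost only, so its
# workers cannot be reached on external interfaces (mitigates the issue
# described above for pre-2021.10.1 versions). Kwargs are assumptions.
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(host="127.0.0.1")   # scheduler and workers bind to loopback
client = Client(cluster)
print(client.scheduler_info()["address"])  # expect tcp://127.0.0.1:<port>
```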
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-42343">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-42343</a></p>
<p>Release Date: 2021-10-26</p>
<p>Fix Resolution: dask - 2021.10.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-42343 (High) detected in dask-2021.2.0-py3-none-any.whl - ## CVE-2021-42343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dask-2021.2.0-py3-none-any.whl</b></p></summary>
<p>Parallel PyData with Task Scheduling</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/6c/1f/df19d049556ced7ea46e0cae397c18efd1664501b2042da321b8df81f2bf/dask-2021.2.0-py3-none-any.whl">https://files.pythonhosted.org/packages/6c/1f/df19d049556ced7ea46e0cae397c18efd1664501b2042da321b8df81f2bf/dask-2021.2.0-py3-none-any.whl</a></p>
<p>Path to dependency file: global-attribution-mapping</p>
<p>Path to vulnerable library: global-attribution-mapping</p>
<p>
Dependency Hierarchy:
- :x: **dask-2021.2.0-py3-none-any.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in Dask (aka python-dask) through 2021.09.1. Single machine Dask clusters started with dask.distributed.LocalCluster or dask.distributed.Client (which defaults to using LocalCluster) would mistakenly configure their respective Dask workers to listen on external interfaces (typically with a randomly selected high port) rather than only on localhost. A Dask cluster created using this method (when running on a machine that has an applicable port exposed) could be used by a sophisticated attacker to achieve remote code execution.
<p>Publish Date: 2021-10-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-42343>CVE-2021-42343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-42343">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-42343</a></p>
<p>Release Date: 2021-10-26</p>
<p>Fix Resolution: dask - 2021.10.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in dask none any whl cve high severity vulnerability vulnerable library dask none any whl parallel pydata with task scheduling library home page a href path to dependency file global attribution mapping path to vulnerable library global attribution mapping dependency hierarchy x dask none any whl vulnerable library found in base branch master vulnerability details an issue was discovered in dask aka python dask through single machine dask clusters started with dask distributed localcluster or dask distributed client which defaults to using localcluster would mistakenly configure their respective dask workers to listen on external interfaces typically with a randomly selected high port rather than only on localhost a dask cluster created using this method when running on a machine that has an applicable port exposed could be used by a sophisticated attacker to achieve remote code execution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution dask step up your open source security game with whitesource
| 0
|
15,605
| 19,728,131,972
|
IssuesEvent
|
2022-01-13 22:15:50
|
DSpace/dspace-angular
|
https://api.github.com/repos/DSpace/dspace-angular
|
closed
|
Scripts & processes without parameter
|
bug e/2 tools:processes
|
**Describe the bug**
When starting a process without a parameter, Angular refuses to start the process. This happens when no parameter was selected (even though one was available), or for a script that has no parameters at all.
**To Reproduce**
Steps to reproduce the behavior:
1. Open /processes/new in Angular
2. Start a process without selecting parameters
3. The save button doesn't work
Steps to reproduce the behavior for a script without any parameter:
1. Open /processes/new in Angular
2. Start a process which doesn't have any parameters
3. The save button doesn't work
**Expected behavior**
Processes should also start without parameters
|
1.0
|
Scripts & processes without parameter - **Describe the bug**
When starting a process without a parameter, Angular refuses to start the process. This happens when no parameter was selected (even though one was available), or for a script that has no parameters at all.
**To Reproduce**
Steps to reproduce the behavior:
1. Open /processes/new in Angular
2. Start a process without selecting parameters
3. The save button doesn't work
Steps to reproduce the behavior for a script without any parameter:
1. Open /processes/new in Angular
2. Start a process which doesn't have any parameters
3. The save button doesn't work
**Expected behavior**
Processes should also start without parameters
|
process
|
scripts processes without parameter describe the bug when starting a process without parameter angular refuses to start the process this happens when no parameter was selected but there was one or for a script without parameters to reproduce steps to reproduce the behavior open processes new in angular start a process without selecting parameters the save button doesn t work steps to reproduce the behavior for a script without any parameter open processes new in angular start a process which doesn t have any parameters the save button doesn t work expected behavior processes should also start without parameters
| 1
|
368,650
| 10,882,640,897
|
IssuesEvent
|
2019-11-18 01:22:21
|
ArcBlock/forge-cli
|
https://api.github.com/repos/ArcBlock/forge-cli
|
closed
|
support custom config by a config file when creating a chain
|
priority/blocking-dev
|
And some commands should support --yes param for automation:
1. forge chain:create
1. forge config
|
1.0
|
support custom config by a config file when creating a chain - And some commands should support --yes param for automation:
1. forge chain:create
1. forge config
|
non_process
|
support custom config by a config file when creating a chain and some commands should support yes param for automation forge chain create forge config
| 0
|
8,970
| 12,086,628,740
|
IssuesEvent
|
2020-04-18 10:56:10
|
spring-projects/spring-hateoas
|
https://api.github.com/repos/spring-projects/spring-hateoas
|
closed
|
URLs cannot contain a colon
|
process: waiting for feedback type: bug
|
I'm trying to create a resource whose ID contains a colon (e.g. "AB:TRE"). When I use createResourceWithId(), a UriComponentsBuilder is used and wrongly splits the id into a scheme and SSP, even though it should only be part of the path. Thus, the resulting uri does not contain the id.
According to the relevant RFC 3986, colons are allowed in segments:
> segment = *pchar
> pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
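A quick illustration of that grammar with standard-library parsing (the host and path are made up):
```python
# Sketch: ":" is a legal pchar inside a path segment per RFC 3986, so a
# standards-conforming parser keeps the id intact. URL is hypothetical.
from urllib.parse import urlsplit

parts = urlsplit("https://api.example.com/resources/AB:TRE")
print(parts.path)                 # /resources/AB:TRE -- colon survives
assert "AB:TRE" in parts.path
```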
|
1.0
|
URLs cannot contain a colon - I'm trying to create a resource whose ID contains a colon (e.g. "AB:TRE"). When I use createResourceWithId(), a UriComponentsBuilder is used and wrongly splits the id into a scheme and SSP, even though it should only be part of the path. Thus, the resulting uri does not contain the id.
According to the relevant RFC 3986, colons are allowed in segments:
> segment = *pchar
> pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
|
process
|
urls cannot contain a colon i m trying to create a resource whose id contains a colon f e ab tre when i use createresourcewithid a uricomponentsbuilder is used to split the id wrongly into a scheme and ssp even though it should only be part of the path thus the resulting uri does not contain the id according to the relevant rfc colons are allowed in segments segment pchar pchar unreserved pct encoded sub delims
| 1
|
100,265
| 12,512,377,182
|
IssuesEvent
|
2020-06-02 22:38:11
|
microsoft/vscode-azurestorage
|
https://api.github.com/repos/microsoft/vscode-azurestorage
|
reopened
|
The rule of naming resource group is unfriendly when creating a basic storage account
|
AT-CTI by design
|
**OS:** Mac
**Build Version:** 20200531.1
**Repro Steps:**
1. Create a basic storage account with name "test".
2. Delete this storage account.
3. Create a basic "test" storage account again.
4. Check the output log.
**Expect:**
The log should show "Creating resource group '**test1**' in location 'westus'...".
**Actual:**
The log shows "Creating resource group '**test2**' in location 'westus'...".

|
1.0
|
The rule for naming resource groups is unfriendly when creating a basic storage account - **OS:** Mac
**Build Version:** 20200531.1
**Repro Steps:**
1. Create a basic storage account with name "test".
2. Delete this storage account.
3. Create a basic "test" storage account again.
4. Check the output log.
**Expect:**
The log should show "Creating resource group '**test1**' in location 'westus'...".
**Actual:**
The log shows "Creating resource group '**test2**' in location 'westus'...".

|
non_process
|
the rule of naming resource group is unfriendly when creating a basic storage account os mac build version repro steps create a basic storage account with name test delete this storage account create a basic test storage account again check the output log expect the log should show creating resource group in location westus actual the log shows creating resource group in location westus
| 0
|
11,816
| 14,631,549,773
|
IssuesEvent
|
2020-12-23 20:09:51
|
googleapis/google-api-java-client
|
https://api.github.com/repos/googleapis/google-api-java-client
|
closed
|
CLIRR errors at head
|
type: process
|
[ERROR] 7006: com.google.api.client.googleapis.extensions.android.gms.auth.GooglePlayServicesAvailabilityIOException: Return type of method 'public com.google.android.gms.auth.GoogleAuthException getCause()' has been changed to java.lang.Throwable
[ERROR] 7006: com.google.api.client.googleapis.extensions.android.gms.auth.GooglePlayServicesAvailabilityIOException: Return type of method 'public java.lang.Throwable getCause()' has been changed to com.google.android.gms.auth.GoogleAuthException
|
1.0
|
CLIRR errors at head - [ERROR] 7006: com.google.api.client.googleapis.extensions.android.gms.auth.GooglePlayServicesAvailabilityIOException: Return type of method 'public com.google.android.gms.auth.GoogleAuthException getCause()' has been changed to java.lang.Throwable
[ERROR] 7006: com.google.api.client.googleapis.extensions.android.gms.auth.GooglePlayServicesAvailabilityIOException: Return type of method 'public java.lang.Throwable getCause()' has been changed to com.google.android.gms.auth.GoogleAuthException
|
process
|
clirr errors at head com google api client googleapis extensions android gms auth googleplayservicesavailabilityioexception return type of method public com google android gms auth googleauthexception getcause has been changed to java lang throwable com google api client googleapis extensions android gms auth googleplayservicesavailabilityioexception return type of method public java lang throwable getcause has been changed to com google android gms auth googleauthexception
| 1
|
117,717
| 15,167,873,397
|
IssuesEvent
|
2021-02-12 18:28:41
|
invenia/Transforms.jl
|
https://api.github.com/repos/invenia/Transforms.jl
|
opened
|
Use a consistent convention for `dims`
|
design
|
The `apply` method for `AbstractArray` in `transformers.jl` [uses `mapslices`](https://github.com/invenia/Transforms.jl/blob/0f49caa1fa35a7253c5880389aeee56e8630b458/src/transformers.jl#L75) to apply a transform on each slice of the array, along a given dimension `dims`.
Meanwhile, the equivalent `apply!` method [uses `eachslice`](https://github.com/invenia/Transforms.jl/blob/0f49caa1fa35a7253c5880389aeee56e8630b458/src/transformers.jl#L109-L111).
The problem is that `mapslices` and `eachslice` have opposite notions of `dims`. For example:
```julia
julia> M = [1 2 3; 4 5 6]
2×3 Array{Int64,2}:
1 2 3
4 5 6
julia> map(x -> println(x), eachslice(M; dims=1));
[1, 2, 3]
[4, 5, 6]
julia> mapslices(x -> println(x), M; dims=1);
[1, 4]
[2, 5]
[3, 6]
```
For higher dimensions, `dims=3` in `eachslice` is equivalent to `dims=[1, 2]` in `mapslices` (and note that `eachslice` only supports a single dimension in `dims`).
Note also that `Statistics.mean` and `Statistics.std` uses the same notion of `dims` as `mapslices`.
We should adopt a consistent convention for the meaning of `dims`, and explain this clearly in documentation.
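For readers coming from numpy, a rough analogue of the two conventions (a sketch for orientation, not part of this package):
```python
# Sketch: numpy analogue of the clash. apply_along_axis reduces 1-D slices
# *along* an axis (mapslices-like), while iterating yields whole slices
# *of* the first axis (eachslice(dims=1)-like).
import numpy as np

M = np.array([[1, 2, 3], [4, 5, 6]])
print(np.apply_along_axis(sum, 0, M))   # [5 7 9]  per-column, mapslices(dims=1) flavor
print([sum(row) for row in M])          # [6, 15]  per-row, eachslice(dims=1) flavor
```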
|
1.0
|
Use a consistent convention for `dims` - The `apply` method for `AbstractArray` in `transformers.jl` [uses `mapslices`](https://github.com/invenia/Transforms.jl/blob/0f49caa1fa35a7253c5880389aeee56e8630b458/src/transformers.jl#L75) to apply a transform on each slice of the array, along a given dimension `dims`.
Meanwhile, the equivalent `apply!` method [uses `eachslice`](https://github.com/invenia/Transforms.jl/blob/0f49caa1fa35a7253c5880389aeee56e8630b458/src/transformers.jl#L109-L111).
The problem is that `mapslices` and `eachslice` have opposite notions of `dims`. For example:
```julia
julia> M = [1 2 3; 4 5 6]
2×3 Array{Int64,2}:
1 2 3
4 5 6
julia> map(x -> println(x), eachslice(M; dims=1));
[1, 2, 3]
[4, 5, 6]
julia> mapslices(x -> println(x), M; dims=1);
[1, 4]
[2, 5]
[3, 6]
```
For higher dimensions, `dims=3` in `eachslice` is equivalent to `dims=[1, 2]` in `mapslices` (and note that `eachslice` only supports a single dimension in `dims`).
Note also that `Statistics.mean` and `Statistics.std` uses the same notion of `dims` as `mapslices`.
We should adopt a consistent convention for the meaning of `dims`, and explain this clearly in documentation.
|
non_process
|
use a consistent convention for dims the apply method for abstractarray in transformers jl to apply a transform on each slice of the array along a given dimension dims meanwhile the equivalent apply method the problem is that mapslices and eachslice have opposite notions of dims for example julia julia m × array julia map x println x eachslice m dims julia mapslices x println x m dims for higher dimensions dims in eachslice is equivalent to dims in mapslices and note that eachslice only supports a single dimension in dims note also that statistics mean and statistics std uses the same notion of dims as mapslices we should adopt a consistent convention for the meaning of dims and explain this clearly in documentation
| 0
|
4,061
| 6,993,787,933
|
IssuesEvent
|
2017-12-15 12:54:55
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
View details of per virtual host (filtering)
|
log-processing question
|
Hi Developers,
Thanks for this great application, it really helps me a lot.
There are many virtual hosting in my server.
Is it possible to view the detail of one selected virtual host ?
Thanks.
|
1.0
|
View details of per virtual host (filtering) - Hi Developers,
Thanks for this great application, it really helps me a lot.
There are many virtual hosting in my server.
Is it possible to view the detail of one selected virtual host ?
Thanks.
|
process
|
view details of per virtual host filtering hi developers thanks for this great application it really helps me a lot there are many virtual hosting in my server is it possible to view the detail of one selected virtual host thanks
| 1
|
415,885
| 28,057,194,413
|
IssuesEvent
|
2023-03-29 10:08:25
|
fedora-infra/fmn
|
https://api.github.com/repos/fedora-infra/fmn
|
closed
|
Document Architecture
|
documentation fmn-next
|
# Story
As a contributor to FMN,
I want that its architecture is documented,
so that I can get up to speed quickly and contribute effectively.
# Acceptance Criteria
- [x] Documentation exists describing the components that make up FMN, and how they relate to each other as well as services in Fedora infrastructure:
- [x] Text
- [x] Diagram #813
|
1.0
|
Document Architecture - # Story
As a contributor to FMN,
I want that its architecture is documented,
so that I can get up to speed quickly and contribute effectively.
# Acceptance Criteria
- [x] Documentation exists describing the components that make up FMN, and how they relate to each other as well as services in Fedora infrastructure:
- [x] Text
- [x] Diagram #813
|
non_process
|
document architecture story as a contributor to fmn i want that its architecture is documented so that i can get up to speed quickly and contribute effectively acceptance criteria documentation exists describing the components that make up fmn and how they relate to each other as well as services in fedora infrastructure text diagram
| 0
|
14,017
| 16,816,819,480
|
IssuesEvent
|
2021-06-17 08:23:08
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Form step > Stats and Trends are not updated instantly unless user logs out and logs in
|
Bug P1 Process: Fixed Process: Tested QA Process: Tested dev iOS
|
**Steps:**
1. Add an activity with stats and Trends enabled for a form step
2. Submit the response from iOS mobile
3. Navigate to dashboard
4. Observe the stats and trends
**Actual:** Stats and Trends are not updated instantly unless user logs out and logs in
**Expected:** Stats and trends should be updated instantly
Issue not observed for question step
Issue observed for all frequencies
Activity details:
Instance: Dev
Study ID: Diabetes23
App ID: BTCDEV001
Activity ID: ormtep
Activity Name: Form Step
|
3.0
|
[iOS] Form step > Stats and Trends are not updated instantly unless user logs out and logs in - **Steps:**
1. Add an activity with stats and Trends enabled for a form step
2. Submit the response from iOS mobile
3. Navigate to dashboard
4. Observe the stats and trends
**Actual:** Stats and Trends are not updated instantly unless user logs out and logs in
**Expected:** Stats and trends should be updated instantly
Issue not observed for question step
Issue observed for all frequencies
Activity details:
Instance: Dev
Study ID: Diabetes23
App ID: BTCDEV001
Activity ID: ormtep
Activity Name: Form Step
|
process
|
form step stats and trends are not updated instantly unless user logs out and logs in steps add an activity with stats and trends enabled for a form step submit the response from ios mobile navigate to dashboard observe the stats and trends actual stats and trends are not updated instantly unless user logs out and logs in expected stats and trends should be updated instantly issue not observed for question step issue observed for all frequencies activity details instance dev study id app id activity id ormtep activity name form step
| 1
|
308,399
| 26,604,855,427
|
IssuesEvent
|
2023-01-23 18:27:24
|
tailscale/tailscale
|
https://api.github.com/repos/tailscale/tailscale
|
opened
|
TestC2NPingRequest is flaky
|
testing
|
https://github.com/tailscale/corp/actions/runs/3989076415/jobs/6841058944
<details>
<summary>Click to expand test logs</summary>
```
panic: test timed out after 10m0s
goroutine 511 [running]:
testing.(*M).startAlarm.func1()
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:2036 +0xb4
created by time.goFunc
/home/ubuntu/.cache/tailscale-go/src/time/sleep.go:176 +0x48
goroutine 1 [chan receive, 9 minutes]:
testing.tRunner.func1()
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:1412 +0x5dc
testing.tRunner(0xc000228820, 0xc000161aa8)
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:1452 +0x1bc
testing.runTests(0xc000224b40?, {0x11b06a0, 0xd, 0xd}, {0x4?, 0xa?, 0x11bbf40?})
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:1844 +0x6d0
testing.(*M).Run(0xc000224b40)
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:1726 +0x880
tailscale.com/tstest/integration.TestMain(0x0?)
/home/ubuntu/go/pkg/mod/tailscale.com@v1.1.1-0.20230122221356-64547b2b86f3/tstest/integration/integration_test.go:57 +0xd4
main.main()
_testmain.go:73 +0x308
goroutine 28 [syscall, 9 minutes]:
syscall.Syscall6(0xc00070f8d8?, 0xab958?, 0xc00070f8e8?, 0xacc18?, 0xc00070f918?, 0xb77c0?, 0xc00070f938?)
/home/ubuntu/.cache/tailscale-go/src/syscall/syscall_linux.go:90 +0x34
os.(*Process).blockUntilWaitable(0xc000040540)
/home/ubuntu/.cache/tailscale-go/src/os/wait_waitid.go:32 +0x7c
os.(*Process).wait(0xc000040540)
/home/ubuntu/.cache/tailscale-go/src/os/exec_unix.go:22 +0x48
os.(*Process).Wait(...)
/home/ubuntu/.cache/tailscale-go/src/os/exec.go:132
os/exec.(*Cmd).Wait(0xc000444420)
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:599 +0x78
os/exec.(*Cmd).Run(0x1?)
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:437 +0x58
os/exec.(*Cmd).CombinedOutput(0xc000444420)
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:707 +0x218
tailscale.com/tstest/integration.(*testNode).MustUp(0xc00035c230, {0x0, 0x0, 0x0?})
/home/ubuntu/go/pkg/mod/tailscale.com@v1.1.1-0.20230122221356-64547b2b86f3/tstest/integration/integration_test.go:844 +0x25c
tailscale.com/tstest/integration.TestC2NPingRequest(0xc000288820)
/home/ubuntu/go/pkg/mod/tailscale.com@v1.1.1-0.20230122221356-64547b2b86f3/tstest/integration/integration_test.go:387 +0x94
testing.tRunner(0xc000288820, 0xb81090)
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:1446 +0x18c
created by testing.(*T).Run
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:1493 +0x568
goroutine 34 [IO wait, 10 minutes]:
net.(*TCPListener).accept(0xc00033afa8)
/home/ubuntu/.cache/tailscale-go/src/net/tcpsock_posix.go:142 +0x3c
net.(*TCPListener).Accept(0xc00033afa8)
/home/ubuntu/.cache/tailscale-go/src/net/tcpsock.go:288 +0x68
net/http.(*Server).Serve(0xc00035e0f0, {0xccaee0, 0xc00033afa8})
/home/ubuntu/.cache/tailscale-go/src/net/http/server.go:3070 +0x440
net/http/httptest.(*Server).goServe.func1()
/home/ubuntu/.cache/tailscale-go/src/net/http/httptest/server.go:310 +0xa4
created by net/http/httptest.(*Server).goServe
/home/ubuntu/.cache/tailscale-go/src/net/http/httptest/server.go:308 +0x9c
goroutine 36 [IO wait, 9 minutes]:
internal/poll.runtime_pollWait(0xffff9066c840, 0x72)
/home/ubuntu/.cache/tailscale-go/src/runtime/netpoll.go:305 +0xa0
internal/poll.(*pollDesc).wait(0xc000092378, 0xc00031a000?, 0x1)
/home/ubuntu/.cache/tailscale-go/src/internal/poll/fd_poll_runtime.go:84 +0xc0
internal/poll.(*pollDesc).waitRead(...)
/home/ubuntu/.cache/tailscale-go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000092360, {0xc00031a000, 0x8000, 0x8000})
/home/ubuntu/.cache/tailscale-go/src/internal/poll/fd_unix.go:167 +0x2f8
os.(*File).read(...)
/home/ubuntu/.cache/tailscale-go/src/os/file_posix.go:31
os.(*File).Read(0xc000308068, {0xc00031a000, 0x8000, 0x8000})
/home/ubuntu/.cache/tailscale-go/src/os/file.go:119 +0x9c
io.copyBuffer({0xcc30c0, 0xc0003806f0}, {0xcc2d40, 0xc000308068}, {0x0, 0x0, 0x0})
/home/ubuntu/.cache/tailscale-go/src/io/io.go:427 +0x1f0
io.Copy(...)
/home/ubuntu/.cache/tailscale-go/src/io/io.go:386
os/exec.(*Cmd).writerDescriptor.func1()
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:407 +0x60
os/exec.(*Cmd).Start.func1(0xc00028e040)
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:544 +0x38
created by os/exec.(*Cmd).Start
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:543 +0x988
goroutine 73 [IO wait, 9 minutes]:
internal/poll.runtime_pollWait(0xffff8879f3d8, 0x72)
/home/ubuntu/.cache/tailscale-go/src/runtime/netpoll.go:305 +0xa0
internal/poll.(*pollDesc).wait(0xc00066e5b8, 0xc000394600?, 0x1)
/home/ubuntu/.cache/tailscale-go/src/internal/poll/fd_poll_runtime.go:84 +0xc0
internal/poll.(*pollDesc).waitRead(...)
/home/ubuntu/.cache/tailscale-go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00066e5a0, {0xc000394600, 0x200, 0x200})
/home/ubuntu/.cache/tailscale-go/src/internal/poll/fd_unix.go:167 +0x2f8
os.(*File).read(...)
/home/ubuntu/.cache/tailscale-go/src/os/file_posix.go:31
os.(*File).Read(0xc0003082f8, {0xc000394600, 0x200, 0x200})
/home/ubuntu/.cache/tailscale-go/src/os/file.go:119 +0x9c
bytes.(*Buffer).ReadFrom(0xc00037a3f0, {0xcc2d40, 0xc0003082f8})
/home/ubuntu/.cache/tailscale-go/src/bytes/buffer.go:202 +0xec
io.copyBuffer({0xcc1060, 0xc00037a3f0}, {0xcc2d40, 0xc0003082f8}, {0x0, 0x0, 0x0})
/home/ubuntu/.cache/tailscale-go/src/io/io.go:413 +0x158
io.Copy(...)
/home/ubuntu/.cache/tailscale-go/src/io/io.go:386
os/exec.(*Cmd).writerDescriptor.func1()
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:407 +0x60
os/exec.(*Cmd).Start.func1(0xc00028e5e0)
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:544 +0x38
created by os/exec.(*Cmd).Start
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:543 +0x988
FAIL tailscale.com/tstest/integration 600.202s
```
</details>
|
1.0
|
TestC2NPingRequest is flaky - https://github.com/tailscale/corp/actions/runs/3989076415/jobs/6841058944
<details>
<summary>Click to expand test logs</summary>
```
panic: test timed out after 10m0s
goroutine 511 [running]:
testing.(*M).startAlarm.func1()
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:2036 +0xb4
created by time.goFunc
/home/ubuntu/.cache/tailscale-go/src/time/sleep.go:176 +0x48
goroutine 1 [chan receive, 9 minutes]:
testing.tRunner.func1()
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:1412 +0x5dc
testing.tRunner(0xc000228820, 0xc000161aa8)
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:1452 +0x1bc
testing.runTests(0xc000224b40?, {0x11b06a0, 0xd, 0xd}, {0x4?, 0xa?, 0x11bbf40?})
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:1844 +0x6d0
testing.(*M).Run(0xc000224b40)
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:1726 +0x880
tailscale.com/tstest/integration.TestMain(0x0?)
/home/ubuntu/go/pkg/mod/tailscale.com@v1.1.1-0.20230122221356-64547b2b86f3/tstest/integration/integration_test.go:57 +0xd4
main.main()
_testmain.go:73 +0x308
goroutine 28 [syscall, 9 minutes]:
syscall.Syscall6(0xc00070f8d8?, 0xab958?, 0xc00070f8e8?, 0xacc18?, 0xc00070f918?, 0xb77c0?, 0xc00070f938?)
/home/ubuntu/.cache/tailscale-go/src/syscall/syscall_linux.go:90 +0x34
os.(*Process).blockUntilWaitable(0xc000040540)
/home/ubuntu/.cache/tailscale-go/src/os/wait_waitid.go:32 +0x7c
os.(*Process).wait(0xc000040540)
/home/ubuntu/.cache/tailscale-go/src/os/exec_unix.go:22 +0x48
os.(*Process).Wait(...)
/home/ubuntu/.cache/tailscale-go/src/os/exec.go:132
os/exec.(*Cmd).Wait(0xc000444420)
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:599 +0x78
os/exec.(*Cmd).Run(0x1?)
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:437 +0x58
os/exec.(*Cmd).CombinedOutput(0xc000444420)
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:707 +0x218
tailscale.com/tstest/integration.(*testNode).MustUp(0xc00035c230, {0x0, 0x0, 0x0?})
/home/ubuntu/go/pkg/mod/tailscale.com@v1.1.1-0.20230122221356-64547b2b86f3/tstest/integration/integration_test.go:844 +0x25c
tailscale.com/tstest/integration.TestC2NPingRequest(0xc000288820)
/home/ubuntu/go/pkg/mod/tailscale.com@v1.1.1-0.20230122221356-64547b2b86f3/tstest/integration/integration_test.go:387 +0x94
testing.tRunner(0xc000288820, 0xb81090)
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:1446 +0x18c
created by testing.(*T).Run
/home/ubuntu/.cache/tailscale-go/src/testing/testing.go:1493 +0x568
goroutine 34 [IO wait, 10 minutes]:
net.(*TCPListener).accept(0xc00033afa8)
/home/ubuntu/.cache/tailscale-go/src/net/tcpsock_posix.go:142 +0x3c
net.(*TCPListener).Accept(0xc00033afa8)
/home/ubuntu/.cache/tailscale-go/src/net/tcpsock.go:288 +0x68
net/http.(*Server).Serve(0xc00035e0f0, {0xccaee0, 0xc00033afa8})
/home/ubuntu/.cache/tailscale-go/src/net/http/server.go:3070 +0x440
net/http/httptest.(*Server).goServe.func1()
/home/ubuntu/.cache/tailscale-go/src/net/http/httptest/server.go:310 +0xa4
created by net/http/httptest.(*Server).goServe
/home/ubuntu/.cache/tailscale-go/src/net/http/httptest/server.go:308 +0x9c
goroutine 36 [IO wait, 9 minutes]:
internal/poll.runtime_pollWait(0xffff9066c840, 0x72)
/home/ubuntu/.cache/tailscale-go/src/runtime/netpoll.go:305 +0xa0
internal/poll.(*pollDesc).wait(0xc000092378, 0xc00031a000?, 0x1)
/home/ubuntu/.cache/tailscale-go/src/internal/poll/fd_poll_runtime.go:84 +0xc0
internal/poll.(*pollDesc).waitRead(...)
/home/ubuntu/.cache/tailscale-go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000092360, {0xc00031a000, 0x8000, 0x8000})
/home/ubuntu/.cache/tailscale-go/src/internal/poll/fd_unix.go:167 +0x2f8
os.(*File).read(...)
/home/ubuntu/.cache/tailscale-go/src/os/file_posix.go:31
os.(*File).Read(0xc000308068, {0xc00031a000, 0x8000, 0x8000})
/home/ubuntu/.cache/tailscale-go/src/os/file.go:119 +0x9c
io.copyBuffer({0xcc30c0, 0xc0003806f0}, {0xcc2d40, 0xc000308068}, {0x0, 0x0, 0x0})
/home/ubuntu/.cache/tailscale-go/src/io/io.go:427 +0x1f0
io.Copy(...)
/home/ubuntu/.cache/tailscale-go/src/io/io.go:386
os/exec.(*Cmd).writerDescriptor.func1()
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:407 +0x60
os/exec.(*Cmd).Start.func1(0xc00028e040)
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:544 +0x38
created by os/exec.(*Cmd).Start
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:543 +0x988
goroutine 73 [IO wait, 9 minutes]:
internal/poll.runtime_pollWait(0xffff8879f3d8, 0x72)
/home/ubuntu/.cache/tailscale-go/src/runtime/netpoll.go:305 +0xa0
internal/poll.(*pollDesc).wait(0xc00066e5b8, 0xc000394600?, 0x1)
/home/ubuntu/.cache/tailscale-go/src/internal/poll/fd_poll_runtime.go:84 +0xc0
internal/poll.(*pollDesc).waitRead(...)
/home/ubuntu/.cache/tailscale-go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00066e5a0, {0xc000394600, 0x200, 0x200})
/home/ubuntu/.cache/tailscale-go/src/internal/poll/fd_unix.go:167 +0x2f8
os.(*File).read(...)
/home/ubuntu/.cache/tailscale-go/src/os/file_posix.go:31
os.(*File).Read(0xc0003082f8, {0xc000394600, 0x200, 0x200})
/home/ubuntu/.cache/tailscale-go/src/os/file.go:119 +0x9c
bytes.(*Buffer).ReadFrom(0xc00037a3f0, {0xcc2d40, 0xc0003082f8})
/home/ubuntu/.cache/tailscale-go/src/bytes/buffer.go:202 +0xec
io.copyBuffer({0xcc1060, 0xc00037a3f0}, {0xcc2d40, 0xc0003082f8}, {0x0, 0x0, 0x0})
/home/ubuntu/.cache/tailscale-go/src/io/io.go:413 +0x158
io.Copy(...)
/home/ubuntu/.cache/tailscale-go/src/io/io.go:386
os/exec.(*Cmd).writerDescriptor.func1()
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:407 +0x60
os/exec.(*Cmd).Start.func1(0xc00028e5e0)
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:544 +0x38
created by os/exec.(*Cmd).Start
/home/ubuntu/.cache/tailscale-go/src/os/exec/exec.go:543 +0x988
FAIL tailscale.com/tstest/integration 600.202s
```
</details>
|
non_process
|
is flaky click to expand test logs panic test timed out after goroutine testing m startalarm home ubuntu cache tailscale go src testing testing go created by time gofunc home ubuntu cache tailscale go src time sleep go goroutine testing trunner home ubuntu cache tailscale go src testing testing go testing trunner home ubuntu cache tailscale go src testing testing go testing runtests home ubuntu cache tailscale go src testing testing go testing m run home ubuntu cache tailscale go src testing testing go tailscale com tstest integration testmain home ubuntu go pkg mod tailscale com tstest integration integration test go main main testmain go goroutine syscall home ubuntu cache tailscale go src syscall syscall linux go os process blockuntilwaitable home ubuntu cache tailscale go src os wait waitid go os process wait home ubuntu cache tailscale go src os exec unix go os process wait home ubuntu cache tailscale go src os exec go os exec cmd wait home ubuntu cache tailscale go src os exec exec go os exec cmd run home ubuntu cache tailscale go src os exec exec go os exec cmd combinedoutput home ubuntu cache tailscale go src os exec exec go tailscale com tstest integration testnode mustup home ubuntu go pkg mod tailscale com tstest integration integration test go tailscale com tstest integration home ubuntu go pkg mod tailscale com tstest integration integration test go testing trunner home ubuntu cache tailscale go src testing testing go created by testing t run home ubuntu cache tailscale go src testing testing go goroutine net tcplistener accept home ubuntu cache tailscale go src net tcpsock posix go net tcplistener accept home ubuntu cache tailscale go src net tcpsock go net http server serve home ubuntu cache tailscale go src net http server go net http httptest server goserve home ubuntu cache tailscale go src net http httptest server go created by net http httptest server goserve home ubuntu cache tailscale go src net http httptest server go goroutine internal poll runtime pollwait home ubuntu cache tailscale go src runtime netpoll go internal poll polldesc wait home ubuntu cache tailscale go src internal poll fd poll runtime go internal poll polldesc waitread home ubuntu cache tailscale go src internal poll fd poll runtime go internal poll fd read home ubuntu cache tailscale go src internal poll fd unix go os file read home ubuntu cache tailscale go src os file posix go os file read home ubuntu cache tailscale go src os file go io copybuffer home ubuntu cache tailscale go src io io go io copy home ubuntu cache tailscale go src io io go os exec cmd writerdescriptor home ubuntu cache tailscale go src os exec exec go os exec cmd start home ubuntu cache tailscale go src os exec exec go created by os exec cmd start home ubuntu cache tailscale go src os exec exec go goroutine internal poll runtime pollwait home ubuntu cache tailscale go src runtime netpoll go internal poll polldesc wait home ubuntu cache tailscale go src internal poll fd poll runtime go internal poll polldesc waitread home ubuntu cache tailscale go src internal poll fd poll runtime go internal poll fd read home ubuntu cache tailscale go src internal poll fd unix go os file read home ubuntu cache tailscale go src os file posix go os file read home ubuntu cache tailscale go src os file go bytes buffer readfrom home ubuntu cache tailscale go src bytes buffer go io copybuffer home ubuntu cache tailscale go src io io go io copy home ubuntu cache tailscale go src io io go os exec cmd writerdescriptor home ubuntu cache tailscale go src 
os exec exec go os exec cmd start home ubuntu cache tailscale go src os exec exec go created by os exec cmd start home ubuntu cache tailscale go src os exec exec go fail tailscale com tstest integration
| 0
|
27,215
| 4,288,086,238
|
IssuesEvent
|
2016-07-17 07:10:10
|
ItsMeSterling/teenmade
|
https://api.github.com/repos/ItsMeSterling/teenmade
|
closed
|
Onboarding skip shop creation
|
Ready for Retest
|
After the page where you add your username and skills, it takes you to shop creation... Make it so that when you create a profile, it takes you to your profile rather than to that page... We might add shop creation later, so don't remove it completely.
|
1.0
|
Onboarding skip shop creation - After the page where you add your username and skills, it takes you to shop creation... Make it so that when you create a profile, it takes you to your profile rather than to that page... We might add shop creation later, so don't remove it completely.
|
non_process
|
onboarding skip shop creation after the page where you add your username and skills it takes you to a shop creation if you want to make it so when you create profile it takes you to your profile rather than that page we might add it later so don t remove it completely
| 0
|
20,811
| 27,571,709,035
|
IssuesEvent
|
2023-03-08 09:48:41
|
xataio/xata-py
|
https://api.github.com/repos/xataio/xata-py
|
opened
|
Filter to only failing records in `failed_batches`
|
bulk-processor
|
Currently, all records from a batch are stored in the `failed_batches` items. Filter out the records that went through and keep only the failed ones.
|
1.0
|
Filter to only failing records in `failed_batches` - Currently, all records from a batch are stored in the `failed_batches` items. Filter out the records that went through and keep only the failed ones.
|
process
|
filter to only failing records in failed batches currently all records from a batch are stored in the failed batches items filter out the records that went through and only keep the failed ones keep
| 1
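The record above asks for `failed_batches` items to be trimmed down to just the failing records. A minimal sketch of that filtering step, assuming each record is a dict whose failures carry a non-empty `error` field (the real xata-py item shape is not shown in the record):
```python
# Minimal sketch: keep only the failed records from each stored batch.
# Assumption: each record is a dict and failures carry a non-empty
# "error" field -- the actual xata-py item shape may differ.

def keep_failed_only(failed_batches):
    """Return a copy of failed_batches with successful records dropped."""
    filtered = []
    for batch in failed_batches:
        failed = [r for r in batch["records"] if r.get("error")]
        if failed:  # drop batches that turn out to contain no failures
            filtered.append({**batch, "records": failed})
    return filtered

batches = [
    {"id": 1, "records": [{"row": 1}, {"row": 2, "error": "422"}]},
    {"id": 2, "records": [{"row": 3}]},
]
print(keep_failed_only(batches))
# -> [{'id': 1, 'records': [{'row': 2, 'error': '422'}]}]
```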
|
128,749
| 18,070,118,152
|
IssuesEvent
|
2021-09-21 01:13:23
|
Tim-sandbox/barista
|
https://api.github.com/repos/Tim-sandbox/barista
|
opened
|
CVE-2021-3803 (Medium) detected in nth-check-1.0.2.tgz, nth-check-2.0.0.tgz
|
security vulnerability
|
## CVE-2021-3803 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>nth-check-1.0.2.tgz</b>, <b>nth-check-2.0.0.tgz</b></p></summary>
<p>
<details><summary><b>nth-check-1.0.2.tgz</b></p></summary>
<p>performant nth-check parser & compiler</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz">https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz</a></p>
<p>Path to dependency file: barista/barista-docs/package.json</p>
<p>Path to vulnerable library: barista/barista-docs/node_modules/@svgr/plugin-svgo/node_modules/nth-check/package.json,barista/barista-docs/node_modules/cheerio/node_modules/nth-check/package.json,barista/barista-web/node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- compodoc-0.0.41.tgz (Root Library)
- cheerio-0.22.0.tgz
- css-select-1.2.0.tgz
- :x: **nth-check-1.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>nth-check-2.0.0.tgz</b></p></summary>
<p>Parses and compiles CSS nth-checks to highly optimized functions.</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz">https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz</a></p>
<p>Path to dependency file: barista/barista-web/package.json</p>
<p>Path to vulnerable library: barista/barista-web/node_modules/nth-check/package.json,barista/barista-docs/node_modules/nth-check/package.json,barista/barista-scan/node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- cheerio-1.0.0-rc.9.tgz (Root Library)
- cheerio-select-1.4.0.tgz
- css-select-4.1.2.tgz
- :x: **nth-check-2.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
nth-check is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3803>CVE-2021-3803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1">https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: nth-check - v2.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"nth-check","packageVersion":"1.0.2","packageFilePaths":["/barista-docs/package.json","/barista-web/package.json"],"isTransitiveDependency":true,"dependencyTree":"compodoc:0.0.41;cheerio:0.22.0;css-select:1.2.0;nth-check:1.0.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"nth-check - v2.0.1"},{"packageType":"javascript/Node.js","packageName":"nth-check","packageVersion":"2.0.0","packageFilePaths":["/barista-web/package.json","/barista-docs/package.json","/barista-scan/package.json"],"isTransitiveDependency":true,"dependencyTree":"cheerio:1.0.0-rc.9;cheerio-select:1.4.0;css-select:4.1.2;nth-check:2.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"nth-check - v2.0.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3803","vulnerabilityDetails":"nth-check is vulnerable to Inefficient Regular Expression Complexity","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3803","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-3803 (Medium) detected in nth-check-1.0.2.tgz, nth-check-2.0.0.tgz - ## CVE-2021-3803 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>nth-check-1.0.2.tgz</b>, <b>nth-check-2.0.0.tgz</b></p></summary>
<p>
<details><summary><b>nth-check-1.0.2.tgz</b></p></summary>
<p>performant nth-check parser & compiler</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz">https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz</a></p>
<p>Path to dependency file: barista/barista-docs/package.json</p>
<p>Path to vulnerable library: barista/barista-docs/node_modules/@svgr/plugin-svgo/node_modules/nth-check/package.json,barista/barista-docs/node_modules/cheerio/node_modules/nth-check/package.json,barista/barista-web/node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- compodoc-0.0.41.tgz (Root Library)
- cheerio-0.22.0.tgz
- css-select-1.2.0.tgz
- :x: **nth-check-1.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>nth-check-2.0.0.tgz</b></p></summary>
<p>Parses and compiles CSS nth-checks to highly optimized functions.</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz">https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz</a></p>
<p>Path to dependency file: barista/barista-web/package.json</p>
<p>Path to vulnerable library: barista/barista-web/node_modules/nth-check/package.json,barista/barista-docs/node_modules/nth-check/package.json,barista/barista-scan/node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- cheerio-1.0.0-rc.9.tgz (Root Library)
- cheerio-select-1.4.0.tgz
- css-select-4.1.2.tgz
- :x: **nth-check-2.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
nth-check is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3803>CVE-2021-3803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1">https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: nth-check - v2.0.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"nth-check","packageVersion":"1.0.2","packageFilePaths":["/barista-docs/package.json","/barista-web/package.json"],"isTransitiveDependency":true,"dependencyTree":"compodoc:0.0.41;cheerio:0.22.0;css-select:1.2.0;nth-check:1.0.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"nth-check - v2.0.1"},{"packageType":"javascript/Node.js","packageName":"nth-check","packageVersion":"2.0.0","packageFilePaths":["/barista-web/package.json","/barista-docs/package.json","/barista-scan/package.json"],"isTransitiveDependency":true,"dependencyTree":"cheerio:1.0.0-rc.9;cheerio-select:1.4.0;css-select:4.1.2;nth-check:2.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"nth-check - v2.0.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3803","vulnerabilityDetails":"nth-check is vulnerable to Inefficient Regular Expression Complexity","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3803","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in nth check tgz nth check tgz cve medium severity vulnerability vulnerable libraries nth check tgz nth check tgz nth check tgz performant nth check parser compiler library home page a href path to dependency file barista barista docs package json path to vulnerable library barista barista docs node modules svgr plugin svgo node modules nth check package json barista barista docs node modules cheerio node modules nth check package json barista barista web node modules nth check package json dependency hierarchy compodoc tgz root library cheerio tgz css select tgz x nth check tgz vulnerable library nth check tgz parses and compiles css nth checks to highly optimized functions library home page a href path to dependency file barista barista web package json path to vulnerable library barista barista web node modules nth check package json barista barista docs node modules nth check package json barista barista scan node modules nth check package json dependency hierarchy cheerio rc tgz root library cheerio select tgz css select tgz x nth check tgz vulnerable library found in base branch master vulnerability details nth check is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution nth check isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree compodoc cheerio css select nth check isminimumfixversionavailable true minimumfixversion nth check packagetype javascript node js packagename nth check packageversion packagefilepaths istransitivedependency true dependencytree cheerio rc cheerio select css select nth check isminimumfixversionavailable true minimumfixversion nth check basebranches vulnerabilityidentifier cve vulnerabilitydetails nth check is vulnerable to inefficient regular expression complexity vulnerabilityurl
| 0
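The record above reports "Inefficient Regular Expression Complexity" (ReDoS) in nth-check. As a small demonstration of that failure class, a Python sketch that times a classic nested-quantifier pattern against almost-matching inputs; the pattern is illustrative only, not nth-check's actual regex:
```python
# Illustrative ReDoS demo: a nested quantifier forces exponential
# backtracking on inputs that almost (but never) match.
# This is NOT nth-check's regex, just the same vulnerability class.
import re
import time

pattern = re.compile(r"^(a+)+$")

for n in (10, 15, 20, 22):
    s = "a" * n + "b"  # trailing "b" guarantees the match must fail
    t0 = time.perf_counter()
    pattern.match(s)
    print(f"n={n}: {time.perf_counter() - t0:.4f}s")
# The time roughly doubles per extra "a": the catastrophic backtracking
# behavior that the CVE describes for crafted nth-check input.
```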
|
10,425
| 13,218,134,271
|
IssuesEvent
|
2020-08-17 08:11:13
|
bisq-network/proposals
|
https://api.github.com/repos/bisq-network/proposals
|
closed
|
Content of the proposal details should be committed when proposal phase is over
|
a:proposal an:idea help wanted re:processes was:stalled
|
When a contributor files a proposal in the Bisq app, they have to add the link to the GitHub issue where the content of the proposal is described.
Once the proposal period has ended, the content of that link should be persisted to an immutable datastore so it cannot be changed or deleted.
It would be good if we can use the GitHub API to snapshot the issue and then commit it. It would also be good if the data is still readable (HTML), but I think that is not a hard requirement. It can be expected that there are no conflicts anyway that would require looking up an old proposal.
Not sure what would be the best and easiest way to implement that. One option is a special service, triggered by the first block after the proposal phase, that snapshots the issues and then commits that data. Multiple nodes (e.g. seed nodes) could do that, and before committing they would check if there is already a commit with that hash.
I am not sure if that feature is really important; at least for the start, probably not. But I wanted to write it down here for discussion and for maybe implementing it later.
|
1.0
|
Content of the proposal details should be committed when proposal phase is over - When a contributor files a proposal in the Bisq app, they have to add the link to the GitHub issue where the content of the proposal is described.
Once the proposal period has ended, the content of that link should be persisted to an immutable datastore so it cannot be changed or deleted.
It would be good if we can use the GitHub API to snapshot the issue and then commit it. It would also be good if the data is still readable (HTML), but I think that is not a hard requirement. It can be expected that there are no conflicts anyway that would require looking up an old proposal.
Not sure what would be the best and easiest way to implement that. One option is a special service, triggered by the first block after the proposal phase, that snapshots the issues and then commits that data. Multiple nodes (e.g. seed nodes) could do that, and before committing they would check if there is already a commit with that hash.
I am not sure if that feature is really important; at least for the start, probably not. But I wanted to write it down here for discussion and for maybe implementing it later.
|
process
|
content of the proposal details should be committed when proposal phase is over when a contributor files a proposal in the bisq app they have to add the link to the github issue where the content of the proposal is described once the proposal period has ended the content of that link should be persisted to an immutable datastore so it cannot be changed or deleted it would be good if we can use the github api to snapshot the issue and then commit it it would be also good if the data is still readable html but i think that is not a hard requirement it can be expected that there are no conflicts anyway which require to lookup an old proposal not sure what would be the best and easiest way how to implement that a special service which gets triggered by the first block after the proposal phase to snapshot the issues and then make a commit of that data multiple nodes e g seed nodes could do that and before committing they check if there is already a commit with that hash i am not sure if that feature is really important at least for the start probably not but i wanted to write it down here for discussion and to for maybe implementing it later
| 1
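A rough sketch of the snapshot step described in the record above, using the public GitHub REST issues endpoint (the endpoint is real; the field selection, hashing scheme, and file naming are assumptions):
```python
# Sketch: fetch a GitHub issue and persist a content-addressed snapshot.
# The /repos/{owner}/{repo}/issues/{number} endpoint is the standard
# GitHub REST API; the snapshot layout below is an assumption.
import hashlib
import json
import urllib.request

def snapshot_issue(owner: str, repo: str, number: int) -> str:
    url = f"https://api.github.com/repos/{owner}/{repo}/issues/{number}"
    with urllib.request.urlopen(url) as resp:
        issue = json.load(resp)
    # Keep only the fields that define the proposal's content.
    content = json.dumps(
        {"title": issue["title"], "body": issue["body"]},
        sort_keys=True,
    ).encode()
    digest = hashlib.sha256(content).hexdigest()
    path = f"proposal-{number}-{digest[:12]}.json"
    with open(path, "wb") as f:
        f.write(content)
    return path

# Issue number is a placeholder; multiple nodes producing the same
# digest could skip committing, matching the dedup idea in the record.
print(snapshot_issue("bisq-network", "proposals", 26))
```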
|
11,594
| 14,448,151,903
|
IssuesEvent
|
2020-12-08 05:37:17
|
A01731346/5a
|
https://api.github.com/repos/A01731346/5a
|
opened
|
complete_size_estimating_template
|
process-dashboard
|
Complete the LOC estimation template with the actual values obtained.
|
1.0
|
complete_size_estimating_template - Complete the LOC estimation template with the actual values obtained.
|
process
|
complete size estimating template completar el formato de estimación de loc con los valores reales obtenidos
| 1
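The record above asks for a PSP size-estimating template to be completed with actual LOC values. As a rough illustration of where such an "actual size" number can come from, a minimal line counter (the skip-blanks/skip-`#`-comments rule is an assumption; real PSP counting standards define their own):
```python
# Minimal LOC counter for filling in the "actual size" column of a PSP
# size-estimating template. Counting rules (skip blank lines and
# full-line "#" comments) are an assumption, not a PSP standard.
def count_loc(path: str) -> int:
    loc = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                loc += 1
    return loc

if __name__ == "__main__":
    import sys
    for p in sys.argv[1:]:
        print(p, count_loc(p))
```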
|
3,401
| 6,519,225,907
|
IssuesEvent
|
2017-08-28 11:51:08
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
Display overview of storage nodes connected to a cluster similar to storage routers.
|
process_wontfix type_feature
|
If you install your cluster, you only see your storage nodes when you create a backend.
Wouldn't it be sane to add a page similar to that of the storage routers to the GUI?
|
1.0
|
Display overview of storage nodes connected to a cluster similar to storage routers. - If you install your cluster, you only see your storage nodes when you create a backend.
Wouldn't it be sane to add a page similar to that of the storage routers to the GUI?
|
process
|
display overview of storage nodes connected to a cluster similar to storage routers if you install your cluster you only see your storage nodes when you create a backend wouldn t it be sane to add a page similar to that of the storage routers to the gui
| 1
|
13,859
| 16,617,648,448
|
IssuesEvent
|
2021-06-02 18:56:44
|
googleapis/python-access-context-manager
|
https://api.github.com/repos/googleapis/python-access-context-manager
|
opened
|
Generate proto-plus types for this library
|
type: process
|
This library currently generates [`_pb2.py`](https://github.com/googleapis/python-access-context-manager/blob/master/google/identity/accesscontextmanager/v1/access_level_pb2.py) types via protoc.
I would like to move this set of protos to be generated via bazel. The repository will look like a full GAPIC library but only have types (no services). If Access Context Manager adds a service in the future the change will be additive (non-breaking). See [this repo](https://github.com/googleapis/python-iam-logging/tree/master/google/cloud/iam_logging_v1) for an example of the desired end state.
This is a breaking change, but I believe the blast radius will be small - this library is only installed as a dependency of `google-cloud-asset` (see [setup.py](https://github.com/googleapis/python-asset/blob/27ac4fb2456c2ff7da3b69b6e7657f7db1dfc8d5/setup.py#L33)).
CC @parthea @danoscarmike
|
1.0
|
Generate proto-plus types for this library - This library currently generates [`_pb2.py`](https://github.com/googleapis/python-access-context-manager/blob/master/google/identity/accesscontextmanager/v1/access_level_pb2.py) types via protoc.
I would like to move this set of protos to be generated via bazel. The repository will look like a full GAPIC library but only have types (no services). If Access Context Manager adds a service in the future the change will be additive (non-breaking). See [this repo](https://github.com/googleapis/python-iam-logging/tree/master/google/cloud/iam_logging_v1) for an example of the desired end state.
This is a breaking change, but I believe the blast radius will be small - this library is only installed as a dependency of `google-cloud-asset` (see [setup.py](https://github.com/googleapis/python-asset/blob/27ac4fb2456c2ff7da3b69b6e7657f7db1dfc8d5/setup.py#L33)).
CC @parthea @danoscarmike
|
process
|
generate proto plus types for this library this library currently generates types via protoc i would like to move this set of protos to be generated via bazel the repository will look like a full gapic library but only have types no services if access context manager adds a service in the future the change will be additive non breaking see for an example of the desired end state this is a breaking change but i believe the blast radius will be small this library is only installed as a dependency of google cloud asset see cc parthea danoscarmike
| 1
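For context on the "proto-plus types" the record above wants to generate, a minimal sketch of what such a type looks like compared to a `_pb2` module (field names and numbers are illustrative, not the actual `access_level` schema):
```python
# Sketch of a proto-plus style message, the target format described in
# the record above. Field names/numbers are illustrative only; the real
# Access Context Manager access_level schema differs.
import proto

class AccessLevel(proto.Message):
    name = proto.Field(proto.STRING, number=1)
    title = proto.Field(proto.STRING, number=2)

level = AccessLevel(name="accessPolicies/1/accessLevels/dev")
print(AccessLevel.to_json(level))  # proto-plus adds json/dict helpers
```
proto-plus wraps the generated protobuf classes in idiomatic Python, which is the additive (non-breaking) end state the record describes.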
|
7,888
| 11,053,844,697
|
IssuesEvent
|
2019-12-10 12:16:58
|
code4romania/expert-consultation-api
|
https://api.github.com/repos/code4romania/expert-consultation-api
|
closed
|
[Documents] Modify comment feature to take into account the new document format
|
document processing documents enhancement help wanted java spring
|
The Comment feature needs to be adapted to the changes to the document structure.
The article, chapter and document controllers need to be changed into a single controller for DocumentSection / DocumentNode comments.
The comment data model needs to be updated.
Linked to #64
|
1.0
|
[Documents] Modify comment feature to take into account the new document format - The Comment feature needs to be adapted to the changes to the document structure.
The article, chapter and document controllers need to be changed into a single controller for DocumentSection / DocumentNode comments.
The comment data model needs to be updated.
Linked to #64
|
process
|
modify comment feature to take into account the new document format the comment feature needs to be adapted to the changes to the document structure the article chapter and document controllers need to be changed into a single controller for documentsection documentnode comments the comment data model needs to be updated linked to
| 1
|
34,493
| 2,781,551,954
|
IssuesEvent
|
2015-05-06 13:58:53
|
handsontable/handsontable
|
https://api.github.com/repos/handsontable/handsontable
|
closed
|
minSpareRows > 0 crashes in datamap.get
|
Priority: normal
|
```
for (var i = 0, ilen = sliced.length; i < ilen; i++) {
  out = out[sliced[i]];
  if (typeof out === 'undefined') {
    return null;
  }
}
```
error: out is undefined
when you specify columns in which an array is nested inside an object, like:
```
columns: [
  {data: 'geometry.coordinates.1'},
  {data: 'geometry.coordinates.0'}
]
```
In this case, because the last field in priv.settings.data has all null values, 'out' is undefined at the last iteration and so it crashes.
A possible solution is checking whether 'out' is null inside the loop:
```
for (var i = 0, ilen = sliced.length; i < ilen; i++) {
  if (!out) {
    return null;
  }
  out = out[sliced[i]];
  if (typeof out === 'undefined') {
    return null;
  }
}
```
|
1.0
|
minSpareRows > 0 crashes in datamap.get -
```
for (var i = 0, ilen = sliced.length; i < ilen; i++) {
  out = out[sliced[i]];
  if (typeof out === 'undefined') {
    return null;
  }
}
```
error: out is undefined
when you specify columns in which an array is nested inside an object, like:
```
columns: [
  {data: 'geometry.coordinates.1'},
  {data: 'geometry.coordinates.0'}
]
```
In this case, because the last field in priv.settings.data has all null values, 'out' is undefined at the last iteration and so it crashes.
A possible solution is checking whether 'out' is null inside the loop:
```
for (var i = 0, ilen = sliced.length; i < ilen; i++) {
  if (!out) {
    return null;
  }
  out = out[sliced[i]];
  if (typeof out === 'undefined') {
    return null;
  }
}
```
|
non_process
|
minsparerows crashes in datamap get for var i ilen sliced length i ilen i out out if typeof out undefined return null error out is undefined when you specify columns in which an array is inside an object like columns data geometry coordinates data geometry coordinates in this case because of the last field in priv settings data with all null values out its undefined at last iteration an so it crashes a possible solution is checking that out has a null value inside the loop for var i ilen sliced length i ilen i if out return null out out if typeof out undefined return null
| 0
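The guarded lookup proposed in the record above generalizes beyond Handsontable: walk a dotted path and bail out as soon as an intermediate value is missing. The same idea as a generic Python sketch (names are generic; this mirrors the fixed loop, not Handsontable's actual datamap code):
```python
# Mirrors the guarded datamap.get loop above: walk a dotted path and
# return None as soon as an intermediate value is missing or null.
def get_path(obj, dotted: str):
    out = obj
    for key in dotted.split("."):
        if out is None:                     # the added null guard
            return None
        if isinstance(out, (list, tuple)):  # numeric index into arrays
            try:
                out = out[int(key)]
            except (ValueError, IndexError):
                return None
        elif isinstance(out, dict):
            out = out.get(key)
        else:
            return None
    return out

row = {"geometry": {"coordinates": [11.66, 48.26]}}
print(get_path(row, "geometry.coordinates.1"))              # -> 48.26
print(get_path({"geometry": None}, "geometry.coordinates.0"))  # -> None
```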
|
7,965
| 11,147,134,021
|
IssuesEvent
|
2019-12-23 11:41:30
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Question
|
Pri2 cxp doc-enhancement machine-learning/svc team-data-science-process/subsvc triaged
|
What's the point in using blob storage as a staging area for data transfer? Why isn't the data copied directly from SQL server to SQL Azure?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7827f519-aa8a-94e2-fd41-a0bda4ae8efb
* Version Independent ID: cf4f460b-d836-1c34-3103-48171791c395
* Content: [SQL Server data to SQL Azure with Azure Data Factory - Team Data Science Process](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/move-sql-azure-adf)
* Content Source: [articles/machine-learning/team-data-science-process/move-sql-azure-adf.md](https://github.com/Microsoft/azure-docs/blob/master/articles/machine-learning/team-data-science-process/move-sql-azure-adf.md)
* Service: **machine-learning**
* Sub-service: **team-data-science-process**
* GitHub Login: @marktab
* Microsoft Alias: **tdsp**
|
1.0
|
Question - What's the point in using blob storage as a staging area for data transfer? Why isn't the data copied directly from SQL server to SQL Azure?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7827f519-aa8a-94e2-fd41-a0bda4ae8efb
* Version Independent ID: cf4f460b-d836-1c34-3103-48171791c395
* Content: [SQL Server data to SQL Azure with Azure Data Factory - Team Data Science Process](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/move-sql-azure-adf)
* Content Source: [articles/machine-learning/team-data-science-process/move-sql-azure-adf.md](https://github.com/Microsoft/azure-docs/blob/master/articles/machine-learning/team-data-science-process/move-sql-azure-adf.md)
* Service: **machine-learning**
* Sub-service: **team-data-science-process**
* GitHub Login: @marktab
* Microsoft Alias: **tdsp**
|
process
|
question what s the point in using blob storage as a staging area for data transfer why isn t the data copied directly from sql server to sql azure document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service machine learning sub service team data science process github login marktab microsoft alias tdsp
| 1
|
21,283
| 28,455,678,875
|
IssuesEvent
|
2023-04-17 06:52:00
|
TUM-Dev/NavigaTUM
|
https://api.github.com/repos/TUM-Dev/NavigaTUM
|
opened
|
[Entry] [5505.01.501]: Edit coordinate
|
entry webform delete-after-processing
|
Hello, I would like to add this coordinate to the Roomfinder:
```yaml
"5505.01.501": { lat: 48.26555143737818, lon: 11.66884114191862 }
```
|
1.0
|
[Entry] [5505.01.501]: Edit coordinate - Hello, I would like to add this coordinate to the Roomfinder:
```yaml
"5505.01.501": { lat: 48.26555143737818, lon: 11.66884114191862 }
```
|
process
|
koordinate bearbeiten hallo ich möchte diese koordinate zum roomfinder hinzufügen yaml lat lon
| 1
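The record above carries a single YAML coordinate. A tiny sanity-check sketch of the kind a processing script might run before merging such webform entries (the `{room_id: {lat, lon}}` shape comes from the record; the plausibility bounds are an assumption):
```python
# Sanity-check a submitted room coordinate before merging it.
# The {room_id: {lat, lon}} shape comes from the record above; the
# plausibility bounds (rough Munich area) are an assumption.
entry = {"5505.01.501": {"lat": 48.26555143737818, "lon": 11.66884114191862}}

for room, coord in entry.items():
    lat, lon = coord["lat"], coord["lon"]
    assert -90 <= lat <= 90 and -180 <= lon <= 180, f"{room}: out of range"
    if not (47.0 < lat < 49.5 and 10.5 < lon < 13.0):
        print(f"warning: {room} looks far from the Munich campuses")
    else:
        print(f"{room}: ok ({lat:.5f}, {lon:.5f})")
```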
|
10,672
| 13,460,464,385
|
IssuesEvent
|
2020-09-09 13:39:10
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
reopened
|
possible obsoletion "actin filament REorganization"
|
PomBase cellular processes
|
There are a bunch of terms
Process GO:0090527 actin filament reorganization
Process GO:0031532 actin cytoskeleton reorganization
Process GO:2000249 regulation of actin cytoskeleton reorganization
Process GO:0030037 actin filament reorganization involved in cell cycle
Process GO:2000250 negative regulation of actin cytoskeleton reorganization
Process GO:2000251 positive regulation of actin cytoskeleton reorganization
which are an ontology "stub" and have only 23 experimental annotations.
These are problematic on a number of levels
1. how does "reorganization" differ from "regulation of organisation?"
(these are the only terms with "reorganization" in the name), i.e. how do you "re-organize" an actin filament?
2. How is the reorganization "involved in the cell cycle", presumably via cytokinesis?
3. Mixed bag of annotations… regulation of cytokinesis, some appear to be microtubule organization (definitely inconsistent annotation)
suggest "obsoletion" of the cell cycle one.
Possible merge of the others into the "actin cytoskeleton organization terms"
What do you think?
@pgaudet
|
1.0
|
possible obsoletion "actin filament REorganization" - There are a bunch of terms
Process GO:0090527 actin filament reorganization
Process GO:0031532 actin cytoskeleton reorganization
Process GO:2000249 regulation of actin cytoskeleton reorganization
Process GO:0030037 actin filament reorganization involved in cell cycle
Process GO:2000250 negative regulation of actin cytoskeleton reorganization
Process GO:2000251 positive regulation of actin cytoskeleton reorganization
which are an ontology "stub" and have only 23 experimental annotations.
These are problematic on a number of levels
1. how does "reorganization" differ from "regulation of organisation?"
(these are the only terms with "reorganization" in the name), i.e. how do you "re-organize" an actin filament?
2. How is the reorganization "involved in the cell cycle", presumably via cytokinesis?
3. Mixed bag of annotations… regulation of cytokinesis, some appear to be microtubule organization (definitely inconsistent annotation)
suggest "obsoletion" of the cell cycle one.
Possible merge of the others into the "actin cytoskeleton organization terms"
What do you think?
@pgaudet
|
process
|
possible obsoletion actin filament reorganization there are a bunch of terms process go actin filament reorganization process go actin cytoskeleton reorganization process go regulation of actin cytoskeleton reorganization process go actin filament reorganization involved in cell cycle process go negative regulation of actin cytoskeleton reorganization process go positive regulation of actin cytoskeleton reorganization which are an ontology stub and have only experimental annotations these are problematic on a number of levels how does reorganization differ from regulation of organisation these are the only terms with reorganization in the name i e how do you re organize an actin filament how is the reorganization involved in the cell cycle presumably via cytokinesis mixed bag of annotation… regulation of cytokinesis some appear to be microtubule organization definitely inconsistent annotation suggest obsoletion of the cell cycle one possible merge of the others into the actin cytoskeleton organization terms what do you think pgaudet
| 1
|
256,859
| 22,106,988,794
|
IssuesEvent
|
2022-06-01 17:46:28
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
opened
|
Failing test: X-Pack Saved Object Tagging API Integration Tests - Security and Spaces integration.x-pack/test/saved_object_tagging/api_integration/security_and_spaces/apis/get·ts - saved objects tagging API - security and spaces integration GET /api/saved_objects_tagging/tags/{id} "after all" hook for "returns expected 403 response for a_kibana_rbac_default_space_advanced_settings_read_user"
|
failed-test
|
A test failed on a tracked branch
```
ResponseError: {"took":20,"timed_out":false,"total":5,"deleted":0,"batches":1,"version_conflicts":0,"noops":0,"retries":{"bulk":0,"search":0},"throttled_millis":0,"requests_per_second":-1,"throttled_until_millis":0,"failures":[{"index":".kibana_1","id":"space:space_1","cause":{"type":"cluster_block_exception","reason":"index [.kibana_1] blocked by: [FORBIDDEN/8/index write (api)];"},"status":403},{"index":".kibana_1","id":"space:space_2","cause":{"type":"cluster_block_exception","reason":"index [.kibana_1] blocked by: [FORBIDDEN/8/index write (api)];"},"status":403},{"index":".kibana_1","id":"tag:default-space-tag-1","cause":{"type":"cluster_block_exception","reason":"index [.kibana_1] blocked by: [FORBIDDEN/8/index write (api)];"},"status":403},{"index":".kibana_1","id":"tag:default-space-tag-2","cause":{"type":"cluster_block_exception","reason":"index [.kibana_1] blocked by: [FORBIDDEN/8/index write (api)];"},"status":403},{"index":".kibana_1","id":"space_1:tag:space_1-tag-3","cause":{"type":"cluster_block_exception","reason":"index [.kibana_1] blocked by: [FORBIDDEN/8/index write (api)];"},"status":403}]}
at SniffingTransport.request (node_modules/@elastic/transport/src/Transport.ts:532:17)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Client.DeleteByQueryApi [as deleteByQuery] (node_modules/@elastic/elasticsearch/src/api/api/delete_by_query.ts:71:10)
at cleanKibanaIndices (node_modules/@kbn/es-archiver/target_node/lib/indices/kibana_index.js:104:18)
at Transform.transform [as _transform] (node_modules/@kbn/es-archiver/target_node/lib/indices/delete_index_stream.js:34:13) {
meta: {
body: {
took: 20,
timed_out: false,
total: 5,
deleted: 0,
batches: 1,
version_conflicts: 0,
noops: 0,
retries: [Object],
throttled_millis: 0,
requests_per_second: -1,
throttled_until_millis: 0,
failures: [Array]
},
statusCode: 403,
headers: {
'x-elastic-product': 'Elasticsearch',
'content-type': 'application/vnd.elasticsearch+json;compatible-with=8',
'content-length': '1112'
},
meta: {
context: null,
request: [Object],
name: 'elasticsearch-js',
connection: [Object],
attempts: 0,
aborted: false
},
warnings: [Getter]
}
}
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/16687#0181202e-ea8f-494c-bd9f-acbeb6c67dff)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Saved Object Tagging API Integration Tests - Security and Spaces integration.x-pack/test/saved_object_tagging/api_integration/security_and_spaces/apis/get·ts","test.name":"saved objects tagging API - security and spaces integration GET /api/saved_objects_tagging/tags/{id} \"after all\" hook for \"returns expected 403 response for a_kibana_rbac_default_space_advanced_settings_read_user\"","test.failCount":1}} -->
|
1.0
|
Failing test: X-Pack Saved Object Tagging API Integration Tests - Security and Spaces integration.x-pack/test/saved_object_tagging/api_integration/security_and_spaces/apis/get·ts - saved objects tagging API - security and spaces integration GET /api/saved_objects_tagging/tags/{id} "after all" hook for "returns expected 403 response for a_kibana_rbac_default_space_advanced_settings_read_user" - A test failed on a tracked branch
```
ResponseError: {"took":20,"timed_out":false,"total":5,"deleted":0,"batches":1,"version_conflicts":0,"noops":0,"retries":{"bulk":0,"search":0},"throttled_millis":0,"requests_per_second":-1,"throttled_until_millis":0,"failures":[{"index":".kibana_1","id":"space:space_1","cause":{"type":"cluster_block_exception","reason":"index [.kibana_1] blocked by: [FORBIDDEN/8/index write (api)];"},"status":403},{"index":".kibana_1","id":"space:space_2","cause":{"type":"cluster_block_exception","reason":"index [.kibana_1] blocked by: [FORBIDDEN/8/index write (api)];"},"status":403},{"index":".kibana_1","id":"tag:default-space-tag-1","cause":{"type":"cluster_block_exception","reason":"index [.kibana_1] blocked by: [FORBIDDEN/8/index write (api)];"},"status":403},{"index":".kibana_1","id":"tag:default-space-tag-2","cause":{"type":"cluster_block_exception","reason":"index [.kibana_1] blocked by: [FORBIDDEN/8/index write (api)];"},"status":403},{"index":".kibana_1","id":"space_1:tag:space_1-tag-3","cause":{"type":"cluster_block_exception","reason":"index [.kibana_1] blocked by: [FORBIDDEN/8/index write (api)];"},"status":403}]}
at SniffingTransport.request (node_modules/@elastic/transport/src/Transport.ts:532:17)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Client.DeleteByQueryApi [as deleteByQuery] (node_modules/@elastic/elasticsearch/src/api/api/delete_by_query.ts:71:10)
at cleanKibanaIndices (node_modules/@kbn/es-archiver/target_node/lib/indices/kibana_index.js:104:18)
at Transform.transform [as _transform] (node_modules/@kbn/es-archiver/target_node/lib/indices/delete_index_stream.js:34:13) {
meta: {
body: {
took: 20,
timed_out: false,
total: 5,
deleted: 0,
batches: 1,
version_conflicts: 0,
noops: 0,
retries: [Object],
throttled_millis: 0,
requests_per_second: -1,
throttled_until_millis: 0,
failures: [Array]
},
statusCode: 403,
headers: {
'x-elastic-product': 'Elasticsearch',
'content-type': 'application/vnd.elasticsearch+json;compatible-with=8',
'content-length': '1112'
},
meta: {
context: null,
request: [Object],
name: 'elasticsearch-js',
connection: [Object],
attempts: 0,
aborted: false
},
warnings: [Getter]
}
}
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/16687#0181202e-ea8f-494c-bd9f-acbeb6c67dff)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Saved Object Tagging API Integration Tests - Security and Spaces integration.x-pack/test/saved_object_tagging/api_integration/security_and_spaces/apis/get·ts","test.name":"saved objects tagging API - security and spaces integration GET /api/saved_objects_tagging/tags/{id} \"after all\" hook for \"returns expected 403 response for a_kibana_rbac_default_space_advanced_settings_read_user\"","test.failCount":1}} -->
|
non_process
|
failing test x pack saved object tagging api integration tests security and spaces integration x pack test saved object tagging api integration security and spaces apis get·ts saved objects tagging api security and spaces integration get api saved objects tagging tags id after all hook for returns expected response for a kibana rbac default space advanced settings read user a test failed on a tracked branch responseerror took timed out false total deleted batches version conflicts noops retries bulk search throttled millis requests per second throttled until millis failures blocked by status index kibana id space space cause type cluster block exception reason index blocked by status index kibana id tag default space tag cause type cluster block exception reason index blocked by status index kibana id tag default space tag cause type cluster block exception reason index blocked by status index kibana id space tag space tag cause type cluster block exception reason index blocked by status at sniffingtransport request node modules elastic transport src transport ts at processticksandrejections node internal process task queues at client deletebyqueryapi node modules elastic elasticsearch src api api delete by query ts at cleankibanaindices node modules kbn es archiver target node lib indices kibana index js at transform transform node modules kbn es archiver target node lib indices delete index stream js meta body took timed out false total deleted batches version conflicts noops retries throttled millis requests per second throttled until millis failures statuscode headers x elastic product elasticsearch content type application vnd elasticsearch json compatible with content length meta context null request name elasticsearch js connection attempts aborted false warnings first failure
| 0
|
15,186
| 18,955,752,423
|
IssuesEvent
|
2021-11-18 20:02:46
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
How to use protobuf well-known types in Bazel 4.2.1
|
type: support / not a bug (process) untriaged
|
### Problem description
I'd like to use google protobuf + bazel to set up communication between C++ and python parts of my program. I've got the following WORKSPACE file:
```
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "com_google_protobuf",
strip_prefix = "protobuf-master",
urls = ["https://github.com/protocolbuffers/protobuf/archive/master.zip"],
)
load("@com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
protobuf_deps()
```
and the BUILD file:
```
load("@com_google_protobuf//:protobuf.bzl", "cc_proto_library")
load("@com_google_protobuf//:protobuf.bzl", "py_proto_library")
cc_proto_library(
name = "container_cc_proto",
srcs = ["container.proto"],
visibility = ["//visibility:public"],
deps = ["@com_google_protobuf//:any_proto",],
)
py_proto_library(
name = "container_py_proto",
srcs = ["container.proto"],
visibility = ["//visibility:public"],
deps = ["@com_google_protobuf//:any_proto",],
)
```
The container.proto tries to `include "google/protobuf/any.proto"`. On my command `bazel build //:container_cc_proto` I see the following error:
```
ERROR: /home/*user*/sandbox/bypass/BUILD:31:17: no such target '@com_google_protobuf//:any_proto_genproto': target 'any_proto_genproto' not declared in package '' defined by /home/*some path*/external/com_google_protobuf/BUILD and referenced by '//:container_cc_proto_genproto'
```
Could anyone please tell me how to include any.proto correctly, or any other known workarounds. Thanks!
### What operating system are you running Bazel on?
> Ubuntu 20.04.2
### What's the output of `bazel info release`?
> release 4.2.1
|
1.0
|
How to use protobuf well-known types in Bazel 4.2.1 - ### Problem description
I'd like to use google protobuf + bazel to set up communication between C++ and python parts of my program. I've got the following WORKSPACE file:
```
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "com_google_protobuf",
strip_prefix = "protobuf-master",
urls = ["https://github.com/protocolbuffers/protobuf/archive/master.zip"],
)
load("@com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
protobuf_deps()
```
and the BUILD file:
```
load("@com_google_protobuf//:protobuf.bzl", "cc_proto_library")
load("@com_google_protobuf//:protobuf.bzl", "py_proto_library")
cc_proto_library(
name = "container_cc_proto",
srcs = ["container.proto"],
visibility = ["//visibility:public"],
deps = ["@com_google_protobuf//:any_proto",],
)
py_proto_library(
name = "container_py_proto",
srcs = ["container.proto"],
visibility = ["//visibility:public"],
deps = ["@com_google_protobuf//:any_proto",],
)
```
The container.proto tries to `include "google/protobuf/any.proto"`. On my command `bazel build //:container_cc_proto` I see the following error:
```
ERROR: /home/*user*/sandbox/bypass/BUILD:31:17: no such target '@com_google_protobuf//:any_proto_genproto': target 'any_proto_genproto' not declared in package '' defined by /home/*some path*/external/com_google_protobuf/BUILD and referenced by '//:container_cc_proto_genproto'
```
Could anyone please tell me how to include any.proto correctly, or any other known workarounds. Thanks!
### What operating system are you running Bazel on?
> Ubuntu 20.04.2
### What's the output of `bazel info release`?
> release 4.2.1
|
process
|
how to use protobuf well known types in bazel problem description i d like to use google protobuf bazel to set up communication between c and python parts of my program i ve got the following workspace file load bazel tools tools build defs repo http bzl http archive http archive name com google protobuf strip prefix protobuf master urls load com google protobuf protobuf deps bzl protobuf deps protobuf deps and the build file load com google protobuf protobuf bzl cc proto library load com google protobuf protobuf bzl py proto library cc proto library name container cc proto srcs visibility deps py proto library name container py proto srcs visibility deps the container proto tries to include google protobuf any proto on my command bazel build container cc proto i see the following error error home user sandbox bypass build no such target com google protobuf any proto genproto target any proto genproto not declared in package defined by home some path external com google protobuf build and referenced by container cc proto genproto could anyone please tell me how to include any proto correctly or any other known workarounds thanks what operating system are you running bazel on ubuntu what s the output of bazel info release release
| 1
|
52,344
| 22,155,746,358
|
IssuesEvent
|
2022-06-03 22:26:20
|
hashicorp/terraform-provider-azurerm
|
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
|
closed
|
Representation examples not working for Azure API operation requests or responses
|
bug crash service/api-management
|
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
Terraform v1.1.7
on windows_amd64
+ provider registry.terraform.io/hashicorp/azurerm v2.98.0
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* `azurerm_api_management_api_operation`
### Terraform Configuration Files
```
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~> 2.98.0"
}
}
required_version = ">= 1.1.7"
}
provider "azurerm" {
features {
api_management {
purge_soft_delete_on_destroy = true
}
}
}
resource "azurerm_resource_group" "matt_test" {
name = "matt_test"
location = "westus2"
}
resource "azurerm_api_management" "matts-apim" {
name = "matts-apim"
location = azurerm_resource_group.matt_test.location
resource_group_name = azurerm_resource_group.matt_test.name
publisher_name = "My Publisher"
publisher_email = "my.email@email.com"
sku_name = "Consumption_0"
}
resource "azurerm_api_management_api" "carto-replacement-api" {
name = "carto-replacement-api"
resource_group_name = azurerm_resource_group.matt_test.name
api_management_name = azurerm_api_management.matts-apim.name
revision = "1"
display_name = "CARTO Replacement"
path = "carto"
protocols = ["https"]
}
resource "azurerm_api_management_api_operation" "bbox" {
operation_id = "bbox"
api_name = azurerm_api_management_api.carto-replacement-api.name
api_management_name = azurerm_api_management_api.carto-replacement-api.api_management_name
resource_group_name = azurerm_api_management_api.carto-replacement-api.resource_group_name
display_name = "Bounding Box"
method = "POST"
url_template = "/api/v1/data/bbox"
description = "Returns datasets from a box specified by opposite corners"
request {
description = ""
header {
name = "Content-Type"
required = "false"
type = "string"
values = ["application/json"]
}
header {
name = "Authorization"
required = "true"
type = "string"
}
representation {
content_type = "application/json"
example {
name = "response example"
value = "{\"coordinates\":[[-121.2,42.2],[-122.4,43.4]],\"datasets\":[\"10m_terrain\"]}"
}
}
}
response {
status_code = 200
}
response {
status_code = 201
representation {
content_type = "application/json"
example {
name = "Response example"
value = <<JSON
{"job_ids":["0bd62539-7975-4872-ba8c-75e756296d76"]}
JSON
}
}
}
}
resource "azurerm_api_management_api_operation_policy" "bbox_inbound_policy" {
api_name = azurerm_api_management_api_operation.bbox.api_name
api_management_name = azurerm_api_management_api_operation.bbox.api_management_name
resource_group_name = azurerm_api_management_api_operation.bbox.resource_group_name
operation_id = azurerm_api_management_api_operation.bbox.operation_id
xml_content = <<XML
<policies>
<inbound>
<base />
<mock-response status-code="201" content-type="application/json" />
</inbound>
<backend>
<base />
</backend>
<outbound>
<base />
</outbound>
<on-error>
<base />
</on-error>
</policies>
XML
}
```
### Expected Behaviour
An API with a "Bounding Box" operation should have been created. The Request should have one representation added with a Sample including the JSON as specified in the above terraform. The 201 Response should also have a representation added with the Sample field populated.
### Actual Behaviour
The API is created and both the Request and 201 Response have a representation added; however the Sample field for both representations is empty.
### Steps to Reproduce
1. `terraform apply`
### Important Factoids
I can do this exact thing manually so I suspect there is either a bug in the `representation.example` code being applied or I specified it incorrectly, in which case the documentation is incomplete and the examples insufficient for me to follow.
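As a diagnostic aid (not part of the report above), one way to check what actually landed in API Management is to read the operation back through the Python management SDK and dump its representations. This is a sketch assuming the `azure-identity` and `azure-mgmt-apimanagement` packages; the resource names mirror the configuration above, and `as_dict()` is used to avoid guessing whether the field is called `sample` or `examples` in the API version at hand:
```
from azure.identity import DefaultAzureCredential
from azure.mgmt.apimanagement import ApiManagementClient

client = ApiManagementClient(DefaultAzureCredential(), "<subscription-id>")
op = client.api_operation.get(
    resource_group_name="matt_test",
    service_name="matts-apim",
    api_id="carto-replacement-api",
    operation_id="bbox",
)

# Dump every representation on the request and the responses; if the
# example text is missing here, the provider never sent it to Azure.
for rep in op.request.representations or []:
    print(rep.as_dict())
for resp in op.responses or []:
    for rep in resp.representations or []:
        print(rep.as_dict())
```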
|
1.0
|
Representation examples not working for Azure API operation requests or responses - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
Terraform v1.1.7
on windows_amd64
+ provider registry.terraform.io/hashicorp/azurerm v2.98.0
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* `azurerm_api_management_api_operation`
### Terraform Configuration Files
```
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~> 2.98.0"
}
}
required_version = ">= 1.1.7"
}
provider "azurerm" {
features {
api_management {
purge_soft_delete_on_destroy = true
}
}
}
resource "azurerm_resource_group" "matt_test" {
name = "matt_test"
location = "westus2"
}
resource "azurerm_api_management" "matts-apim" {
name = "matts-apim"
location = azurerm_resource_group.matt_test.location
resource_group_name = azurerm_resource_group.matt_test.name
publisher_name = "My Publisher"
publisher_email = "my.email@email.com"
sku_name = "Consumption_0"
}
resource "azurerm_api_management_api" "carto-replacement-api" {
name = "carto-replacement-api"
resource_group_name = azurerm_resource_group.matt_test.name
api_management_name = azurerm_api_management.matts-apim.name
revision = "1"
display_name = "CARTO Replacement"
path = "carto"
protocols = ["https"]
}
resource "azurerm_api_management_api_operation" "bbox" {
operation_id = "bbox"
api_name = azurerm_api_management_api.carto-replacement-api.name
api_management_name = azurerm_api_management_api.carto-replacement-api.api_management_name
resource_group_name = azurerm_api_management_api.carto-replacement-api.resource_group_name
display_name = "Bounding Box"
method = "POST"
url_template = "/api/v1/data/bbox"
description = "Returns datasets from a box specified by opposite corners"
request {
description = ""
header {
name = "Content-Type"
required = "false"
type = "string"
values = ["application/json"]
}
header {
name = "Authorization"
required = "true"
type = "string"
}
representation {
content_type = "application/json"
example {
name = "response example"
value = "{\"coordinates\":[[-121.2,42.2],[-122.4,43.4]],\"datasets\":[\"10m_terrain\"]}"
}
}
}
response {
status_code = 200
}
response {
status_code = 201
representation {
content_type = "application/json"
example {
name = "Response example"
value = <<JSON
{"job_ids":["0bd62539-7975-4872-ba8c-75e756296d76"]}
JSON
}
}
}
}
resource "azurerm_api_management_api_operation_policy" "bbox_inbound_policy" {
api_name = azurerm_api_management_api_operation.bbox.api_name
api_management_name = azurerm_api_management_api_operation.bbox.api_management_name
resource_group_name = azurerm_api_management_api_operation.bbox.resource_group_name
operation_id = azurerm_api_management_api_operation.bbox.operation_id
xml_content = <<XML
<policies>
<inbound>
<base />
<mock-response status-code="201" content-type="application/json" />
</inbound>
<backend>
<base />
</backend>
<outbound>
<base />
</outbound>
<on-error>
<base />
</on-error>
</policies>
XML
}
```
### Expected Behaviour
An API with a "Bounding Box" operation should have been created. The Request should have one representation added with a Sample including the JSON as specified in the above terraform. The 201 Response should also have a representation added with the Sample field populated.
### Actual Behaviour
The API is created and both the Request and 201 Response have a representation added; however the Sample field for both representations is empty.
### Steps to Reproduce
1. `terraform apply`
### Important Factoids
I can do this exact thing manually so I suspect there is either a bug in the `representation.example` code being applied or I specified it incorrectly, in which case the documentation is incomplete and the examples insufficient for me to follow.
|
non_process
|
representation examples not working for azure api operation requests or responses community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider version terraform on windows provider registry terraform io hashicorp azurerm affected resource s azurerm api management api operation terraform configuration files terraform required providers azurerm source hashicorp azurerm version required version provider azurerm features api management purge soft delete on destroy true resource azurerm resource group matt test name matt test location resource azurerm api management matts apim name matts apim location azurerm resource group matt test location resource group name azurerm resource group matt test name publisher name my publisher publisher email my email email com sku name consumption resource azurerm api management api carto replacement api name carto replacement api resource group name azurerm resource group matt test name api management name azurerm api management matts apim name revision display name carto replacement path carto protocols resource azurerm api management api operation bbox operation id bbox api name azurerm api management api carto replacement api name api management name azurerm api management api carto replacement api api management name resource group name azurerm api management api carto replacement api resource group name display name bounding box method post url template api data bbox description returns datasets from a box specified by opposite corners request description header name content type required false type string values header name authorization required true type string representation content type application json example name response example value coordinates datasets response status code response status code representation content type application json example name response example value json job ids json resource azurerm api management api operation policy bbox inbound policy api name azurerm api management api operation bbox api name api management name azurerm api management api operation bbox api management name resource group name azurerm api management api operation bbox resource group name operation id azurerm api management api operation bbox operation id xml content xml xml expected behaviour an api with a bounding box operation should have been created the request should have one representation added with a sample including the json as specified in the above terraform the response should also have a representation added with the sample field populated actual behaviour the api is created and both the request and response have a representation added however the sample field for both representations is empty steps to reproduce terraform apply important factoids i can do this exact thing manually so i suspect there is either a bug in the representation example code being applied or i specified it incorrectly in which case the documentation is incomplete and the examples insufficient for me to follow
| 0
|
17,176
| 22,752,547,219
|
IssuesEvent
|
2022-07-07 14:09:31
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
'Third-party application access via OAuth' must be enabled (non-default) in order for this to work
|
automation/svc triaged cxp doc-enhancement process-automation/subsvc Pri2
|
I was trying to set this up with a brand new Azure DevOps organisation and Azure Automation today and got 'SourceControls securityToken in invalid'
This is similar to the issue described here:
https://docs.microsoft.com/en-us/answers/questions/813444/an-error-occurred-while-creating-the-source-contro.html
I found that I needed to enable the 'Third-party application access via OAuth' in Azure DevOps 'Organization Settings / Policies / Application connection policies' in order for the connection to be created.
According to the Azure DevOps documentation this policy is 'defaulted to _off_ for all new organizations.'
https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops#application-connection-policies
Once this option was enabled it all worked fine. It would be good if the Azure Automation documentation could be updated to reflect this necessary prerequisite.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 83c90e64-b615-711f-a53d-fc76606e2ecd
* Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea
* Content: [Use source control integration in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/source-control-integration)
* Content Source: [articles/automation/source-control-integration.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/source-control-integration.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SGSneha
* Microsoft Alias: **sudhirsneha**
|
1.0
|
'Third-party application access via OAuth' must be enabled (non-default) in order for this to work - I was trying to set this up with a brand new Azure DevOps organisation and Azure Automation today and got 'SourceControls securityToken in invalid'
This is similar to the issue described here:
https://docs.microsoft.com/en-us/answers/questions/813444/an-error-occurred-while-creating-the-source-contro.html
I found that I needed to enable the 'Third-party application access via OAuth' in Azure DevOps 'Organization Settings / Policies / Application connection policies' in order for the connection to be created.
According to the Azure DevOps documentation this policy is 'defaulted to _off_ for all new organizations.'
https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops#application-connection-policies
Once this option was enabled it all worked fine. It would be good if the Azure Automation documentation could be updated to reflect this necessary prerequisite.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 83c90e64-b615-711f-a53d-fc76606e2ecd
* Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea
* Content: [Use source control integration in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/source-control-integration)
* Content Source: [articles/automation/source-control-integration.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/source-control-integration.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SGSneha
* Microsoft Alias: **sudhirsneha**
|
process
|
third party application access via oauth must be enabled non default in order for this to work i was trying to set this up with a brand new azure devops organisation and azure automation today and got sourcecontrols securitytoken in invalid this is similar to the issue described here i found that i needed to enable the third party application access via oauth in azure devops organization settings policies application connection policies in order for the connection to be created according to the azure devops documentation this policy is defaulted to off for all new organizations once this option was enabled it all worked fine it would be good if the azure automation documentation could be updated to reflect this necessary prerequisite document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login sgsneha microsoft alias sudhirsneha
| 1
|
12,763
| 15,116,341,626
|
IssuesEvent
|
2021-02-09 06:33:41
|
yuta252/startlens_learning
|
https://api.github.com/repos/yuta252/startlens_learning
|
closed
|
Fetching sample data for the TripletLoss model
|
dev process
|
## Overview
The input sample data passed to the model's input layer needs to be generated from the image files processed and edited in #1.
A GenerateSample class was implemented for this.
## Changes
- Added input_generator.py
## How the sample data is generated
TripletLoss is a method that trains the deep-learning parameters so that the distance between anchor_input and positive_input becomes smaller while the distance between anchor_input and negative_input becomes larger. The input layer therefore needs three sample inputs.
* **anchor_input** : a sample from an arbitrary class
* **positive_input** : a sample from the same class as anchor_input
* **negative_input** : a sample from a different class than anchor_input
1. Extract the classification class labels from the file paths fetched from S3 and build a 1:1 mapping.
2. Randomly pick the anchor class based on the per-class data ratio, and pick a negative class as well.
3. Fetch the anchor, positive, and negative images from S3 into memory based on their file paths. Here, Pillow is used to resize the images to 224x224 to fit the input layer and to flip them for data augmentation. A minimal sketch of this sampling logic follows below.
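A minimal Python sketch of the sampling described above (the helper name sample_triplet is hypothetical; the real implementation lives in input_generator.py and additionally loads the images from S3 and resizes/flips them with Pillow):
```
import random

def sample_triplet(paths_by_class):
    """paths_by_class maps a class label to the list of its S3 file paths.
    Every class is assumed to hold at least two samples."""
    classes = list(paths_by_class)
    # Pick the anchor class weighted by how much data each class holds.
    weights = [len(paths_by_class[c]) for c in classes]
    anchor_cls = random.choices(classes, weights=weights, k=1)[0]
    # The negative class is any class other than the anchor class.
    negative_cls = random.choice([c for c in classes if c != anchor_cls])
    # Anchor and positive come from the same class, without replacement.
    anchor, positive = random.sample(paths_by_class[anchor_cls], 2)
    negative = random.choice(paths_by_class[negative_cls])
    return anchor, positive, negative
```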
|
1.0
|
Fetching sample data for the TripletLoss model - ## Overview
The input sample data passed to the model's input layer needs to be generated from the image files processed and edited in #1.
A GenerateSample class was implemented for this.
## Changes
- Added input_generator.py
## How the sample data is generated
TripletLoss is a method that trains the deep-learning parameters so that the distance between anchor_input and positive_input becomes smaller while the distance between anchor_input and negative_input becomes larger. The input layer therefore needs three sample inputs.
* **anchor_input** : a sample from an arbitrary class
* **positive_input** : a sample from the same class as anchor_input
* **negative_input** : a sample from a different class than anchor_input
1. Extract the classification class labels from the file paths fetched from S3 and build a 1:1 mapping.
2. Randomly pick the anchor class based on the per-class data ratio, and pick a negative class as well.
3. Fetch the anchor, positive, and negative images from S3 into memory based on their file paths. Here, Pillow is used to resize the images to 224x224 to fit the input layer and to flip them for data augmentation.
|
process
|
fetching sample data for the tripletloss model overview the input sample data passed to the model s input layer needs to be generated from the image files processed and edited in a generatesample class was implemented for this changes added input generator py how the sample data is generated tripletloss is a method that trains the deep learning parameters so that the distance between anchor input and positive input becomes smaller while the distance between anchor input and negative input becomes larger the input layer therefore needs three sample inputs anchor input a sample from an arbitrary class positive input a sample from the same class as anchor input negative input a sample from a different class than anchor input extract the classification class labels from the file paths fetched from s3 and build a mapping randomly pick the anchor class based on the per class data ratio and pick a negative class as well fetch the anchor positive and negative images from s3 into memory based on their file paths here pillow is used to resize the images to fit the input layer and to flip them for data augmentation
| 1
|
16,333
| 20,990,575,671
|
IssuesEvent
|
2022-03-29 08:56:03
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
[Filebeat] decode_cef - recover from errors in the CEF header
|
bug Filebeat :Processors Team:Security-External Integrations
|
It would be nice if the CEF parser would recover from errors detected in the CEF header and try to resume parsing the CEF extensions. For example the header on this message is incomplete, but the remainder of the CEF extensions are good.
`Feb 11 19:12:22 ec2-54-211-162-22 2022-02-11 19:12:22,962 sentinel - CEF:0|SentinelOne|Mgmt|activityID=1111111111111111111 activityType=3505 siteId=None siteName=None accountId=1222222222222222222 accountName=foo-bar mdr notificationScope=ACCOUNT`
The expected behavior is that there would be an `error.message` in the event because the message is not valid per the spec, but all of the `cef.extension` values would be present. This is what you get today.
```
{
"cef": {
"device": {
"product": "Mgmt",
"vendor": "SentinelOne"
},
"version": "0"
},
"error": {
"message": "unexpected end of CEF event"
},
"observer": {
"product": "Mgmt",
"vendor": "SentinelOne"
}
}
```
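As a toy illustration of the requested behavior (plain Python, not the Go implementation in beats, and ignoring that real CEF extension values may contain escaped spaces): the key=value extensions at the end of the line stay recoverable even when the pipe-delimited header is truncated.
```
import re

def parse_cef_lenient(line):
    """Parse a CEF line, keeping extensions even if the header is short."""
    fields = line[line.find("CEF:") + 4:].split("|")
    # A well-formed event has version + 6 header fields + the extensions,
    # i.e. 8 pipe-separated parts after the "CEF:" prefix.
    error = None if len(fields) >= 8 else "unexpected end of CEF event"
    extensions = dict(re.findall(r"(\w+)=(\S+)", fields[-1]))
    return {"error": error, "extensions": extensions}

print(parse_cef_lenient(
    "CEF:0|SentinelOne|Mgmt|activityID=111 activityType=3505 siteId=None"
))
# error is set, yet activityID/activityType/siteId are all parsed
```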
|
1.0
|
[Filebeat] decode_cef - recover from errors in the CEF header - It would be nice if the CEF parser would recover from errors detected in the CEF header and try to resume parsing the CEF extensions. For example the header on this message is incomplete, but the remainder of the CEF extensions are good.
`Feb 11 19:12:22 ec2-54-211-162-22 2022-02-11 19:12:22,962 sentinel - CEF:0|SentinelOne|Mgmt|activityID=1111111111111111111 activityType=3505 siteId=None siteName=None accountId=1222222222222222222 accountName=foo-bar mdr notificationScope=ACCOUNT`
The expected behavior is that there would be an `error.message` in the event because the message is not valid per the spec, but all of the `cef.extension` values would be present. This is what you get today.
```
{
"cef": {
"device": {
"product": "Mgmt",
"vendor": "SentinelOne"
},
"version": "0"
},
"error": {
"message": "unexpected end of CEF event"
},
"observer": {
"product": "Mgmt",
"vendor": "SentinelOne"
}
}
```
|
process
|
decode cef recover from errors in the cef header it would be nice if the cef parser would recover from errors detected in the cef header and try to resume parsing the cef extensions for example the header on this message is incomplete but the remainder of the cef extensions are good feb sentinel cef sentinelone mgmt activityid activitytype siteid none sitename none accountid accountname foo bar mdr notificationscope account the expected behavior is that there would be an error message in the event because the message is not valid per the spec but all of the cef extension values would be present this is what you get today cef device product mgmt vendor sentinelone version error message unexpected end of cef event observer product mgmt vendor sentinelone
| 1
|
10,431
| 13,219,872,872
|
IssuesEvent
|
2020-08-17 11:19:37
|
bisq-network/proposals
|
https://api.github.com/repos/bisq-network/proposals
|
closed
|
Create financial reserves and fix a resource leak
|
a:proposal re:processes
|
> _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://docs.bisq.network/proposals.html)._
- there is a leak in Bisq that can cause Bisq revenue (created by trading fees paid in BTC) to not reach contributors
- fix it by stopping public trade events
- this supersedes https://github.com/bisq-network/proposals/issues/207
# Nomenclature
- **contributor** someone who spends her time and resources to evolve Bisq and the DAO. Examples are developers, support staff, marketing staff, team leads, first-timers, ...
- **speculative trader** someone who uses Bisq as an exchange. Pays her trading fees but does not add direct value to the Bisq software or the Bisq DAO.
# Problem Statement
There is a resource leak in Bisq that can cause Bisq revenue to not reach contributors. In detail, the issue affects the part of Bisq revenue that is created by trading fees which are paid in BTC. These fees go to Bisqs donation address and are later used to burn BSQ.
## Leak description
The burning man process is to buy at top price. What first seems like a legit solution shows some drawbacks when viewed from a contributors perspective.
Imagine the following scenario:
- A trader, Alice, puts up a buy offer over 1k BSQ for a price of 0,...6 BTC/BSQ.
- A contributor, Bob, takes the offer.
- Alice put up a sell offer over 1k BSQ for a price of 0,...9 BTC/BSQ.
- The burning man takes the offer.
As a result, the Bisq revenue is spread like
- Bob, the contributor, gets 0,...6 BTC
- Alice, the trader gets 0,...3 BTC
- from Bisq revenue generated by BTC trading fees.
## Leak effects
The leak is not new, however, as more and more speculative traders join the BSQ market, its effects become more apparent and severe. Effects are
- Bisq revenue goes to speculative traders instead of contributors
- contributors receive less USD compensation
- Bisq might start losing contributors
- if that happens, Bisq is dead
# Proposal
- stop doing trading events
- leave profit (if there is any) at Bisqs donation address
- use the profit to refund the refund agent (nothing changed)
- if there is anything left, keep it at Bisqs donation address
- for future use in refunding the refund agent (we had multiple trading events in the past where not even the refund agent could be fully refunded)
- for future use as source for compensating victims of incoming attacks
##### How does that address the problem?
Well, if the donation address does not participate in the free market, revenue cannot go to contributors but neither can it go to speculative traders. Hence, there is no leak.
##### But then contributors cannot have it either!
Well, yes. However, following the recent (small!) security incident, a large part of Bisqs revenue is to be used to refund victims anyways. Eventually, there will be another incident with the needs of refunding the victims.
##### What if no new incident happens?
There are multiple options of course:
- keep it, create a "reserve" for next _insert time period here_
- distribute it among contributors somehow
- all of the above
##### What happens to the burning man?
The burning man role requires less efforts on scheduling and creating public trade events
##### What about https://github.com/bisq-network/proposals/issues/209
There has been this [40% number](https://github.com/bisq-network/proposals/issues/209#issuecomment-613289972). We can amend that by doing for example
- we can have 10% reserves (this proposal), 40% refund, 50% contributors out of the total Bisq revenue
- we can have 80% of revenue created by trading fees paid in BTC routed to the victims of the recent incident and 20% to be kept in reserve.
- other options are possible as well
# Closing notes
This proposal is to communicate and hopefully agree on two basic things:
- create awareness of the resource leak (and present a fix)
- introduce the concept of "financial reserves" in order to up the chances of Bisq being able to cope with future attacks.
Implementation details should be discussed separately.
|
1.0
|
Create financial reserves and fix a resource leak - > _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://docs.bisq.network/proposals.html)._
- there is a leak in Bisq that can cause Bisq revenue (created by trading fees paid in BTC) to not reach contributors
- fix it by stopping public trade events
- this supersedes https://github.com/bisq-network/proposals/issues/207
# Nomenclature
- **contributor** someone who spends her time and resources to evolve Bisq and the DAO. Examples are developers, support staff, marketing staff, team leads, first-timers, ...
- **speculative trader** someone who uses Bisq as an exchange. Pays her trading fees but does not add direct value to the Bisq software or the Bisq DAO.
# Problem Statement
There is a resource leak in Bisq that can cause Bisq revenue to not reach contributors. In detail, the issue affects the part of Bisq revenue that is created by trading fees which are paid in BTC. These fees go to Bisqs donation address and are later used to burn BSQ.
## Leak description
The burning man process is to buy at top price. What first seems like a legit solution shows some drawbacks when viewed from a contributors perspective.
Imagine the following scenario:
- A trader, Alice, puts up a buy offer over 1k BSQ for a price of 0,...6 BTC/BSQ.
- A contributor, Bob, takes the offer.
- Alice put up a sell offer over 1k BSQ for a price of 0,...9 BTC/BSQ.
- The burning man takes the offer.
As a result, the Bisq revenue is spread like
- Bob, the contributor, gets 0,...6 BTC
- Alice, the trader gets 0,...3 BTC
- from Bisq revenue generated by BTC trading fees.
## Leak effects
The leak is not new, however, as more and more speculative traders join the BSQ market, its effects become more apparent and severe. Effects are
- Bisq revenue goes to speculative traders instead of contributors
- contributors receive less USD compensation
- Bisq might start losing contributors
- if that happens, Bisq is dead
# Proposal
- stop doing trading events
- leave profit (if there is any) at Bisqs donation address
- use the profit to refund the refund agent (nothing changed)
- if there is anything left, keep it at Bisqs donation address
- for future use in refunding the refund agent (we had multiple trading events in the past where not even the refund agent could be fully refunded)
- for future use as source for compensating victims of incoming attacks
##### How does that address the problem?
Well, if the donation address does not participate in the free market, revenue cannot go to contributors but neither can it go to speculative traders. Hence, there is no leak.
##### But then contributors cannot have it either!
Well, yes. However, following the recent (small!) security incident, a large part of Bisqs revenue is to be used to refund victims anyways. Eventually, there will be another incident with the needs of refunding the victims.
##### What if no new incident happens?
There are multiple options of course:
- keep it, create a "reserve" for next _insert time period here_
- distribute it among contributors somehow
- all of the above
##### What happens to the burning man?
The burning man role requires less efforts on scheduling and creating public trade events
##### What about https://github.com/bisq-network/proposals/issues/209
There has been this [40% number](https://github.com/bisq-network/proposals/issues/209#issuecomment-613289972). We can amend that by doing for example
- we can have 10% reserves (this proposal), 40% refund, 50% contributors out of the total Bisq revenue
- we can have 80% of revenue created by trading fees paid in BTC routed to the victims of the recent incident and 20% to be kept in reserve.
- other options are possible as well
# Closing notes
This proposal is to communicate and hopefully agree on two basic things:
- create awareness of the resource leak (and present a fix)
- introduce the concept of "financial reserves" in order to up the chances of Bisq being able to cope with future attacks.
Implementation details should be discussed separately.
|
process
|
create financial reserves and fix a resource leak this is a bisq network proposal please familiarize yourself with the there is a leak in bisq that can cause bisq revenue created by trading fees payed in btc to not reach contributors fix it by stopping public trade events this supersedes nomenclature contributor someone who spends her time and resources to evolve bisq and the dao examples are developers support staff marketing staff team leads first timers speculative trader someone who uses bisq as an exchange pays her trading fees but does not add direct value to the bisq software or the bisq dao problem statement there is a resource leak in bisq that can cause bisq revenue to not reach contributors in detail the issue affects the part of bisq revenue that is created by trading fees which are payed in btc these fees go to bisqs donation address and are later used to burn bsq leak description the burning man process is to buy at top price what first seems like a legit solution shows some drawbacks when viewed from a contributors perspective imagine the following scenario a trader alice puts up a buy offer over bsq for a price of btc bsq a contributor bob takes the offer alice put up a sell offer over bsq for a price of btc bsq the burning man takes the offer as a result the bisq revenue is spread like bob the contributor gets btc alice the trader gets btc from bisq revenue generated by btc trading fees leak effects the leak is not new however as more and more speculative traders join the bsq market its effects become more apparent and severe effects are bisq revenue goes to speculative traders instead of contributors contributors receive less usd compensation bisq might start loosing contributors if that happens bisq is dead proposal stop doing trading events leave profit if there is any at bisqs donation address use the profit to refund the refund agent nothing changed if there is anything left keep it at bisqs donation address for future use in refunding the refund agent we had multiple trading events in the past where not even the refund agent could be fully refunded for future use as source for compensating victims of incoming attacks how does that address the problem well if the donation address does not participate in the free market revenue cannot go to contributors but neither can it go to speculative traders hence there is no leak but then contributors cannot have it either well yes however following the recent small security incident a large part of bisqs revenue is to be used to refund victims anyways eventually there will be another incident with the needs of refunding the victims what if no new incident happens there are multiple options of course keep it create a reserve for next insert time period here distribute it among contributors somehow all of the above what happens to the burning man the burning man role requires less efforts on scheduling and creating public trade events what about there has been this we can amend that by doing for example we can have reserves this proposal refund contributors out of the total bisq revenue we can have of revenue created by trading fees payed in btc routed to the victims of the recent incident and to be kept in reserve other options are possible as well closing notes this proposal is to communicate and hopefully agree on two basic things create awareness of the resource leak and present a fix introduce the concept of financial reserves in order to up the chances of bisq being able to cope with future attacks implementation details should be discussed separately
| 1
|
8,697
| 11,839,973,467
|
IssuesEvent
|
2020-03-23 18:01:58
|
prusa3d/PrusaSlicer
|
https://api.github.com/repos/prusa3d/PrusaSlicer
|
closed
|
Slicer can not slice anymore when deleting models while post processing script runs
|
background processing
|
### Version
1.36.2
### Operating system type + version
Mac OS 10.12.5
### Behavior
Export gcode; while the post-processing script runs, click delete all. Drop another model into Slic3r; the slice now button remains deactivated.
I'm using GPX post-processing and some magic to add information about material needs and time requirements into the file name. This is a quite lengthy process, and the risk of deleting models by accident during the processing is quite high.
|
1.0
|
Slicer can not slice anymore when deleting models while post processing script runs - ### Version
1.36.2
### Operating system type + version
Mac OS 10.12.5
### Behavior
Export gcode; while the post-processing script runs, click delete all. Drop another model into Slic3r; the slice now button remains deactivated.
I'm using GPX post-processing and some magic to add information about material needs and time requirements into the file name. This is a quite lengthy process, and the risk of deleting models by accident during the processing is quite high.
|
process
|
slicer can not slice anymore when deleting models while post processing script runs version operating system type version mac os behavior export gcode while post processing runs click delete all drop another model into slice now button remains deactivated i m using a gpx post processing and some magic to add information about material need and time requirements into the file name this is a quite lengthy process and the risk of deleting models by accident during the processing is quite high
| 1
|
4,266
| 7,189,385,934
|
IssuesEvent
|
2018-02-02 13:51:57
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Weird idea for syncing an Ethereum node
|
apps-blockScrape status-inprocess type-enhancement
|
1. Find out how far back the --warp node stores full blocks and balance info
2. Run a full, archive node, and store all the blocks in QuickBlocks
3. After catching up to the --warp node, the archive node can be shut down (since QuickBlocks has a backed up history).
4. Shut down the archive node and work only with the --warp node
Result--much smaller, much faster full access to all the data.
|
1.0
|
Weird idea for syncing an Ethereum node - 1. Find out how far back the --warp node stores full blocks and balance info
2. Run a full, archive node, and store all the blocks in QuickBlocks
3. After catching up to the --warp node, the archive node can be shut down (since QuickBlocks has a backed up history).
4. Shut down the archive node and work only with the --warp node
Result--much smaller, much faster full access to all the data.
|
process
|
weird idea for syncing an ethereum node find out how far back the warp node stores full blocks and balance info run a full archive node and store all the blocks in quickblocks after catching up to the warp node the archive node can be shut down since quickblocks has a backed up history shut down the archive node and work only with the warp node result much smaller much faster full access to all the data
| 1
|
20,419
| 27,080,522,041
|
IssuesEvent
|
2023-02-14 13:44:23
|
evidence-dev/evidence
|
https://api.github.com/repos/evidence-dev/evidence
|
closed
|
Improved testing in development workspace
|
dev-process
|
Goal: it's clear to contributors how to add tests to the development workspace. The immediate goal here is _not_ high test coverage, just getting the systems in place in the development workspace.
Tests should include at least one example of:
- [ ] Unit testing supporting modules
- [ ] Unit testing an internal API
- [x] Browser testing components and/or a page in the dev workspace.
Playwright for browser testing and Vitest for unit testing seem popular in other Svelte projects, but there is no strong preference here.
Tests should run in CI/CD along with existing suite.
Additional considerations
* We should have a mechanism for tagging a subset of tests to run with different database connections (different environment variables set in the test runner)
|
1.0
|
Improved testing in development workspace - Goal: it's clear to contributors how to add tests to the development workspace. The immediate goal here is _not_ high test coverage, just getting the systems in place in the development workspace.
Tests should include at least one example of:
- [ ] Unit testing supporting modules
- [ ] Unit testing an internal API
- [x] Browser testing components and/or a page in the dev workspace.
Playwright for browser testing and Vitest for unit testing seem popular in other Svelte projects, but there is no strong preference here.
Tests should run in CI/CD along with existing suite.
Additional considerations
* We should have a mechanism for tagging a subset of tests to run with different database connections (different environment variables set in the test runner)
|
process
|
improved testing in development workspace it s clear to contributors how to add tests to the development workspace immediate goal here is not high test coverage just getting the systems in place in the development workspace tests should include at least one example of unit testing supporting modules unit testing an internal api browser testing components and or a page in the dev workspace playwright for browser testing and vitetest for unit testing seem popular in other svelte projects but no strong preference here tests should run in ci cd along with existing suite additional considerations we should have a mechanism for tagging a subset of tests to run with different database connections different environment variables set in the test runner
| 1
|
14,039
| 16,845,531,615
|
IssuesEvent
|
2021-06-19 11:54:53
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Performance regression: Process.GetCurrentProcess().Dispose() on Windows
|
area-System.Diagnostics.Process tenet-performance up-for-grabs
|
It looks like `Process.GetCurrentProcess()` has regressed on Windows.
```cmd
git clone https://github.com/dotnet/performance.git
cd performance
# if you don't have cli installed and want python script to download the latest cli for you
py .\scripts\benchmarks_ci.py -f netcoreapp2.2 netcoreapp3.0 --filter System.Diagnostics.Perf_Process.GetCurrentProcess
# if you do
dotnet run -p .\src\benchmarks\micro\MicroBenchmarks.csproj -c Release -f netcoreapp2.2 --runtimes netcoreapp2.2 netcoreapp3.0 --filter System.Diagnostics.Perf_Process.GetCurrentProcess
```
It's also surprising that the method is 4 times slower on Ubuntu 18 compared to Ubuntu 16.
## System.Diagnostics.Perf_Process.GetCurrentProcess
| conclusion | Base | Diff | Base/Diff | Modality | Operating System | Arch | Processor Name | Base Runtime | Diff Runtime |
| ---------- | ------:| ------:| ---------:| --------:| -------------------- | ----- | ------------------------------------------- | --------------- | --------------------------------- |
| Same | 445.90 | 459.75 | 0.97 | | ubuntu 18.04 | 64bit | Intel Xeon CPU E5-1650 v4 3.60GHz | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
| Slower | 85.47 | 105.94 | 0.81 | | Windows 10.0.18362 | 64bit | Intel Xeon CPU E5-1650 v4 3.60GHz | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
| Slower | 165.85 | 202.40 | 0.82 | | ubuntu 16.04 | 64bit | Intel Xeon CPU E5-2673 v4 2.30GHz | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-01|
| Same | 792.90 | 812.41 | 0.98 | | ubuntu 18.04 | 64bit | Intel Xeon CPU E5-2673 v4 2.30GHz | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-01|
| Slower | 119.86 | 130.78 | 0.92 | | macOS Mojave 10.14.5 | 64bit | Intel Core i7-5557U CPU 3.10GHz (Broadwell) | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
| Slower | 100.94 | 127.77 | 0.79 | | Windows 10.0.18362 | 64bit | Intel Core i7-5557U CPU 3.10GHz (Broadwell) | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
| Slower | 77.25 | 97.51 | 0.79 | | Windows 10.0.18362 | 64bit | Intel Core i7-7700 CPU 3.60GHz (Kaby Lake) | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
| Slower | 94.54 | 106.37 | 0.89 | | Windows 10.0.18362 | 64bit | AMD Ryzen 7 1800X | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
| Slower | 91.95 | 107.41 | 0.86 | | Windows 10.0.18362 | 32bit | Intel Xeon CPU E5-1650 v4 3.60GHz | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
/cc @danmosemsft @billwert @DrewScoggins
|
1.0
|
Performance regression: Process.GetCurrentProcess().Dispose() on Windows - It looks like `Process.GetCurrentProcess()` has regressed on Windows.
```cmd
git clone https://github.com/dotnet/performance.git
cd performance
# if you don't have cli installed and want python script to download the latest cli for you
py .\scripts\benchmarks_ci.py -f netcoreapp2.2 netcoreapp3.0 --filter System.Diagnostics.Perf_Process.GetCurrentProcess
# if you do
dotnet run -p .\src\benchmarks\micro\MicroBenchmarks.csproj -c Release -f netcoreapp2.2 --runtimes netcoreapp2.2 netcoreapp3.0 --filter System.Diagnostics.Perf_Process.GetCurrentProcess
```
It's also surprising that the method is 4 times slower on Ubuntu 18 compared to Ubuntu 16.
## System.Diagnostics.Perf_Process.GetCurrentProcess
| conclusion | Base | Diff | Base/Diff | Modality | Operating System | Arch | Processor Name | Base Runtime | Diff Runtime |
| ---------- | ------:| ------:| ---------:| --------:| -------------------- | ----- | ------------------------------------------- | --------------- | --------------------------------- |
| Same | 445.90 | 459.75 | 0.97 | | ubuntu 18.04 | 64bit | Intel Xeon CPU E5-1650 v4 3.60GHz | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
| Slower | 85.47 | 105.94 | 0.81 | | Windows 10.0.18362 | 64bit | Intel Xeon CPU E5-1650 v4 3.60GHz | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
| Slower | 165.85 | 202.40 | 0.82 | | ubuntu 16.04 | 64bit | Intel Xeon CPU E5-2673 v4 2.30GHz | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-01|
| Same | 792.90 | 812.41 | 0.98 | | ubuntu 18.04 | 64bit | Intel Xeon CPU E5-2673 v4 2.30GHz | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-01|
| Slower | 119.86 | 130.78 | 0.92 | | macOS Mojave 10.14.5 | 64bit | Intel Core i7-5557U CPU 3.10GHz (Broadwell) | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
| Slower | 100.94 | 127.77 | 0.79 | | Windows 10.0.18362 | 64bit | Intel Core i7-5557U CPU 3.10GHz (Broadwell) | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
| Slower | 77.25 | 97.51 | 0.79 | | Windows 10.0.18362 | 64bit | Intel Core i7-7700 CPU 3.60GHz (Kaby Lake) | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
| Slower | 94.54 | 106.37 | 0.89 | | Windows 10.0.18362 | 64bit | AMD Ryzen 7 1800X | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
| Slower | 91.95 | 107.41 | 0.86 | | Windows 10.0.18362 | 32bit | Intel Xeon CPU E5-1650 v4 3.60GHz | .NET Core 2.2.6 | .NET Core 3.0.0-preview8-27919-09|
/cc @danmosemsft @billwert @DrewScoggins
|
process
|
performance regression process getcurrentprocess dispose on windows it looks like process getcurrentprocess have regressed on windows cmd git clone cd performance if you don t have cli installed and want python script to download the latest cli for you py scripts benchmarks ci py f filter system diagnostics perf process getcurrentprocess if you do dotnet run p src benchmarks micro microbenchmarks csproj c release f runtimes filter system diagnostics perf process getcurrentprocess it s also suprising that the method is times slower on ubuntu compared to ubuntu system diagnostics perf process getcurrentprocess conclusion base diff base diff modality operating system arch processor name base runtime diff runtime same ubuntu intel xeon cpu net core net core slower windows intel xeon cpu net core net core slower ubuntu intel xeon cpu net core net core same ubuntu intel xeon cpu net core net core slower macos mojave intel core cpu broadwell net core net core slower windows intel core cpu broadwell net core net core slower windows intel core cpu kaby lake net core net core slower windows amd ryzen net core net core slower windows intel xeon cpu net core net core cc danmosemsft billwert drewscoggins
| 1
|
9,066
| 12,138,926,484
|
IssuesEvent
|
2020-04-23 18:02:09
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
remove gcp-devrel-py-tools from jobs/v3/api_client/requirements-test.txt
|
priority: p2 remove-gcp-devrel-py-tools type: process
|
remove gcp-devrel-py-tools from jobs/v3/api_client/requirements-test.txt
|
1.0
|
remove gcp-devrel-py-tools from jobs/v3/api_client/requirements-test.txt - remove gcp-devrel-py-tools from jobs/v3/api_client/requirements-test.txt
|
process
|
remove gcp devrel py tools from jobs api client requirements test txt remove gcp devrel py tools from jobs api client requirements test txt
| 1
|
151,073
| 23,756,172,837
|
IssuesEvent
|
2022-09-01 03:27:33
|
open-mmlab/mmengine
|
https://api.github.com/repos/open-mmlab/mmengine
|
closed
|
Unify the copy directory of config and model-index.yml.
|
P1 need-design
|
Unify the copy directory of config and model-index.yml, which are currently copied to package/.mim in mim. Consider copy to package/.mmengine?
|
1.0
|
Unify the copy directory of config and model-index.yml. - Unify the copy directory of config and model-index.yml, which are currently copied to package/.mim in mim. Consider copy to package/.mmengine?
|
non_process
|
unify the copy directory of config and model index yml unify the copy directory of config and model index yml which are currently copied to package mim in mim consider copy to package mmengine
| 0
|
19,379
| 6,718,370,829
|
IssuesEvent
|
2017-10-15 12:01:34
|
azerothcore/azerothcore-wotlk
|
https://api.github.com/repos/azerothcore/azerothcore-wotlk
|
closed
|
CMake Issue
|
type: build type: enhancement type: question
|
Hello,
I did everything according to the instructions, and it gives me a problem when running cmake.
>
> "CMake Error at C:/Program Files/CMake/share/cmake-3.7/Modules/FindPackageHandleStandardArgs.cmake:138 (message):
> Could NOT find OpenSSL (missing: OPENSSL_LIBRARIES OPENSSL_INCLUDE_DIR)
> Call Stack (most recent call first):"
I'm a little confused; maybe someone can tell me what I have to do to solve the problem?
|
1.0
|
CMake Issue - Hello,
I did everything according to the instructions, and it gives me a problem when running cmake.
>
> "CMake Error at C:/Program Files/CMake/share/cmake-3.7/Modules/FindPackageHandleStandardArgs.cmake:138 (message):
> Could NOT find OpenSSL (missing: OPENSSL_LIBRARIES OPENSSL_INCLUDE_DIR)
> Call Stack (most recent call first):"
I'm a little confused; maybe someone can tell me what I have to do to solve the problem?
|
non_process
|
cmake issuse hello i did everything according to the instructions and gives me a problem when cmake cmake error at c program files cmake share cmake modules findpackagehandlestandardargs cmake message could not find openssl missing openssl libraries openssl include dir call stack most recent call first even a little confused and maybe someone hovers me what i have to do to solve the problem
| 0
|
698
| 3,194,196,399
|
IssuesEvent
|
2015-09-30 10:38:04
|
arduino/Arduino
|
https://api.github.com/repos/arduino/Arduino
|
closed
|
Poor error description!
|
Component: Preprocessor Type: Bug
|
Hey,
I happened to miss one "

which got the IDE to stop/loop; it's been about 10 minutes now.
I run on W7 Pro 64-bit SP1, Arduino-1.6.6-nightly-windows_2015-07-03.
/ Sincerely BG
|
1.0
|
Poor error description! - Hey,
I happened to miss one "

which got the IDE to stop/loop; it's been about 10 minutes now.
I run on W7 Pro 64-bit SP1, Arduino-1.6.6-nightly-windows_2015-07-03.
/ Sincerely BG
|
process
|
poor error description hey i happened to miss one which got the ide to stop loop it s been about minutes now i run on pro bit arduino nightly windows sincerely bg
| 1
|
129,471
| 18,102,524,889
|
IssuesEvent
|
2021-09-22 15:32:44
|
gms-ws-demo/JS-Demo-Sep2021
|
https://api.github.com/repos/gms-ws-demo/JS-Demo-Sep2021
|
opened
|
WS-2018-0076 (Medium) detected in tunnel-agent-0.4.3.tgz
|
security vulnerability
|
## WS-2018-0076 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tunnel-agent-0.4.3.tgz</b></p></summary>
<p>HTTP proxy tunneling agent. Formerly part of mikeal/request, now a standalone module.</p>
<p>Library home page: <a href="https://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.4.3.tgz">https://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.4.3.tgz</a></p>
<p>Path to dependency file: JS-Demo-Sep2021/package.json</p>
<p>Path to vulnerable library: JS-Demo-Sep2021/node_modules/npm/node_modules/request/node_modules/tunnel-agent/package.json,JS-Demo-Sep2021/node_modules/tunnel-agent/package.json</p>
<p>
Dependency Hierarchy:
- grunt-retire-0.3.12.tgz (Root Library)
- request-2.67.0.tgz
- :x: **tunnel-agent-0.4.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/JS-Demo-Sep2021/commit/e8cd219daa23fb09c60a7e7095b13c9e8372f529">e8cd219daa23fb09c60a7e7095b13c9e8372f529</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of tunnel-agent before 0.6.0 are vulnerable to memory exposure.
This is exploitable if user supplied input is provided to the auth value and is a number.
<p>Publish Date: 2017-03-05
<p>URL: <a href=https://github.com/request/tunnel-agent/commit/9ca95ec7219daface8a6fc2674000653de0922c0>WS-2018-0076</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/598">https://nodesecurity.io/advisories/598</a></p>
<p>Release Date: 2018-01-27</p>
<p>Fix Resolution: 0.6.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tunnel-agent","packageVersion":"0.4.3","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-retire:0.3.12;request:2.67.0;tunnel-agent:0.4.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.6.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2018-0076","vulnerabilityDetails":"Versions of tunnel-agent before 0.6.0 are vulnerable to memory exposure.\n\nThis is exploitable if user supplied input is provided to the auth value and is a number.","vulnerabilityUrl":"https://github.com/request/tunnel-agent/commit/9ca95ec7219daface8a6fc2674000653de0922c0","cvss3Severity":"medium","cvss3Score":"5.1","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
WS-2018-0076 (Medium) detected in tunnel-agent-0.4.3.tgz - ## WS-2018-0076 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tunnel-agent-0.4.3.tgz</b></p></summary>
<p>HTTP proxy tunneling agent. Formerly part of mikeal/request, now a standalone module.</p>
<p>Library home page: <a href="https://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.4.3.tgz">https://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.4.3.tgz</a></p>
<p>Path to dependency file: JS-Demo-Sep2021/package.json</p>
<p>Path to vulnerable library: JS-Demo-Sep2021/node_modules/npm/node_modules/request/node_modules/tunnel-agent/package.json,JS-Demo-Sep2021/node_modules/tunnel-agent/package.json</p>
<p>
Dependency Hierarchy:
- grunt-retire-0.3.12.tgz (Root Library)
- request-2.67.0.tgz
- :x: **tunnel-agent-0.4.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/JS-Demo-Sep2021/commit/e8cd219daa23fb09c60a7e7095b13c9e8372f529">e8cd219daa23fb09c60a7e7095b13c9e8372f529</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of tunnel-agent before 0.6.0 are vulnerable to memory exposure.
This is exploitable if user supplied input is provided to the auth value and is a number.
<p>Publish Date: 2017-03-05
<p>URL: <a href=https://github.com/request/tunnel-agent/commit/9ca95ec7219daface8a6fc2674000653de0922c0>WS-2018-0076</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/598">https://nodesecurity.io/advisories/598</a></p>
<p>Release Date: 2018-01-27</p>
<p>Fix Resolution: 0.6.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tunnel-agent","packageVersion":"0.4.3","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-retire:0.3.12;request:2.67.0;tunnel-agent:0.4.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.6.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2018-0076","vulnerabilityDetails":"Versions of tunnel-agent before 0.6.0 are vulnerable to memory exposure.\n\nThis is exploitable if user supplied input is provided to the auth value and is a number.","vulnerabilityUrl":"https://github.com/request/tunnel-agent/commit/9ca95ec7219daface8a6fc2674000653de0922c0","cvss3Severity":"medium","cvss3Score":"5.1","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
ws medium detected in tunnel agent tgz ws medium severity vulnerability vulnerable library tunnel agent tgz http proxy tunneling agent formerly part of mikeal request now a standalone module library home page a href path to dependency file js demo package json path to vulnerable library js demo node modules npm node modules request node modules tunnel agent package json js demo node modules tunnel agent package json dependency hierarchy grunt retire tgz root library request tgz x tunnel agent tgz vulnerable library found in head commit a href found in base branch master vulnerability details versions of tunnel agent before are vulnerable to memory exposure this is exploitable if user supplied input is provided to the auth value and is a number publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt retire request tunnel agent isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier ws vulnerabilitydetails versions of tunnel agent before are vulnerable to memory exposure n nthis is exploitable if user supplied input is provided to the auth value and is a number vulnerabilityurl
| 0
|
10,043
| 13,044,161,632
|
IssuesEvent
|
2020-07-29 03:47:24
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `SubDateDurationString` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `SubDateDurationString` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `SubDateDurationString` from TiDB -
## Description
Port the scalar function `SubDateDurationString` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function subdatedurationstring from tidb description port the scalar function subdatedurationstring from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
200,408
| 15,798,756,998
|
IssuesEvent
|
2021-04-02 19:22:38
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
[web]: Web plugins not implemented
|
P3 d: examples documentation plugin
|
When following the guide [Developing plugin packages](https://flutter.dev/docs/development/packages-and-plugins/developing-packages#developing-plugin-packages), there is no implementation for a `web` plugin.
I created a project via the command:
```sh
flutter create --template=plugin -i swift -a kotlin hello
```
Running the app actually works, but it displays the platform version as "Unknown".
```sh
cd hello/example
flutter run -d chrome
```
There are `android` and `ios` directories for platform-specific code, but there is no `web` directory.
When looking at the `example/lib/main.dart` file, in the `initPlatformState()` function, the `try-catch` is not raising a `PlatformException`, a `MissingPluginException`, or any exception. It's just failing silently (?) and the actual `setState()` function is never called. If I change the plugin implementation to not use `_channel.invokeMethod` and return a hard-coded value, it works.
I would expect a `web` directory showing how to write a platform-specific function in Dart, or maybe Javascript/Typescript showing the Chrome version. Something showing how to call a third party Javascript library would be very useful.
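For what it's worth, a minimal sketch of what such a web implementation could return for the platform version, using only standard browser APIs; this is an illustration, not the plugin template's actual code:
```ts
// Parse the Chrome version out of the user agent string, falling back to
// "Unknown" when the pattern is absent (e.g. non-Chrome browsers), which
// mirrors the fallback the example app displays today.
function getChromeVersion(): string {
  const match = navigator.userAgent.match(/Chrome\/([\d.]+)/);
  return match ? `Chrome ${match[1]}` : 'Unknown';
}

console.log(getChromeVersion()); // e.g. "Chrome 91.0.4472.124"
```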
|
1.0
|
[web]: Web plugins not implemented - When following the guide [Developing plugin packages](https://flutter.dev/docs/development/packages-and-plugins/developing-packages#developing-plugin-packages), there is no implementation for a `web` plugin.
I created a project via the command:
```sh
flutter create --template=plugin -i swift -a kotlin hello
```
Running the app actually works, but it displays the platform version as "Unknown".
```sh
cd hello/example
flutter run -d chrome
```
There are `android` and `ios` directories for platform-specific code, but there is no `web` directory.
When looking at the `example/lib/main.dart` file, in the `initPlatformState()` function, the `try-catch` is not raising a `PlatformException`, a `MissingPluginException`, or any exception. It's just failing silently (?) and the actual `setState()` function is never called. If I change the plugin implementation to not use `_channel.invokeMethod` and return a hard-coded value, it works.
I would expect a `web` directory showing how to write a platform-specific function in Dart, or maybe Javascript/Typescript showing the Chrome version. Something showing how to call a third party Javascript library would be very useful.
|
non_process
|
web plugins not implemented when following the guide there is no implementation for a web plugin i created a project via the command sh flutter create template plugin i swift a kotlin hello running the app actually works but it displays the platform version as unknown sh cd hello example flutter run d chrome there are android and ios directories for platform specific code but there is no web directory when looking at the example lib main dart file on the initplatformstate function the try catch is not raising an platformexception or missingpluginexception or any exception it s just failing silently and the actual setstate function is never called if i change the plugin implementation to not use channel invokemethod and return a hard coded value it works i would expect a web directory showing how to write a platform specific function in dart or maybe javascript typescript showing the chrome version something showing how to call a third party javascript library would be very useful
| 0
|
7,469
| 10,565,744,395
|
IssuesEvent
|
2019-10-05 13:53:04
|
linked-art/linked.art
|
https://api.github.com/repos/linked-art/linked.art
|
opened
|
Make a Google Drive for Linked Art
|
process
|
We decided to collect slides in a google drive and link to them from the website, under events.
For this we should have a real drive associated with the community, not just a shared folder.
Rob to create an account for Linked Art, migrate the docs there, and make space for uploads.
(Very similar to the IIIF drive)
|
1.0
|
Make a Google Drive for Linked Art -
We decided to collect slides in a google drive and link to them from the website, under events.
For this we should have a real drive associated with the community, not just a shared folder.
Rob to create an account for Linked Art, migrate the docs there, and make space for uploads.
(Very similar to the IIIF drive)
|
process
|
make a google drive for linked art we decided to collect slides in a google drive and link to them from the website under events for this we should have a real drive associated with the community not just a shared folder rob to create an account for linked art migrate the docs there and make space for uploads very similar to the iiif drive
| 1
|
313,434
| 26,930,155,371
|
IssuesEvent
|
2023-02-07 16:17:18
|
btrfs/linux
|
https://api.github.com/repos/btrfs/linux
|
closed
|
[PATCH v3] btrfs: allow single disk devices to mount with older
|
6.3 test sent
|
Link to patches
https://lore.kernel.org/linux-btrfs/6b1f037344cd8d24566f3d9873b820a73384242c.1598995167.git.josef@toxicpanda.com/
b4 am 6b1f037344cd8d24566f3d9873b820a73384242c.1598995167.git.josef@toxicpanda.com
|
1.0
|
[PATCH v3] btrfs: allow single disk devices to mount with older - Link to patches
https://lore.kernel.org/linux-btrfs/6b1f037344cd8d24566f3d9873b820a73384242c.1598995167.git.josef@toxicpanda.com/
b4 am 6b1f037344cd8d24566f3d9873b820a73384242c.1598995167.git.josef@toxicpanda.com
|
non_process
|
btrfs allow single disk devices to mount with older link to patches am git josef toxicpanda com
| 0
|
17,551
| 23,362,875,847
|
IssuesEvent
|
2022-08-10 13:12:35
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
OnUpdate default referential Integrity action while using Prisma to handle referential Integrity
|
bug/2-confirmed kind/bug process/candidate tech/engines team/client topic: database-provider/planetscale topic: referentialIntegrity
|
### Intro
it is mentioned in the docs that the default referential integrity actions are these here

but it is also stated in the docs that while using Prisma to handle referential integrity that the available actions are

It is not clearly mentioned in the docs that you should manually set `onUpdate` to `NoAction`.
___________________________________________________________
### Problem
Enabling the preview feature `referentialIntegrity` and attempting to update a record that has a relation led me to this error (creating and deleting records work fine):
```
Message:
Invalid `prisma.institution.update()` invocation:
Error occurred during query execution:
ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Server(ServerError { code: 1105, message: "symbol db.Institution.id not found", state: "HY000" })) })
Query:
prisma.institution.update(
{
where: {
id: 44,
},
data: {
firstName: "Random",
},
select: {
id: true,
firstName: true,
},
}
)
```
___________________________________________________________
### Solution
Setting `onUpdate` to `NoAction` resolved this error. I would like to propose either setting the default update action under `referentialIntegrity` to `NoAction` (since this is the only available option at this time) or at least mentioning this explicitly in the documentation.
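For illustration, a minimal sketch of the fix, using a hypothetical `Employee` relation (the original schema is not shown in this report); the schema fragment is given as a comment above the client call:
```ts
// Hypothetical schema.prisma fix, shown as a comment:
//
//   model Employee {
//     id            Int         @id @default(autoincrement())
//     institutionId Int
//     institution   Institution @relation(fields: [institutionId],
//                                          references: [id],
//                                          onUpdate: NoAction)
//   }
//
// With `onUpdate: NoAction` on the relation, the update below no longer
// fails with the "symbol ... not found" connector error.
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function main(): Promise<void> {
  const updated = await prisma.institution.update({
    where: { id: 44 },
    data: { firstName: 'Random' },
    select: { id: true, firstName: true },
  });
  console.log(updated);
}

main().finally(() => prisma.$disconnect());
```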
___________________________________________________________
### Notes
I should also note I was using Prisma combined with PlanetScale; I don't know if this is relevant, but I'll just leave it here.
|
1.0
|
OnUpdate default referential Integrity action while using Prisma to handle referential Integrity - ### Intro
it is mentioned in the docs that the default referential integrity actions are these here

but it is also stated in the docs that while using Prisma to handle referential integrity that the available actions are

It is not clearly mentioned in the docs that you should manually set `onUpdate` to `NoAction`.
___________________________________________________________
### Problem
Enabling the preview feature `referentialIntegrity` and attempting to update a record that has a relation led me to this error (creating and deleting records work fine):
```
Message:
Invalid `prisma.institution.update()` invocation:
Error occurred during query execution:
ConnectorError(ConnectorError { user_facing_error: None, kind: QueryError(Server(ServerError { code: 1105, message: "symbol db.Institution.id not found", state: "HY000" })) })
Query:
prisma.institution.update(
{
where: {
id: 44,
},
data: {
firstName: "Random",
},
select: {
id: true,
firstName: true,
},
}
)
```
___________________________________________________________
### Solution
Setting `onUpdate` to `NoAction` resolved this error. I would like to propose either setting the default update action under `referentialIntegrity` to `NoAction` (since this is the only available option at this time) or at least mentioning this explicitly in the documentation.
___________________________________________________________
### Notes
I should also note I was using Prisma combined with PlanetScale; I don't know if this is relevant, but I'll just leave it here.
|
process
|
onupdate default referential integrity action while using prisma to handle referential integrity intro it is mentioned in the docs that the default referential integrity actions are these here but it is also stated in the docs that while using prisma to handle referential integrity that the available actions are it is not clearly mentioned in the docs that you should manually set the onupdate to noaction problem enabling the preview feature referentialintegrity and attempting to update a record that has relation lead me to this error creating and deleting records works fine message invalid prisma institution update invocation error occurred during query execution connectorerror connectorerror user facing error none kind queryerror server servererror code message symbol db institution id not found state query prisma institution update where id data firstname random select id true firstname true solution setting onupdate to noaction resolved this error i would like to propose to either set the default action while using referentialintegrity for update operation to noaction since this is the only available option at this time or at least mention this explicitly in the documentation notes i should also note i was using prisma combined with planetscale i don t know if this is relevant but i ll just leave it here
| 1
|
247,129
| 7,902,273,205
|
IssuesEvent
|
2018-07-01 01:29:22
|
tgockel/zookeeper-cpp
|
https://api.github.com/repos/tgockel/zookeeper-cpp
|
opened
|
configuration defaults are not constant
|
bug lib/server priority/high
|
The "constant" values like `server::configuration::default_client_port` are completely editable by anyone. This is not meant to be.
|
1.0
|
configuration defaults are not constant - The "constant" values like `server::configuration::default_client_port` are completely editable by anyone. This is not meant to be.
|
non_process
|
configuration defaults are not constant the constant values like server configuration default client port are completely editable by anyone this is not meant to be
| 0
|
13,574
| 16,109,503,241
|
IssuesEvent
|
2021-04-27 19:07:43
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Azure DevOps created a service account with cluster-admin role for a RBAC-enabled cluster
|
Pri2 devops-cicd-process/tech devops/prod doc-enhancement
|
1) Created a new AKS cluster in the Azure portal
2) Enabled RBAC
3) Selected the new AKS cluster and namespace in the Azure DevOps "Add Kubernetes Resource" inside an Environment
4) Used kubectl to examine what Azure DevOps did
5) Verified **Azure DevOps created a service account with cluster-admin role for a RBAC-enabled cluster**
New service account:
```
kubectl get serviceaccount --namespace test
NAME SECRETS AGE
azdev-sa-403636 1 37m
```
New RoleBinding:
```
kubectl get rolebinding --namespace test -o json
"roleRef": {
"apiGroup": "rbac.authorization.k8s.io",
"kind": "ClusterRole",
"name": "cluster-admin"
},
"subjects": [
{
"kind": "ServiceAccount",
"name": "azdev-sa-403636",
"namespace": "test"
}
```
According to the documentation, the service account should be least-privileged:
>For an RBAC enabled cluster, RoleBinding is created as well to limit the scope of the created service account to the chosen namespace. For an RBAC disabled cluster, the ServiceAccount created has cluster-wide privileges (across namespaces).
Either the documentation needs to be updated to say _For an RBAC enabled cluster, the ServiceAccount created has a cluster-admin role_, or the DevOps bug needs to be corrected.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7730ae4d-4101-9c83-1823-4ff43ff161ce
* Version Independent ID: 20a7e263-4819-783e-c984-c4f3b459e22f
* Content: [Environment - Kubernetes resource - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments-kubernetes?view=azure-devops)
* Content Source: [docs/pipelines/process/environments-kubernetes.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments-kubernetes.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Azure DevOps created a service account with cluster-admin role for a RBAC-enabled cluster - 1) Created a new AKS cluster in the Azure portal
2) Enabled RBAC
3) Selected the new AKS cluster and namespace in the Azure DevOps "Add Kubernetes Resource" inside an Environment
4) Used kubectl to examine what Azure DevOps did
5) Verified **Azure DevOps created a service account with cluster-admin role for a RBAC-enabled cluster**
New service account:
```
kubectl get serviceaccount --namespace test
NAME SECRETS AGE
azdev-sa-403636 1 37m
```
New RoleBinding:
```
kubectl get rolebinding --namespace test -o json
"roleRef": {
"apiGroup": "rbac.authorization.k8s.io",
"kind": "ClusterRole",
"name": "cluster-admin"
},
"subjects": [
{
"kind": "ServiceAccount",
"name": "azdev-sa-403636",
"namespace": "test"
}
```
According to the documentation, the service account should be least-privileged:
>For an RBAC enabled cluster, RoleBinding is created as well to limit the scope of the created service account to the chosen namespace. For an RBAC disabled cluster, the ServiceAccount created has cluster-wide privileges (across namespaces).
Either the documentation needs to be updated to say _For an RBAC enabled cluster, the ServiceAccount created has a cluster-admin role_, or the DevOps bug needs to be corrected.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7730ae4d-4101-9c83-1823-4ff43ff161ce
* Version Independent ID: 20a7e263-4819-783e-c984-c4f3b459e22f
* Content: [Environment - Kubernetes resource - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments-kubernetes?view=azure-devops)
* Content Source: [docs/pipelines/process/environments-kubernetes.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments-kubernetes.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
azure devops created a service account with cluster admin role for a rbac enabled cluster created a new aks cluster in the azure portal enabled rbac selected the new aks cluster and namespace in the azure devops add kubernetes resource inside an environment used kubectl to examine what azure devops did verified azure devops created a service account with cluster admin role for a rbac enabled cluster new service account kubectl get serviceaccount namespace test name secrets age azdev sa new rolebinding kubectl get rolebinding namespace test o json roleref apigroup rbac authorization io kind clusterrole name cluster admin subjects kind serviceaccount name azdev sa namespace test according to the documentation the service account should be least privileged for an rbac enabled cluster rolebinding is created as well to limit the scope of the created service account to the chosen namespace for an rbac disabled cluster the serviceaccount created has cluster wide privileges across namespaces either the documentation needs to be updated to say for an rbac enabled cluster the serviceaccount created has a cluster admin role or the devops bug needs to be corrected document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
4,087
| 7,043,852,184
|
IssuesEvent
|
2017-12-31 13:58:25
|
AkihikoWatanabe/paper_notes
|
https://api.github.com/repos/AkihikoWatanabe/paper_notes
|
opened
|
Artificial neural networks in business: Two decades of research, Tkac+, Applied Soft Computing 2016
|
Neural Survey TimeSeriesDataProcessing
|
http://www.sciencedirect.com/science/article/pii/S1568494615006122
|
1.0
|
Artificial neural networks in business: Two decades of research, Tkac+, Applied Soft Computing 2016 - http://www.sciencedirect.com/science/article/pii/S1568494615006122
|
process
|
artificial neural networks in business two decades of research tkac applied soft computing
| 1
|
733,453
| 25,306,698,913
|
IssuesEvent
|
2022-11-17 14:39:30
|
airqo-platform/AirQo-frontend
|
https://api.github.com/repos/airqo-platform/AirQo-frontend
|
closed
|
do automated CI/CD for the mobile application
|
mobile-app priority-medium
|
**Is your feature request related to a problem? Please describe.**
The current process is a bit manual
**Describe the solution you'd like**
https://docs.flutter.dev/deployment/cd
**Describe alternatives you've considered**
- https://docs.flutter.dev/deployment/cd
- Integrating with current tools like Github Actions, etc.
**Additional context**
N/A
|
1.0
|
do automated CI/CD for the mobile application - **Is your feature request related to a problem? Please describe.**
The current process is a bit manual
**Describe the solution you'd like**
https://docs.flutter.dev/deployment/cd
**Describe alternatives you've considered**
- https://docs.flutter.dev/deployment/cd
- Integrating with current tools like Github Actions, etc.
**Additional context**
N/A
|
non_process
|
do automated ci cd for the mobile application is your feature request related to a problem please describe current process is a bit manual describe the solution you d like describe alternatives you ve considered integrating with current tools like github actions etc additional context n a
| 0
|
5,674
| 8,556,797,530
|
IssuesEvent
|
2018-11-08 14:11:48
|
easy-software-ufal/annotations_repos
|
https://api.github.com/repos/easy-software-ufal/annotations_repos
|
opened
|
khellang/Scrutor Validate whether a ServiceDescriptorAttribute has an invalid ServiceType
|
C# no operator test wrong processing
|
Issue: `https://github.com/khellang/Scrutor/pull/9`
PR: `https://github.com/khellang/Scrutor/pull/9`
|
1.0
|
khellang/Scrutor Validate whether a ServiceDescriptorAttribute has an invalid ServiceType - Issue: `https://github.com/khellang/Scrutor/pull/9`
PR: `https://github.com/khellang/Scrutor/pull/9`
|
process
|
khellang scrutor validate whether a servicedescriptorattribute has an invalid servicetype issue pr
| 1
|
11,558
| 14,436,743,544
|
IssuesEvent
|
2020-12-07 10:32:41
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
New count functionality
|
kind/feature process/candidate team/client tech/typescript
|
The latest DMMF from the query engine introduced a new feature as a side effect of working on group by:
being able to count not just by `*` but by specific columns, which can give a different result if a column has NULL values.
Right now we don't expose this with the Client, but we should.
Proposal for the API:
```ts
prisma.user.count({
field: 'id'
})
```
```ts
prisma.user.aggregate({
count: true
})
```
and
```ts
prisma.user.aggregate({
count: {
_all: true
}
})
```
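A short sketch of why the distinction matters, assuming a `User` model with a nullable `name` column; the generated types for this proposal were still in flux, so treat the shapes below as illustrative:
```ts
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function demo(): Promise<void> {
  // COUNT(*): counts every row, NULLs included.
  const total = await prisma.user.count();

  // COUNT(name): per SQL semantics, only rows where `name` IS NOT NULL,
  // so this can be strictly smaller than `total`.
  const byField = await prisma.user.aggregate({
    count: { _all: true, name: true },
  });

  console.log(total, byField);
}

demo().finally(() => prisma.$disconnect());
```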
|
1.0
|
New count functionality - The latest DMMF from the query engine introduced a new feature as a side effect of working on group by:
being able to count not just by `*` but by specific columns, which can give a different result if a column has NULL values.
Right now we don't expose this with the Client, but we should.
Proposal for the API:
```ts
prisma.user.count({
field: 'id'
})
```
```ts
prisma.user.aggregate({
count: true
})
```
and
```ts
prisma.user.aggregate({
count: {
_all: true
}
})
```
|
process
|
new count functionality the latest dmmf from the query engine introduced a new feature as a side effect of working on group by being able to not just count by but specific columns which can give a different result if a column has null values right now we don t expose this with the client but we should proposal for the api ts prisma user count field id ts prisma user aggregate count true and ts prisma user aggregate count all true
| 1
|
340,651
| 10,276,768,461
|
IssuesEvent
|
2019-08-24 20:30:56
|
Spartan97/OldTimeHockey
|
https://api.github.com/repos/Spartan97/OldTimeHockey
|
closed
|
Website standings page doesn't break ties correctly
|
Low Priority Website
|
Going to be difficult with H2H, so this might be a won't-fix. If we switch to PF as the only tiebreaker it will be correct.
|
1.0
|
Website standings page doesn't break ties correctly - Going to be difficult with H2H, so this might be a won't-fix. If we switch to PF as the only tiebreaker it will be correct.
|
non_process
|
website standings page doesn t break ties correctly going to be difficult with so this might be a won t fix if we switch to pf as the only tiebreaker it will be correct
| 0
|
12,023
| 7,763,162,280
|
IssuesEvent
|
2018-06-01 15:39:42
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
`az` does not complete; it would seem as if it does not return properly
|
Performance telemetry
|
**Describe the bug**
This is a theory, not something I can confirm completely, but I was running some Azure CLI commands from home and they would take forever to complete, on the order of 10-30 minutes (or not at all). We use Azure CLI as part of our VSTS deployment process as well, and right now they don't complete at all; they get stuck on the Azure CLI commands.
**To Reproduce**
N/A looking for more information or help on the subject matter
**Expected behavior**
Azure CLI should return within a few seconds the expected response
**Environment summary**
MSI Windows 10 and Hosted 2017, build agents in VSTS/Azure
**Additional context**
This started when I upgraded to the most recent version. I don't know exactly what version of Azure CLI tooling is running in VSTS but everything broke down starting yesterday.
|
True
|
`az` does not complete; it would seem as if it does not return properly - **Describe the bug**
This is a theory, not something I can confirm completely, but I was running some Azure CLI commands from home and they would take forever to complete, on the order of 10-30 minutes (or not at all). We use Azure CLI as part of our VSTS deployment process as well, and right now they don't complete at all; they get stuck on the Azure CLI commands.
**To Reproduce**
N/A looking for more information or help on the subject matter
**Expected behavior**
Azure CLI should return within a few seconds the expected response
**Environment summary**
MSI Windows 10 and Hosted 2017, build agents in VSTS/Azure
**Additional context**
This started when I upgraded to the most recent version. I don't know exactly what version of Azure CLI tooling is running in VSTS but everything broke down starting yesterday.
|
non_process
|
az does not complete it would seem as if it does not return properly describe the bug this is a theory not something i can confirm completely but i was running some azure cli commands from home and they would take forever to complete like minutes or not at all we use azure cli as part of our vsts deployment process as well and right now they don t complete at all they get stuck on the azure cli commands to reproduce n a looking for more information or help on the subject matter expected behavior azure cli should return within a few seconds the expected response environment summary msi windows and hosted build agents in vsts azure additional context this started when i upgraded to the most recent version i don t know exactly what version of azure cli tooling is running in vsts but everything broke down starting yesterday
| 0
|
362,438
| 25,376,106,431
|
IssuesEvent
|
2022-11-21 14:13:25
|
ShSato4JPN/world-of-zono
|
https://api.github.com/repos/ShSato4JPN/world-of-zono
|
closed
|
Write a summary of stylelint
|
documentation
|
**Is your feature request related to a problem? Please describe.**
Write a summary of stylelint
**Describe the solution you'd like**
Summarize it in a Markdown file
**Additional context**
Reference sites
[StyleLint](https://stylelint.io/)
[.stylelintrc(Qitta)](https://qiita.com/takeshisakuma/items/a7a3b8cc0ce05422f686)
[prettier, eslint, stylelint について](https://rinoguchi.net/2021/12/prettier-eslint-stylelint.html)
|
1.0
|
Write a summary of stylelint - **Is your feature request related to a problem? Please describe.**
Write a summary of stylelint
**Describe the solution you'd like**
Summarize it in a Markdown file
**Additional context**
Reference sites
[StyleLint](https://stylelint.io/)
[.stylelintrc(Qitta)](https://qiita.com/takeshisakuma/items/a7a3b8cc0ce05422f686)
[prettier, eslint, stylelint について](https://rinoguchi.net/2021/12/prettier-eslint-stylelint.html)
|
non_process
|
stylelint についてまとめる is your feature request related to a problem please describe stylelint についてまとめる describe the solution you d like マークダウンファイルにまとめる additional context 参考サイト
| 0
|
18,995
| 24,987,462,618
|
IssuesEvent
|
2022-11-02 16:02:49
|
googleapis/sphinx-docfx-yaml
|
https://api.github.com/repos/googleapis/sphinx-docfx-yaml
|
closed
|
Presubmit failing for specifying branch on main
|
type: process priority: p1
|
The branch parameter really isn't used anywhere; it could be taken out.
|
1.0
|
Presubmit failing for specifying branch on main - The branch parameter really isn't used anywhere; it could be taken out.
|
process
|
presubmit failing for specifying branch on main branch parameter really isn t used anywhere this could be taken out
| 1
|
595,476
| 18,067,555,001
|
IssuesEvent
|
2021-09-20 21:04:43
|
OpenMandrivaAssociation/test2
|
https://api.github.com/repos/OpenMandrivaAssociation/test2
|
closed
|
Fully update 2013 Beta fails to start display-manager. (Bugzilla Bug 99)
|
bug high priority major
|
This issue was created automatically with bugzilla2github
# Bugzilla Bug 99
Date: 2013-08-23 11:53:11 +0000
From: @benbullard79
To: OpenMandriva QA <<bugs@openmandriva.org>>
CC: @berolinux, @itchka, @robxu9, siriustheking@yahoo.com, @tpgxyz
Last updated: 2013-08-29 07:51:53 +0000
## Comment 492
Date: 2013-08-23 11:53:11 +0000
From: @benbullard79
After updating from 2013 repos system boots to console. Noticed this in dmesg:
[ 3.204417] systemd[1]: Cannot add dependency job for unit display-manager.service, ignoring: Unit display-manager.service failed to load: No such file or directory. See system logs and 'systemctl status display-manager.service' for details.
Hence:
$ systemctl status display-manager.service
display-manager.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
$ sudo systemctl restart display-manager.service
...
Failed to issue method call: Unit display-manager.service failed to load: No such file or directory. See system logs and 'systemctl status display-manager.service' for details.
## Comment 496
Date: 2013-08-23 14:20:05 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
Maybe related with my problem???
See: https://issues.openmandriva.org/show_bug.cgi?id=97#c1
## Comment 497
Date: 2013-08-23 14:24:48 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
Some additional comments:
1- I also get a lot of "no such file or directory" messages, the first of which appear during grub2.
2- I also tried to use systemctl in the dracut prompt, getting "no such file..." in some cases (and something like "missing init" in the rest)
3- I don't think that this is a video problem (as you classified this bug) but a problem that occurs at boot time.
4- I use Cooker, not 2013.0. However, I cannot boot, like you.
## Comment 498
Date: 2013-08-23 14:29:44 +0000
From: @benbullard79
I can boot. X does not start. It boots to console. I can then login and run startx and have a working system.
But you're right, it isn't a video problem because video works. It's possibly a systemd problem?
## Comment 499
Date: 2013-08-23 14:49:45 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
Did you try vesa driver?
## Comment 500
Date: 2013-08-23 14:54:15 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
systemd....
Last update of systemd/udev was on August 19th. If you did updates between Aug 19 and yesterday, and after those updated you could boot in X, then I guess it's not a systemd problem, 'cos you really could boot in X with the last systemd for 2 or 3 days at least.
At least, that's my opinion.
## Comment 501
Date: 2013-08-23 15:46:07 +0000
From: @benbullard79
Yes I used vesa driver until I installed nvidia driver. Graphics are working fine. The problem is that system doesn't boot to graphical interface it boots to console because systemctl doesn't start desktop.service. Or at least that is how it looks to me. But I'm not a developer so...
I installed from the May 16 .iso. Graphics worked. Updated the kernel and rebooted, installed the nvidia 304.88 driver, and graphics worked. Then I updated the system completely (about 900 packages including systemd, udev and others) and the system then boots to console, though graphics will work with console login and startx. So I went from May 16 to August 22.
## Comment 506
Date: 2013-08-23 16:12:31 +0000
From: @robxu9
Confirmed on both Cooker and 2013.0 beta.
## Comment 508
Date: 2013-08-23 16:56:13 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
If you can use console, run XFdrake, configure to vesa and try again. If boots up in X, then the problem can be the nVidia driver and/or its "relationship" with new kernels.
If you still have the old kernels option in grub menu, boot up with an old kernel. If X works, even with nVidia driver, then you have more data about what the problem can be.
## Comment 510
Date: 2013-08-23 19:22:43 +0000
From: @tpgxyz
By default all desktop manager services should provide alias display-manager.service
for instance:
[Install]
Alias=display-manager.service
Looks like kdm.service provides it.
hopefully this should help
https://abf.rosalinux.ru/openmandriva/systemd/commit/f7b93f13e878e512a4a08c21eb2c30a761330eb8
## Comment 604
Date: 2013-08-27 13:14:24 +0000
From: @benbullard79
I'm still having this problem.
## Comment 605
Date: 2013-08-27 13:21:31 +0000
From: @benbullard79
Created attachment 41
/lib/systemd/system-preset/99-default.preset
> Attached file: 99-default.preset (application/octet-stream, 1570 bytes)
> Description: /lib/systemd/system-preset/99-default.preset
## Comment 607
Date: 2013-08-27 14:03:57 +0000
From: @benbullard79
At the (considerable) risk of getting over my head knowledge wise...
I've looked around my 2013 system and can't find 'display-manager.service' anywhere. There is dm.service here:
/sys/fs/cgroup/systemd/system.slice/
$ systemctl status dm.service
dm.service - LSB: Launches the graphical display manager
Loaded: loaded (/etc/rc.d/init.d/dm)
Active: active (exited) since Tue 2013-08-27 07:33:04 CDT; 1h 28min ago
vs:
$ systemctl status display-manager.service
display-manager.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
## Comment 608
Date: 2013-08-27 14:10:01 +0000
From: @berolinux
On my box, there's
/etc/systemd/system/display-manager.service
saying
[Unit]
Description=K Display Manager
After=livesys-late.service systemd-user-sessions.service
# On mandriva gdm/X11 is on tty1. We explicitly cancel the getty here to
# avoid any races around that.
# Do not stop plymouth, it is done in prefdm if required (or left to the dm)
Conflicts=getty@tty1.service plymouth-quit.service
After=getty@tty1.service plymouth-quit.service
[Service]
ExecStart=/usr/bin/kdm -nodaemon
Restart=always
RestartSec=0
IgnoreSIGPIPE=no
[Install]
Alias=display-manager.service
Not exactly sure where it came from though (rpm -qf says it's not owned by any package), probably something created it during installation or startup.
Does adding that file fix your problem?
## Comment 613
Date: 2013-08-27 16:13:36 +0000
From: @itchka
Bero:
I just checked my system and I don't have the file you describe in you comment yet my KDM starts up fine.
Colin
## Comment 614
Date: 2013-08-27 16:28:43 +0000
From: @benbullard79
I don't have that file either. However I discovered that I had xdm installed. Removed that and now system boots to X again. So fixed here. No idea what installed xdm... but it apparently was interfering with kdm.
## Comment 615
Date: 2013-08-27 16:38:17 +0000
From: @itchka
Gus, please check whether you have xdm installed; if you have, then we may be on the way to fixing this.
Colin
## Comment 620
Date: 2013-08-27 17:29:16 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
xdm not installed.
Is this related with Bug #105 ?.
I never said that I have a problem with display-manager. I have no problem at all actually !
## Comment 622
Date: 2013-08-27 18:31:45 +0000
From: @itchka
Gus, I wondered whether it was related 105. I mistakenly thought you had the same bug as Ben. My fault I should have read all the way through. Still 99 is I think due a borked update at some point. Now Ben is fixed I'll close this and reopen after the beta is released from QA. Would you be able to do some testing for QA? It would be very helpful if we could test on as much different hardware as possible.
## Comment 623
Date: 2013-08-27 18:34:45 +0000
From: @itchka
Closing as the problem has been resolved. New iso will tell all.
## Comment 628
Date: 2013-08-27 20:39:44 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
(In reply to comment #18)
> Gus, I wondered whether it was related 105. I mistakenly thought you had the
> same bug as Ben. My fault I should have read all the way through. Still 99
> is I think due a borked update at some point. Now Ben is fixed I'll close
> this and reopen after the beta is released from QA. Would you be able to do
> some testing for QA? It would be very helpful if we could test on as much
> different hardware as possible.
Colin:
Take for sure that I'll test Beta in my environments. Sadly, I don't have enough space to test Cooker and 2013 at the same time. I would like to do that.
Maybe the next weekend the time to erase unused things to make more space will finally arrive, and new partitions can be created... ;)
PS: Please, when ready, publish the links to the Beta ISO into the Forum (http://forums.openmandriva.org/) ASAP.
## Comment 633
Date: 2013-08-28 06:10:59 +0000
From: @tpgxyz
(In reply to comment #13)
> On my box, there's
>
> /etc/systemd/system/display-manager.service
>
> saying
>
> [Unit]
> Description=K Display Manager
> After=livesys-late.service systemd-user-sessions.service
>
> # On mandriva gdm/X11 is on tty1. We explicitly cancel the getty here to
> # avoid any races around that.
> # Do not stop plymouth, it is done in prefdm if required (or left to the dm)
> Conflicts=getty@tty1.service plymouth-quit.service
> After=getty@tty1.service plymouth-quit.service
>
Since when we switched to tty1? I know that mageia did.
> [Service]
> ExecStart=/usr/bin/kdm -nodaemon
> Restart=always
> RestartSec=0
> IgnoreSIGPIPE=no
>
> [Install]
> Alias=display-manager.service
>
>
>
> Not exactly sure where it came from though (rpm -qf says it's not owned by
> any package), probably something created it during installation or startup.
>
It's created by systemd see [Install] section.
I'm about to create a default display manager preset file, which will tell systemd to enable the first installed service that provides Alias=display-manager.service
## Comment 652
Date: 2013-08-28 18:27:48 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
(In reply to comment #21)
>
> Since when we switched to tty1? I know that mageia did.
>
It's not clear to me what are you asking (I'm not english native). In my VBox, tty1 to tty6 are consoles, and they can be accessed using Ctrl+Alt+F1 to Ctrl+Alt+F6. Even I can see the text "tty1" after using Ctrl+Alt+F1.
To get X I must use Ctrl+Alt+F7.
## Comment 653
Date: 2013-08-29 07:51:53 +0000
From: @tpgxyz
(In reply to comment #22)
> (In reply to comment #21)
> >
> > Since when we switched to tty1? I know that mageia did.
> >
>
> It's not clear to me what are you asking (I'm not english native). In my
> VBox, tty1 to tty6 are consoles, and they can be accessed using Ctrl+Alt+F1
> to Ctrl+Alt+F6. Even I can see the text "tty1" after using Ctrl+Alt+F1.
>
> To get X I must use Ctrl+Alt+F7.
Well i'm not asking you, but i'm just pointing there are some issues in kdm.service
which i've fixed already.
Anyways kdm.service points to tty1 while OMV runs X on tty7.
|
1.0
|
Fully update 2013 Beta fails to start display-manager. (Bugzilla Bug 99) - This issue was created automatically with bugzilla2github
# Bugzilla Bug 99
Date: 2013-08-23 11:53:11 +0000
From: @benbullard79
To: OpenMandriva QA <<bugs@openmandriva.org>>
CC: @berolinux, @itchka, @robxu9, siriustheking@yahoo.com, @tpgxyz
Last updated: 2013-08-29 07:51:53 +0000
## Comment 492
Date: 2013-08-23 11:53:11 +0000
From: @benbullard79
After updating from 2013 repos system boots to console. Noticed this in dmesg:
[ 3.204417] systemd[1]: Cannot add dependency job for unit display-manager.service, ignoring: Unit display-manager.service failed to load: No such file or directory. See system logs and 'systemctl status display-manager.service' for details.
Hence:
$ systemctl status display-manager.service
display-manager.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
$ sudo systemctl restart display-manager.service
...
Failed to issue method call: Unit display-manager.service failed to load: No such file or directory. See system logs and 'systemctl status display-manager.service' for details.
## Comment 496
Date: 2013-08-23 14:20:05 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
Maybe related with my problem???
See: https://issues.openmandriva.org/show_bug.cgi?id=97#c1
## Comment 497
Date: 2013-08-23 14:24:48 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
Some additional comments:
1- I also get a lot of "no such file or directory" messages, the first of which appear during grub2.
2- I also tried to use systemctl in the dracut prompt, getting "no such file..." in some cases (and something like "missing init" in the rest)
3- I don't think that this is a video problem (as you classified this bug) but a problem that occurs at boot time.
4- I use Cooker, not 2013.0. However, I cannot boot, like you.
## Comment 498
Date: 2013-08-23 14:29:44 +0000
From: @benbullard79
I can boot. X does not start. It boots to console. I can then login and run startx and have a working system.
But you're right, it isn't a video problem because video works. It's possibly a systemd problem?
## Comment 499
Date: 2013-08-23 14:49:45 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
Did you try vesa driver?
## Comment 500
Date: 2013-08-23 14:54:15 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
systemd....
Last update of systemd/udev was on August 19th. If you did updates between Aug 19 and yesterday, and after those updated you could boot in X, then I guess it's not a systemd problem, 'cos you really could boot in X with the last systemd for 2 or 3 days at least.
At least, that's my opinion.
## Comment 501
Date: 2013-08-23 15:46:07 +0000
From: @benbullard79
Yes I used vesa driver until I installed nvidia driver. Graphics are working fine. The problem is that system doesn't boot to graphical interface it boots to console because systemctl doesn't start desktop.service. Or at least that is how it looks to me. But I'm not a developer so...
I installed from the May 16 .iso. Graphics worked. Updated the kernel and rebooted, installed the nvidia 304.88 driver, and graphics worked. Then I updated the system completely (about 900 packages including systemd, udev and others) and the system then boots to console, though graphics will work with console login and startx. So I went from May 16 to August 22.
## Comment 506
Date: 2013-08-23 16:12:31 +0000
From: @robxu9
Confirmed on both Cooker and 2013.0 beta.
## Comment 508
Date: 2013-08-23 16:56:13 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
If you can use console, run XFdrake, configure to vesa and try again. If boots up in X, then the problem can be the nVidia driver and/or its "relationship" with new kernels.
If you still have the old kernels option in grub menu, boot up with an old kernel. If X works, even with nVidia driver, then you have more data about what the problem can be.
## Comment 510
Date: 2013-08-23 19:22:43 +0000
From: @tpgxyz
By default all desktop manager services should provide alias display-manager.service
for instance:
[Install]
Alias=display-manager.service
Looks like kdm.service provides it.
hopefully this should help
https://abf.rosalinux.ru/openmandriva/systemd/commit/f7b93f13e878e512a4a08c21eb2c30a761330eb8
## Comment 604
Date: 2013-08-27 13:14:24 +0000
From: @benbullard79
I'm still having this problem.
## Comment 605
Date: 2013-08-27 13:21:31 +0000
From: @benbullard79
Created attachment 41
/lib/systemd/system-preset/99-default.preset
> Attached file: 99-default.preset (application/octet-stream, 1570 bytes)
> Description: /lib/systemd/system-preset/99-default.preset
## Comment 607
Date: 2013-08-27 14:03:57 +0000
From: @benbullard79
At the (considerable) risk of getting over my head knowledge wise...
I've looked around my 2013 system and can't find 'display-manager.service' anywhere. There is dm.service here:
/sys/fs/cgroup/systemd/system.slice/
$ systemctl status dm.service
dm.service - LSB: Launches the graphical display manager
Loaded: loaded (/etc/rc.d/init.d/dm)
Active: active (exited) since Tue 2013-08-27 07:33:04 CDT; 1h 28min ago
vs:
$ systemctl status display-manager.service
display-manager.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
## Comment 608
Date: 2013-08-27 14:10:01 +0000
From: @berolinux
On my box, there's
/etc/systemd/system/display-manager.service
saying
[Unit]
Description=K Display Manager
After=livesys-late.service systemd-user-sessions.service
# On mandriva gdm/X11 is on tty1. We explicitly cancel the getty here to
# avoid any races around that.
# Do not stop plymouth, it is done in prefdm if required (or left to the dm)
Conflicts=getty@tty1.service plymouth-quit.service
After=getty@tty1.service plymouth-quit.service
[Service]
ExecStart=/usr/bin/kdm -nodaemon
Restart=always
RestartSec=0
IgnoreSIGPIPE=no
[Install]
Alias=display-manager.service
Not exactly sure where it came from though (rpm -qf says it's not owned by any package), probably something created it during installation or startup.
Does adding that file fix your problem?
## Comment 613
Date: 2013-08-27 16:13:36 +0000
From: @itchka
Bero:
I just checked my system and I don't have the file you describe in you comment yet my KDM starts up fine.
Colin
## Comment 614
Date: 2013-08-27 16:28:43 +0000
From: @benbullard79
I don't have that file either. However I discovered that I had xdm installed. Removed that and now system boots to X again. So fixed here. No idea what installed xdm... but it apparently was interfering with kdm.
## Comment 615
Date: 2013-08-27 16:38:17 +0000
From: @itchka
Gus, please check whether you have xdm installed; if you have, then we may be on the way to fixing this.
Colin
## Comment 620
Date: 2013-08-27 17:29:16 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
xdm not installed.
Is this related with Bug #105 ?.
I never said that I have a problem with display-manager. I have no problem at all actually !
## Comment 622
Date: 2013-08-27 18:31:45 +0000
From: @itchka
Gus, I wondered whether it was related 105. I mistakenly thought you had the same bug as Ben. My fault I should have read all the way through. Still 99 is I think due a borked update at some point. Now Ben is fixed I'll close this and reopen after the beta is released from QA. Would you be able to do some testing for QA? It would be very helpful if we could test on as much different hardware as possible.
## Comment 623
Date: 2013-08-27 18:34:45 +0000
From: @itchka
Closing as the problem has been resolved. New iso will tell all.
## Comment 628
Date: 2013-08-27 20:39:44 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
(In reply to comment #18)
> Gus, I wondered whether it was related 105. I mistakenly thought you had the
> same bug as Ben. My fault I should have read all the way through. Still 99
> is I think due a borked update at some point. Now Ben is fixed I'll close
> this and reopen after the beta is released from QA. Would you be able to do
> some testing for QA? It would be very helpful if we could test on as much
> different hardware as possible.
Colin:
Take for sure that I'll test Beta in my environments. Sadly, I don't have enough space to test Cooker and 2013 at the same time. I would like to do that.
Maybe the next weekend the time to erase unused things to make more space will finally arrive, and new partitions can be created... ;)
PS: Please, when ready, publish the links to the Beta ISO into the Forum (http://forums.openmandriva.org/) ASAP.
## Comment 633
Date: 2013-08-28 06:10:59 +0000
From: @tpgxyz
(In reply to comment #13)
> On my box, there's
>
> /etc/systemd/system/display-manager.service
>
> saying
>
> [Unit]
> Description=K Display Manager
> After=livesys-late.service systemd-user-sessions.service
>
> # On mandriva gdm/X11 is on tty1. We explicitly cancel the getty here to
> # avoid any races around that.
> # Do not stop plymouth, it is done in prefdm if required (or left to the dm)
> Conflicts=getty@tty1.service plymouth-quit.service
> After=getty@tty1.service plymouth-quit.service
>
Since when we switched to tty1? I know that mageia did.
> [Service]
> ExecStart=/usr/bin/kdm -nodaemon
> Restart=always
> RestartSec=0
> IgnoreSIGPIPE=no
>
> [Install]
> Alias=display-manager.service
>
>
>
> Not exactly sure where it came from though (rpm -qf says it's not owned by
> any package), probably something created it during installation or startup.
>
It's created by systemd see [Install] section.
I'm about to create a default display manager preset file, which will tell systemd to enable the first installed service that provides Alias=display-manager.service
## Comment 652
Date: 2013-08-28 18:27:48 +0000
From: Gus Ballan <<siriustheking@yahoo.com>>
(In reply to comment #21)
>
> Since when we switched to tty1? I know that mageia did.
>
It's not clear to me what are you asking (I'm not english native). In my VBox, tty1 to tty6 are consoles, and they can be accessed using Ctrl+Alt+F1 to Ctrl+Alt+F6. Even I can see the text "tty1" after using Ctrl+Alt+F1.
To get X I must use Ctrl+Alt+F7.
## Comment 653
Date: 2013-08-29 07:51:53 +0000
From: @tpgxyz
(In reply to comment #22)
> (In reply to comment #21)
> >
> > Since when we switched to tty1? I know that mageia did.
> >
>
> It's not clear to me what are you asking (I'm not english native). In my
> VBox, tty1 to tty6 are consoles, and they can be accessed using Ctrl+Alt+F1
> to Ctrl+Alt+F6. Even I can see the text "tty1" after using Ctrl+Alt+F1.
>
> To get X I must use Ctrl+Alt+F7.
Well i'm not asking you, but i'm just pointing there are some issues in kdm.service
which i've fixed already.
Anyways kdm.service points to tty1 while OMV runs X on tty7.
|
non_process
|
fully update beta fails to start display manager bugzilla bug this issue was created automatically with bugzilla bug date from to openmandriva qa lt gt cc berolinux itchka siriustheking yahoo com tpgxyz last updated comment date from after updating from repos system boots to console noticed this in dmesg systemd cannot add dependency job for unit display manager service ignoring unit display manager service failed to load no such file or directory see system logs and systemctl status display manager service for details hence systemctl status display manager service display manager service loaded not found reason no such file or directory active inactive dead sudo systemctl restart display manager service failed to issue method call unit display manager service failed to load no such file or directory see system logs and systemctl status display manager service for details comment date from gus ballan lt gt maybe related with my problem see comment date from gus ballan lt gt some additional comments i also get a lot of no such file or directory messages being the first they that appear during i also tried to use systemctl in the dracut prompt getting no such file in some cases and something like missing init in the rest i don t think that this is a video problem as you classified this bug but a problem that occur at boot time i use cooker not however i cannot boot like you comment date from i can boot x does not start it boots to console i can then login and run startx and have a working system but you re right it isn t a video problem because video works it s possibly a systemd problem comment date from gus ballan lt gt did you try vesa driver comment date from gus ballan lt gt systemd last update of systemd udev was on august if you did updates between aug and yesterday and after those updated you could boot in x then i guess it s not a systemd problem cos you really could boot in x with the last systemd for or days at least at least that s my opinion comment date from yes i used vesa driver until i installed nvidia driver graphics are working fine the problem is that system doesn t boot to graphical interface it boots to console because systemctl doesn t start desktop service or at least that is how it looks to me but i m not a developer so i installed from may iso graphics worked updated kernel and rebooted installed driver and graphics worked then i updated system completely about packages including systemd udev and others and system then boots to console though graphics will work with console login and startx so i went from may to august comment date from confirmed on both cooker and beta comment date from gus ballan lt gt if you can use console run xfdrake configure to vesa and try again if boots up in x then the problem can be the nvidia driver and or its relationship with new kernels if you still have the old kernels option in grub menu boot up with an old kernel if x works even with nvidia driver then you have more data about what the problem can be comment date from tpgxyz by default all desktop manager services should provide alias display manager service for instance alias display manager service looks like kdm service provides it hopefully this should help comment date from i m still having this problem comment date from created attachment lib systemd system preset default preset attached file default preset application octet stream bytes description lib systemd system preset default preset comment date from at the considerable risk of getting over my head knowledge wise i ve looked around my system and can t find display manager service anywhere there is dm service here sys fs cgroup systemd system slice systemctl status dm service dm service lsb launches the graphical display manager loaded loaded etc rc d init d dm active active exited since tue cdt ago vs systemctl status display manager service display manager service loaded not found reason no such file or directory active inactive dead comment date from berolinux on my box there s etc systemd system display manager service saying description k display manager after livesys late service systemd user sessions service on mandriva gdm is on we explicitly cancel the getty here to avoid any races around that do not stop plymouth it is done in prefdm if required or left to the dm conflicts getty service plymouth quit service after getty service plymouth quit service execstart usr bin kdm nodaemon restart always restartsec ignoresigpipe no alias display manager service not exactly sure where it came from though rpm qf says it s not owned by any package probably something created it during installation or startup does adding that file fix your problem comment date from itchka bero i just checked my system and i don t have the file you describe in you comment yet my kdm starts up fine colin comment date from i don t have that file either however i discovered that i had xdm installed removed that and now system boots to x again so fixed here no idea what installed xdm but it apparently was interfering with kdm comment date from itchka gus lease check whether you have xdm installed if you have then we may be on the way to fixing this colin comment date from gus ballan lt gt xdm not installed is this related with bug i never said that i have a problem with display manager i have no problem at all actually comment date from itchka gus i wondered whether it was related i mistakenly thought you had the same bug as ben my fault i should have read all the way through still is i think due a borked update at some point now ben is fixed i ll close this and reopen after the beta is released from qa would you be able to do some testing for qa it would be very helpful if we could test on as much different hardware as possible comment date from itchka closing as the problem has been resolved new iso will tell all comment date from gus ballan lt gt in reply to comment gus i wondered whether it was related i mistakenly thought you had the same bug as ben my fault i should have read all the way through still is i think due a borked update at some point now ben is fixed i ll close this and reopen after the beta is released from qa would you be able to do some testing for qa it would be very helpful if we could test on as much different hardware as possible colin take for sure that i ll test beta in my environments sadly i don t have enough space to test cooker and at the same time i would like to do that maybe the next weekend the time to erase unused things to make more space will finally arrive and new partitions can be created ps please when ready publish the links to the beta iso into the forum asap comment date from tpgxyz in reply to comment on my box there s etc systemd system display manager service saying description k display manager after livesys late service systemd user sessions service on mandriva gdm is on we explicitly cancel the getty here to avoid any races around that do not stop plymouth it is done in prefdm if required or left to the dm conflicts getty service plymouth quit service after getty service plymouth quit service since when we switched to i know that mageia did execstart usr bin kdm nodaemon restart always restartsec ignoresigpipe no alias display manager service not exactly sure where it came from though rpm qf says it s not owned by any package probably something created it during installation or startup it s created by systemd see section i m bout to create a default display manager preset file which will turn systemd to enable first installed service which will provide alias display manager service comment date from gus ballan lt gt in reply to comment since when we switched to i know that mageia did it s not clear to me what are you asking i m not english native in my vbox to are consoles and they can be accessed using ctrl alt to ctrl alt even i can see the text after using ctrl alt to get x i must use ctrl alt comment date from tpgxyz in reply to comment in reply to comment since when we switched to i know that mageia did it s not clear to me what are you asking i m not english native in my vbox to are consoles and they can be accessed using ctrl alt to ctrl alt even i can see the text after using ctrl alt to get x i must use ctrl alt well i m not asking you but i m just pointing there are some issues in kdm service which i ve fixed already anyways kdm service points to while omv runs x on
| 0
|
586,731
| 17,595,736,979
|
IssuesEvent
|
2021-08-17 04:40:09
|
ita-social-projects/TeachUA
|
https://api.github.com/repos/ita-social-projects/TeachUA
|
closed
|
[Додати центр. Додати локацію ("Add center. Add location")] Incorrect cursor pointer when hovering over '+Додати локацію'
|
bug UI Priority: Low
|
**Environment:** Windows 10, version 92.0.4515.107, (64)
**Reproducible:** always
**Build found:** last commit
**Steps to reproduce**
- Go to https://speak-ukrainian.org.ua/dev/
- Login as admin@gmail.com, 'Password'="admin";
- Click on user menu > 'Додати центр'
- Pay attention to the cursor when hovering it over '+Додати локацію'.
**Actual result**
The mouse cursor is displayed as a text-select pointer.

**Expected result**
The mouse cursor is displayed as a hand pointer.

|
1.0
|
[Додати центр. Додати локацію ("Add center. Add location")] Incorrect cursor pointer when hovering over '+Додати локацію' - **Environment:** Windows 10, version 92.0.4515.107, (64)
**Reproducible:** always
**Build found:** last commit
**Steps to reproduce**
- Go to https://speak-ukrainian.org.ua/dev/
- Login as admin@gmail.com, 'Password'="admin";
- Click on user menu > 'Додати центр'
- Pay attention to the cursor when hovering it over '+Додати локацію'.
**Actual result**
The mouse cursor is displayed as a text-select pointer.

**Expected result**
The mouse cursor is displayed as a hand pointer.

|
non_process
|
incorrect cursor pointer if hover over додати локацію environment windows version reproducible always build found last commit steps to reproduce go to login as admin gmail com password admin click on user menu додати центр pay attention to the cursor when hover it over додати локацію actual result mouse cursor is displayed as text select pointer expected result mouse cursor is displayed as hand pointer
| 0
|
406,749
| 11,902,434,638
|
IssuesEvent
|
2020-03-30 13:57:54
|
openshift/odo
|
https://api.github.com/repos/openshift/odo
|
closed
|
Setup Openshift clusters on PSI for use by the team
|
area/release-eng kind/feature points/8 priority/High triage/needs-information
|
[kind/Enhancement]
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the chat and talk to us if you have a question rather than a bug or feature request.
The chat room is at: https://chat.openshift.io/developers/channels/odo
Thanks for understanding, and for contributing to the project!
-->
Linked to #1799
Initiate creation of cluster on PSI resources
Acceptance Criteria
- [x] Create Controller node
- [x] Setup ocp 3.11 cluster
- [x] Setup ocp 4.1 cluster
- [x] No DNS setup should be required for clients connecting to the cluster.
|
1.0
|
Setup Openshift clusters on PSI for use by the team - [kind/Enhancement]
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the chat and talk to us if you have a question rather than a bug or feature request.
The chat room is at: https://chat.openshift.io/developers/channels/odo
Thanks for understanding, and for contributing to the project!
-->
Linked to #1799
Initiate creation of cluster on PSI resources
Acceptance Criteria
- [x] Create Controller node
- [x] Setup ocp 3.11 cluster
- [x] Setup ocp 4.1 cluster
- [x] No DNS setup should be required for clients connecting to the cluster.
|
non_process
|
setup openshift clusters on psi for use by the team welcome we kindly ask you to fill out the issue template below use the chat and talk to us if you have a question rather than a bug or feature request the chat room is at thanks for understanding and for contributing to the project linked to initiate creation of cluster on psi resources acceptance criteria create controller node setup ocp cluster setup ocp cluster no dns setup should be required for clients connecting to the cluser
| 0
|
68,165
| 14,911,988,113
|
IssuesEvent
|
2021-01-22 11:57:01
|
uniquelyparticular/sync-moltin-to-shippo
|
https://api.github.com/repos/uniquelyparticular/sync-moltin-to-shippo
|
opened
|
CVE-2020-28472 (High) detected in aws-sdk-2.469.0.tgz
|
security vulnerability
|
## CVE-2020-28472 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>aws-sdk-2.469.0.tgz</b></p></summary>
<p>AWS SDK for JavaScript</p>
<p>Library home page: <a href="https://registry.npmjs.org/aws-sdk/-/aws-sdk-2.469.0.tgz">https://registry.npmjs.org/aws-sdk/-/aws-sdk-2.469.0.tgz</a></p>
<p>Path to dependency file: sync-moltin-to-shippo/package.json</p>
<p>Path to vulnerable library: sync-moltin-to-shippo/node_modules/aws-sdk/package.json</p>
<p>
Dependency Hierarchy:
- :x: **aws-sdk-2.469.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/uniquelyparticular/sync-moltin-to-shippo/commit/56d30e546d61f615707f803ca9d1c0e08db4749a">56d30e546d61f615707f803ca9d1c0e08db4749a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package @aws-sdk/shared-ini-file-loader before 1.0.0-rc.9; the package aws-sdk before 2.814.0. If an attacker submits a malicious INI file to an application that parses it with loadSharedConfigFiles , they will pollute the prototype on the application. This can be exploited further depending on the context.
<p>Publish Date: 2021-01-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28472>CVE-2020-28472</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-28472">https://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-28472</a></p>
<p>Release Date: 2021-01-19</p>
<p>Fix Resolution: aws-sdk-2.814.0,@aws-sdk/shared-ini-file-loader-1.0.0-rc.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28472 (High) detected in aws-sdk-2.469.0.tgz - ## CVE-2020-28472 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>aws-sdk-2.469.0.tgz</b></p></summary>
<p>AWS SDK for JavaScript</p>
<p>Library home page: <a href="https://registry.npmjs.org/aws-sdk/-/aws-sdk-2.469.0.tgz">https://registry.npmjs.org/aws-sdk/-/aws-sdk-2.469.0.tgz</a></p>
<p>Path to dependency file: sync-moltin-to-shippo/package.json</p>
<p>Path to vulnerable library: sync-moltin-to-shippo/node_modules/aws-sdk/package.json</p>
<p>
Dependency Hierarchy:
- :x: **aws-sdk-2.469.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/uniquelyparticular/sync-moltin-to-shippo/commit/56d30e546d61f615707f803ca9d1c0e08db4749a">56d30e546d61f615707f803ca9d1c0e08db4749a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package @aws-sdk/shared-ini-file-loader before 1.0.0-rc.9; the package aws-sdk before 2.814.0. If an attacker submits a malicious INI file to an application that parses it with loadSharedConfigFiles , they will pollute the prototype on the application. This can be exploited further depending on the context.
<p>Publish Date: 2021-01-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28472>CVE-2020-28472</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-28472">https://cve.mitre.org/cgi-bin/cvename.cgi?name=2020-28472</a></p>
<p>Release Date: 2021-01-19</p>
<p>Fix Resolution: aws-sdk-2.814.0,@aws-sdk/shared-ini-file-loader-1.0.0-rc.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in aws sdk tgz cve high severity vulnerability vulnerable library aws sdk tgz aws sdk for javascript library home page a href path to dependency file sync moltin to shippo package json path to vulnerable library sync moltin to shippo node modules aws sdk package json dependency hierarchy x aws sdk tgz vulnerable library found in head commit a href vulnerability details this affects the package aws sdk shared ini file loader before rc the package aws sdk before if an attacker submits a malicious ini file to an application that parses it with loadsharedconfigfiles they will pollute the prototype on the application this can be exploited further depending on the context publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution aws sdk aws sdk shared ini file loader rc step up your open source security game with whitesource
| 0
|
261,637
| 27,809,825,166
|
IssuesEvent
|
2023-03-18 01:50:56
|
madhans23/linux-4.1.15
|
https://api.github.com/repos/madhans23/linux-4.1.15
|
closed
|
CVE-2016-2782 (Medium) detected in linux-stable-rtv4.1.33 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2016-2782 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.1.15/commit/f9d19044b0eef1965f9bc412d7d9e579b74ec968">f9d19044b0eef1965f9bc412d7d9e579b74ec968</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/visor.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/visor.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The treo_attach function in drivers/usb/serial/visor.c in the Linux kernel before 4.5 allows physically proximate attackers to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by inserting a USB device that lacks a (1) bulk-in or (2) interrupt-in endpoint.
<p>Publish Date: 2016-04-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-2782>CVE-2016-2782</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-2782">https://nvd.nist.gov/vuln/detail/CVE-2016-2782</a></p>
<p>Release Date: 2016-04-27</p>
<p>Fix Resolution: 4.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2016-2782 (Medium) detected in linux-stable-rtv4.1.33 - autoclosed - ## CVE-2016-2782 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.1.15/commit/f9d19044b0eef1965f9bc412d7d9e579b74ec968">f9d19044b0eef1965f9bc412d7d9e579b74ec968</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/visor.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/visor.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The treo_attach function in drivers/usb/serial/visor.c in the Linux kernel before 4.5 allows physically proximate attackers to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by inserting a USB device that lacks a (1) bulk-in or (2) interrupt-in endpoint.
<p>Publish Date: 2016-04-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-2782>CVE-2016-2782</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-2782">https://nvd.nist.gov/vuln/detail/CVE-2016-2782</a></p>
<p>Release Date: 2016-04-27</p>
<p>Fix Resolution: 4.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linux stable autoclosed cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers usb serial visor c drivers usb serial visor c vulnerability details the treo attach function in drivers usb serial visor c in the linux kernel before allows physically proximate attackers to cause a denial of service null pointer dereference and system crash or possibly have unspecified other impact by inserting a usb device that lacks a bulk in or interrupt in endpoint publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
9,161
| 12,218,639,085
|
IssuesEvent
|
2020-05-01 19:48:12
|
Torbjornsson/DATX05-Master_Thesis
|
https://api.github.com/repos/Torbjornsson/DATX05-Master_Thesis
|
closed
|
Process/Development
|
Section: Process
|
- [x] Interactions
- [x] Environment
- [x] 3D-modelling
- [x] Puzzles
- [x] Tutorials
- [x] Sound-design
- [x] Builds
- [x] Corona: change of plans (?)
|
1.0
|
Process/Development - - [x] Interactions
- [x] Environment
- [x] 3D-modelling
- [x] Puzzles
- [x] Tutorials
- [x] Sound-design
- [x] Builds
- [x] Corona: change of plans (?)
|
process
|
process development interactions environment modelling puzzles tutorials sound design builds corona change of plans
| 1
|
2,339
| 5,144,175,913
|
IssuesEvent
|
2017-01-12 17:53:33
|
meteor/meteor
|
https://api.github.com/repos/meteor/meteor
|
closed
|
Warn when Meteor.Collection is used, and explain why you should use Mongo.Collection and depend on the 'mongo' package
|
Project:Mongo Driver Project:Release Process
|
Hello, defining a collection with either Mongo or Meteor before startup within the app directories (not packages) works fine even if not wrapped in Meteor.startup.
Within a package, however, the Mongo object is only available inside Meteor.startup. The Meteor object, on the other hand, is available outside Meteor.startup, just as in the app directories. So in packages, Mongo.Collection only works inside Meteor.startup, while Meteor.Collection also works outside it.
I just wanted to point out this inconsistency. It seems that as other databases get added, Mongo.Collection should become the proper way to do things; however, it does not currently work the same way Meteor.Collection does.
Best regards.
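Below is a minimal sketch of the workaround the report implies: deferring the collection definition in package code until startup. It assumes a package that already depends on the `mongo` package (e.g. via `api.use('mongo')`); `Items` is a hypothetical collection name, not anything from the report.
```js
// Sketch only: in package code, wait for startup before touching Mongo.
let Items = null;

Meteor.startup(function () {
  // Mongo.Collection is reliably available here in packages.
  Items = new Mongo.Collection('items');
});
```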
|
1.0
|
Warn when Meteor.Collection is used, and explain why you should use Mongo.Collection and depend on the 'mongo' package - Hello, defining a collection with either Mongo or Meteor before startup within the app directories (not packages) works fine even if not wrapped in Meteor.startup.
Within a package, however, the Mongo object is only available inside Meteor.startup. The Meteor object, on the other hand, is available outside Meteor.startup, just as in the app directories. So in packages, Mongo.Collection only works inside Meteor.startup, while Meteor.Collection also works outside it.
I just wanted to point out this inconsistency. It seems that as other databases get added, Mongo.Collection should become the proper way to do things; however, it does not currently work the same way Meteor.Collection does.
Best regards.
|
process
|
warn when meteor collection is used and explain why you should use mongo collection and depend on the mongo package hello defining a collection with either mongo or meteor before startup within the app directories not packages works fine even if not wrapped in meteor startup within a package however the mongo object is only available within meteor startup however like the app directories outside of meteor startup the meteor object is available so mongo collection only works within meteor startup in packages but meteor collection will work outside of meteor startup i just wanted to point out this inconsistency it seems that as other databases get added mongo collection would become the proper way to do things however that does work the same way meteor collection does now best regards
| 1
|
15,669
| 19,847,309,894
|
IssuesEvent
|
2022-01-21 08:19:10
|
ooi-data/CE07SHSM-RID26-07-NUTNRB000-recovered_host-nutnr_b_dcl_full_instrument_recovered
|
https://api.github.com/repos/ooi-data/CE07SHSM-RID26-07-NUTNRB000-recovered_host-nutnr_b_dcl_full_instrument_recovered
|
opened
|
🛑 Processing failed: ValueError
|
process
|
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T08:19:08.710897.
## Details
Flow name: `CE07SHSM-RID26-07-NUTNRB000-recovered_host-nutnr_b_dcl_full_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
1.0
|
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T08:19:08.710897.
## Details
Flow name: `CE07SHSM-RID26-07-NUTNRB000-recovered_host-nutnr_b_dcl_full_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
process
|
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered host nutnr b dcl full instrument recovered task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer 
out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
| 1
|
17,416
| 23,231,398,825
|
IssuesEvent
|
2022-08-03 07:58:22
|
jasperzhong/read-papers
|
https://api.github.com/repos/jasperzhong/read-papers
|
closed
|
aiDM '22 | GCNSplit: Bounding the State of Streaming Graph Partitioning
|
gnn graph processing systems / graph DB
|
https://sites.bu.edu/casp/files/2022/05/Zwolak22Bounding.pdf
very interesting !!!
|
1.0
|
aiDM '22 | GCNSplit: Bounding the State of Streaming Graph Partitioning - https://sites.bu.edu/casp/files/2022/05/Zwolak22Bounding.pdf
very interesting !!!
|
process
|
aidm gcnsplit bounding the state of streaming graph partitioning very interesting
| 1
|
509,088
| 14,712,573,539
|
IssuesEvent
|
2021-01-05 09:08:03
|
canonical-web-and-design/ubuntu.com
|
https://api.github.com/repos/canonical-web-and-design/ubuntu.com
|
opened
|
Signing up for the newsletter from the blog is broken
|
Priority: Critical
|
The newsletter signup form is missing grecaptcha.

---
*Reported from: https://ubuntu.com/blog/install-amazon-eks-distro-anywhere*
|
1.0
|
Signing up for the newsletter from the blog is broken - The newsletter signup form is missing grecaptcha.

---
*Reported from: https://ubuntu.com/blog/install-amazon-eks-distro-anywhere*
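If the page simply never loads the reCAPTCHA script, a minimal sketch of injecting it client-side follows (this uses the standard reCAPTCHA loader URL; whether and where such a snippet belongs in this codebase is an assumption, not something the report states):
```js
// Sketch: load the grecaptcha script the signup form expects.
const s = document.createElement('script')
s.src = 'https://www.google.com/recaptcha/api.js'
s.async = true
s.defer = true
document.head.appendChild(s)
```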
|
non_process
|
sign up the newsletter from blog is broken the newsletter signup form is missing grecaptcha reported from
| 0
|
8,605
| 11,761,791,603
|
IssuesEvent
|
2020-03-13 22:49:38
|
googleapis/nodejs-storage
|
https://api.github.com/repos/googleapis/nodejs-storage
|
closed
|
V4 Signed URL Pending work
|
api: storage type: process
|
Tracking issue for upcoming work on v4 Signed URL.
- [x] POST for resumable uploads
- will be fixed in #907
- this is failing because it represents users passing an `X-goog-resumable: start` extension header in the request.
  - The nodejs-storage library applies this header for the user if the user specified `action: resumable` in `SignedUrlConfig`.
  - However, the library applies `x-goog-resumable: start` *lower-cased*, and when merged with the user-provided `extensionHeaders`, both headers exist and result in duplicate signed header entries in the resulting URL.
- [x] Slashes in object name should not be URL encoded
- will be fixed in #905
|
1.0
|
V4 Signed URL Pending work - Tracking issue for upcoming work on v4 Signed URL.
- [x] POST for resumable uploads
- will be fixed in #907
- this is failing because it represents users passing an `X-goog-resumable: start` extension header in the request.
  - The nodejs-storage library applies this header for the user if the user specified `action: resumable` in `SignedUrlConfig`.
  - However, the library applies `x-goog-resumable: start` *lower-cased*, and when merged with the user-provided `extensionHeaders`, both headers exist and result in duplicate signed header entries in the resulting URL.
- [x] Slashes in object name should not be URL encoded
- will be fixed in #905
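The duplicate-header problem described above comes down to merging header maps without normalizing key case. A minimal illustrative sketch follows; it is not the library's actual implementation, and the function name is hypothetical.
```js
// Sketch: lower-case every header key before merging, so the library's
// 'x-goog-resumable' and a user's 'X-Goog-Resumable' collapse into one
// signed header entry instead of two.
function mergeHeaders(defaults, extensionHeaders = {}) {
  const merged = {};
  for (const [key, value] of Object.entries({ ...defaults, ...extensionHeaders })) {
    merged[key.toLowerCase()] = value; // later (user-provided) values win
  }
  return merged;
}

// mergeHeaders({ 'x-goog-resumable': 'start' }, { 'X-Goog-Resumable': 'start' })
// -> { 'x-goog-resumable': 'start' }
```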
|
process
|
signed url pending work tracking issue for upcoming work on signed url post for resumable uploads will be fixed in this is failing because it represents users passing a x goog resumable start extension header in the request the nodejs storage library applies this header for the user if the user specified action resumable in signedurlconfig however the library applies x goog resumable start lower cased and when merged with the user provided extensionheaders both headers exist and result in duplicate signed header entries in the resulting url slashes in object name should not be url encoded will be fixed in
| 1
|
532,523
| 15,558,618,166
|
IssuesEvent
|
2021-03-16 10:30:33
|
epiphany-platform/epiphany
|
https://api.github.com/repos/epiphany-platform/epiphany
|
closed
|
[BUG] Erlang package versions specified in the requirements are missing from the external repository (RedHat/CentOS).
|
area/rabbit priority/critical type/bug
|
**Describe the bug**
Erlang packages in the version specified in the `requirements.txt` file are missing from the RabbitMQ Erlang repository.
It looks like new patches have been uploaded and old packages have been removed.
The script `download-requirements.sh` fails with error `ERROR: repoquery failed: package erlang-23.1.4 not found`.
Older epicli versions are also affected: `ERROR: repoquery failed: package erlang-21.3.8.7 not found`.
**How to reproduce**
Steps to reproduce the behavior:
1. Deploy any RHEL/CentOS cluster (repository vm is enough to reproduce)
**Expected behavior**
The cluster has been deployed successfully (epirepo has been set up with no issues)
**Environment**
- Cloud provider: [all]
- OS: [RHEL]
**Additional context**
```
[root@a5c2e1ce1548 /]# yum --showduplicates list erlang-21*
Available Packages
erlang.x86_64 21.3.8.14-1.el7 rabbitmq_erlang
erlang.x86_64 21.3.8.15-1.el7 rabbitmq_erlang
erlang.x86_64 21.3.8.16-1.el7 rabbitmq_erlang
erlang.x86_64 21.3.8.18-1.el7 rabbitmq_erlang
erlang.x86_64 21.3.8.21-1.el7 rabbitmq_erlang
```
```
[root@a5c2e1ce1548 /]# yum --showduplicates list erlang-23*
Available Packages
erlang.x86_64 23.1.2-1.el7 rabbitmq_erlang
erlang.x86_64 23.1.5-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.1-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.3-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.4-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.5-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.6-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.7-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.7-2.el7 rabbitmq_erlang
```
---
**DoD checklist**
* [x] Changelog updated (if affected version was released)
* [x] COMPONENTS.md updated / doesn't need to be updated
* [x] Automated tests passed (QA pipelines)
* [x] apply
* [x] upgrade
* [x] Case covered by automated test (if possible) :information_source: self-tested at runtime
* [ ] Idempotency tested
* [x] Documentation updated / doesn't need to be updated
* [x] All conversations in PR resolved
|
1.0
|
[BUG] Erlang package versions specified in the requirements are missing from the external repository (RedHat/CentOS). - **Describe the bug**
Erlang packages in the version specified in the `requirements.txt` file are missing from the RabbitMQ Erlang repository.
It looks like new patches have been uploaded and old packages have been removed.
The script `download-requirements.sh` fails with error `ERROR: repoquery failed: package erlang-23.1.4 not found`.
Older epicli versions are also affected: `ERROR: repoquery failed: package erlang-21.3.8.7 not found`.
**How to reproduce**
Steps to reproduce the behavior:
1. Deploy any RHEL/CentOS cluster (repository vm is enough to reproduce)
**Expected behavior**
The cluster has been deployed successfully (epirepo has been set up with no issues)
**Environment**
- Cloud provider: [all]
- OS: [RHEL]
**Additional context**
```
[root@a5c2e1ce1548 /]# yum --showduplicates list erlang-21*
Available Packages
erlang.x86_64 21.3.8.14-1.el7 rabbitmq_erlang
erlang.x86_64 21.3.8.15-1.el7 rabbitmq_erlang
erlang.x86_64 21.3.8.16-1.el7 rabbitmq_erlang
erlang.x86_64 21.3.8.18-1.el7 rabbitmq_erlang
erlang.x86_64 21.3.8.21-1.el7 rabbitmq_erlang
```
```
[root@a5c2e1ce1548 /]# yum --showduplicates list erlang-23*
Available Packages
erlang.x86_64 23.1.2-1.el7 rabbitmq_erlang
erlang.x86_64 23.1.5-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.1-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.3-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.4-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.5-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.6-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.7-1.el7 rabbitmq_erlang
erlang.x86_64 23.2.7-2.el7 rabbitmq_erlang
```
---
**DoD checklist**
* [x] Changelog updated (if affected version was released)
* [x] COMPONENTS.md updated / doesn't need to be updated
* [x] Automated tests passed (QA pipelines)
* [x] apply
* [x] upgrade
* [x] Case covered by automated test (if possible) :information_source: self-tested at runtime
* [ ] Idempotency tested
* [x] Documentation updated / doesn't need to be updated
* [x] All conversations in PR resolved
|
non_process
|
erlang package versions specified in the requirements are missing from the external repository redhat centos describe the bug erlang packages in the version specified in the requirements txt file are missing from the rabbitmq erlang repository it looks like new patches have been uploaded and old packages have been removed the script download requirements sh fails with error error repoquery failed package erlang not found older epicli versions are also affected error repoquery failed package erlang not found how to reproduce steps to reproduce the behavior deploy any rhel centos cluster repository vm is enough to reproduce expected behavior the cluster has been deployed successfully epirepo has been set up with no issues environment cloud provider os additional context yum showduplicates list erlang available packages erlang rabbitmq erlang erlang rabbitmq erlang erlang rabbitmq erlang erlang rabbitmq erlang erlang rabbitmq erlang yum showduplicates list erlang available packages erlang rabbitmq erlang erlang rabbitmq erlang erlang rabbitmq erlang erlang rabbitmq erlang erlang rabbitmq erlang erlang rabbitmq erlang erlang rabbitmq erlang erlang rabbitmq erlang erlang rabbitmq erlang dod checklist changelog updated if affected version was released components md updated doesn t need to be updated automated tests passed qa pipelines apply upgrade case covered by automated test if possible information source self tested at runtime idempotency tested documentation updated doesn t need to be updated all conversations in pr resolved
| 0
|
24,792
| 4,109,104,349
|
IssuesEvent
|
2016-06-06 18:21:48
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
e2e test flake: should support exec [It]
|
area/test kind/flake team/CSI-API Machinery SIG
|
Noticed this in #22736
```
Failure [1664.033 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
[k8s.io] Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
should support exec [It]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:227
Expected error:
<*errors.errorString | 0xc208544760>: {
s: "Error running &{/var/lib/jenkins/workspace/kubernetes-pull-build-test-e2e-gce/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.109.108 --kubeconfig=/var/lib/jenkins/workspace/kubernetes-pull-build-test-e2e-gce/.kube/config exec --namespace=e2e-tests-kubectl-hik24 -i nginx bash] [] 0xc208053a70 hi\n [] <nil> 0xc2082bd9a0 exit status 2 <nil> true [0xc208053a70 0xc208053ae0 0xc208053b00] [0xc208053ae0 0xc208053b00] [0xc208053ac8 0xc208053af8] [0x96dd30 0x96dd30] 0xc20834f800}:\nCommand stdout:\nhi\n\nstderr:\n\nerror:\nexit status 2\n",
}
Error running &{/var/lib/jenkins/workspace/kubernetes-pull-build-test-e2e-gce/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.109.108 --kubeconfig=/var/lib/jenkins/workspace/kubernetes-pull-build-test-e2e-gce/.kube/config exec --namespace=e2e-tests-kubectl-hik24 -i nginx bash] [] 0xc208053a70 hi
[] <nil> 0xc2082bd9a0 exit status 2 <nil> true [0xc208053a70 0xc208053ae0 0xc208053b00] [0xc208053ae0 0xc208053b00] [0xc208053ac8 0xc208053af8] [0x96dd30 0x96dd30] 0xc20834f800}:
Command stdout:
hi
stderr:
error:
exit status 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util.go:1591
```
This e2e test can pass in my local cluster test.
|
1.0
|
e2e test flake: should support exec [It] - Noticed this in #22736
```
Failure [1664.033 seconds]
[k8s.io] Kubectl client
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
[k8s.io] Simple pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework.go:420
should support exec [It]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:227
Expected error:
<*errors.errorString | 0xc208544760>: {
s: "Error running &{/var/lib/jenkins/workspace/kubernetes-pull-build-test-e2e-gce/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.109.108 --kubeconfig=/var/lib/jenkins/workspace/kubernetes-pull-build-test-e2e-gce/.kube/config exec --namespace=e2e-tests-kubectl-hik24 -i nginx bash] [] 0xc208053a70 hi\n [] <nil> 0xc2082bd9a0 exit status 2 <nil> true [0xc208053a70 0xc208053ae0 0xc208053b00] [0xc208053ae0 0xc208053b00] [0xc208053ac8 0xc208053af8] [0x96dd30 0x96dd30] 0xc20834f800}:\nCommand stdout:\nhi\n\nstderr:\n\nerror:\nexit status 2\n",
}
Error running &{/var/lib/jenkins/workspace/kubernetes-pull-build-test-e2e-gce/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.109.108 --kubeconfig=/var/lib/jenkins/workspace/kubernetes-pull-build-test-e2e-gce/.kube/config exec --namespace=e2e-tests-kubectl-hik24 -i nginx bash] [] 0xc208053a70 hi
[] <nil> 0xc2082bd9a0 exit status 2 <nil> true [0xc208053a70 0xc208053ae0 0xc208053b00] [0xc208053ae0 0xc208053b00] [0xc208053ac8 0xc208053af8] [0x96dd30 0x96dd30] 0xc20834f800}:
Command stdout:
hi
stderr:
error:
exit status 2
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util.go:1591
```
This e2e test can pass in my local cluster test.
|
non_process
|
test flake should support exec noticed this in failure kubectl client go src io kubernetes output dockerized go src io kubernetes test framework go simple pod go src io kubernetes output dockerized go src io kubernetes test framework go should support exec go src io kubernetes output dockerized go src io kubernetes test kubectl go expected error s error running var lib jenkins workspace kubernetes pull build test gce kubernetes platforms linux kubectl hi n exit status true ncommand stdout nhi n nstderr n nerror nexit status n error running var lib jenkins workspace kubernetes pull build test gce kubernetes platforms linux kubectl hi exit status true command stdout hi stderr error exit status not to have occurred go src io kubernetes output dockerized go src io kubernetes test util go this test can pass in my local cluster test
| 0
|
7,097
| 10,245,370,024
|
IssuesEvent
|
2019-08-20 12:43:16
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
child_process: exec EPIPE
|
child_process
|
Not sure if this is a bug. However, there doesn't seem to be a way to avoid `EPIPE` while piping to `exec()`. The only way is to listen to 'error' on the `stdin`. I would have expected that calling `unpipe()` on error would avoid the `EPIPE` error case.
```js
const { exec } = require('child_process')
const { Readable } = require('stream')
const assert = require('assert')
const proc = exec('asd', (err, stdout, stderr) => {
assert(err)
r.unpipe(proc.stdin)
})
const r = new Readable({
  read () {
    this.push('asd')
  }
})
// pipe() returns the destination stream, so keep `r` bound to the Readable
r.pipe(proc.stdin)
```
Will fail with:
```bash
Error: write EPIPE
at WriteWrap.afterWrite [as oncomplete] (net.js:788:14)
Emitted 'error' event at:
at Socket.onerror (_stream_readable.js:690:12)
at Socket.emit (events.js:182:13)
at onwriteError (_stream_writable.js:431:12)
at onwrite (_stream_writable.js:456:5)
at _destroy (internal/streams/destroy.js:40:7)
at Socket._destroy (net.js:613:3)
at Socket.destroy (internal/streams/destroy.js:32:8)
at WriteWrap.afterWrite [as oncomplete] (net.js:790:10)
```
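As the report says, the only current way to avoid the crash is listening for 'error' on `stdin`. A minimal sketch of that workaround, using the same placeholder command and payload as above:
```js
const { exec } = require('child_process')
const { Readable } = require('stream')

const proc = exec('asd', (err) => {
  // the command fails; a piped write may still be in flight
})
// swallow only the expected EPIPE so the process doesn't crash
proc.stdin.on('error', (err) => {
  if (err.code !== 'EPIPE') throw err
})

new Readable({
  read () { this.push('asd') }
}).pipe(proc.stdin)
```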
|
1.0
|
child_process: exec EPIPE - Not sure if this is a bug. However, there doesn't seem to be a way to avoid `EPIPE` while piping to `exec()`. The only way is to listen to 'error' on the `stdin`. I would have expected that calling `unpipe()` on error would avoid the `EPIPE` error case.
```js
const { exec } = require('child_process')
const { Readable } = require('stream')
const assert = require('assert')
const proc = exec('asd', (err, stdout, stderr) => {
assert(err)
r.unpipe(proc.stdin)
})
const r = new Readable({
  read () {
    this.push('asd')
  }
})
// pipe() returns the destination stream, so keep `r` bound to the Readable
r.pipe(proc.stdin)
```
Will fail with:
```bash
Error: write EPIPE
at WriteWrap.afterWrite [as oncomplete] (net.js:788:14)
Emitted 'error' event at:
at Socket.onerror (_stream_readable.js:690:12)
at Socket.emit (events.js:182:13)
at onwriteError (_stream_writable.js:431:12)
at onwrite (_stream_writable.js:456:5)
at _destroy (internal/streams/destroy.js:40:7)
at Socket._destroy (net.js:613:3)
at Socket.destroy (internal/streams/destroy.js:32:8)
at WriteWrap.afterWrite [as oncomplete] (net.js:790:10)
```
|
process
|
child process exec epipe not sure if this is a bug however there doesn t seem to be a way to avoid epipe while piping to exec the only way is to listen to error on the stdin i would have expected that calling unpipe on error would avoid the epipe error case js const exec require child process const readable require stream const assert require assert const proc exec asd err stdout stderr assert err r unpipe proc stdin const r new readable read this push asd pipe proc stdin will fail with bash error write epipe at writewrap afterwrite net js emitted error event at at socket onerror stream readable js at socket emit events js at onwriteerror stream writable js at onwrite stream writable js at destroy internal streams destroy js at socket destroy net js at socket destroy internal streams destroy js at writewrap afterwrite net js
| 1
|
337,374
| 30,247,583,271
|
IssuesEvent
|
2023-07-06 17:44:03
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix splitting_arrays.test_numpy_hsplit
|
NumPy Frontend Sub Task Failing Test
|
| | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5478319447"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5478319447"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5478319447"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5478319447"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5478319447"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix splitting_arrays.test_numpy_hsplit - | | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5478319447"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5478319447"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5478319447"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5478319447"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5478319447"><img src=https://img.shields.io/badge/-success-success></a>
|
non_process
|
fix splitting arrays test numpy hsplit jax a href src numpy a href src tensorflow a href src torch a href src paddle a href src
| 0
|
17,390
| 23,207,681,747
|
IssuesEvent
|
2022-08-02 07:21:30
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Not able to set scale parameter for SAGA Topographic Position Index
|
Plugins Processing Bug
|
### What is the bug or the crash?
This bug seems related to the implementation of the SAGA TPI tool in QGIS. In the dialog window it is not possible to change the 'scale parameter'. Upon running the plugin, the log shows that the default values of 0;100 are used.
### Steps to reproduce the issue
1. Go to SAGA Topographic Position Index (TPI)
### Versions
QGIS version
3.22.3-Białowieża
QGIS code revision
1628765ec7
Qt version
5.15.2
Python version
3.9.5
GDAL/OGR version
3.4.1
PROJ version
8.2.1
EPSG Registry database version
v10.041 (2021-12-03)
GEOS version
3.10.0-CAPI-1.16.0
SQLite version
3.35.2
PDAL version
2.3.0
PostgreSQL client version
13.0
SpatiaLite version
5.0.1
QWT version
6.1.3
QScintilla2 version
2.11.5
OS version
Windows 10 Version 2009
Active Python plugins
processing_r
3.1.1
TerrainShading
0.9.3
db_manager
0.1.20
grassprovider
2.12.99
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
Not able to set scale parameter for SAGA Topographic Position Index - ### What is the bug or the crash?
This bug seems related to the implementation of the SAGA TPI tool in QGIS. In the dialog window it is not possible to change the 'scale parameter'. Upon running the plugin, the log shows that the default values of 0;100 are used.
### Steps to reproduce the issue
1. Go to SAGA Topographic Position Index (TPI)
### Versions
QGIS version
3.22.3-Białowieża
QGIS code revision
1628765ec7
Qt version
5.15.2
Python version
3.9.5
GDAL/OGR version
3.4.1
PROJ version
8.2.1
EPSG Registry database version
v10.041 (2021-12-03)
GEOS version
3.10.0-CAPI-1.16.0
SQLite version
3.35.2
PDAL version
2.3.0
PostgreSQL client version
13.0
SpatiaLite version
5.0.1
QWT version
6.1.3
QScintilla2 version
2.11.5
OS version
Windows 10 Version 2009
Active Python plugins
processing_r
3.1.1
TerrainShading
0.9.3
db_manager
0.1.20
grassprovider
2.12.99
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
not able to set scale parameter for saga topographic position index what is the bug or the crash this bug seems related to the implementation of the saga tpi tool in qgis in the dialog window it is not possible to change the scale parameter upon running the plugin the log shows that the default values of are used steps to reproduce the issue go to saga topographic position index tpi versions qgis version białowieża qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version spatialite version qwt version version os version windows version active python plugins processing r terrainshading db manager grassprovider processing sagaprovider supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
18,015
| 24,032,591,083
|
IssuesEvent
|
2022-09-15 16:09:39
|
googleapis/google-cloud-java
|
https://api.github.com/repos/googleapis/google-cloud-java
|
opened
|
Your .repo-metadata.json files have a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* api_shortname 'apigee-registry' invalid in java-apigee-registry/.repo-metadata.json
* api_shortname 'beyondcorp-appconnections' invalid in java-beyondcorp-appconnections/.repo-metadata.json
* api_shortname 'beyondcorp-appconnectors' invalid in java-beyondcorp-appconnectors/.repo-metadata.json
* api_shortname 'beyondcorp-appgateways' invalid in java-beyondcorp-appgateways/.repo-metadata.json
* api_shortname 'beyondcorp-clientconnectorservices' invalid in java-beyondcorp-clientconnectorservices/.repo-metadata.json
* api_shortname 'beyondcorp-clientgateways' invalid in java-beyondcorp-clientgateways/.repo-metadata.json
* api_shortname 'dialogflow-cx' invalid in java-dialogflow-cx/.repo-metadata.json
* api_shortname 'gke-backup' invalid in java-gke-backup/.repo-metadata.json
* api_shortname 'gke-multi-cloud' invalid in java-gke-multi-cloud/.repo-metadata.json
* api_shortname 'iam-admin' invalid in java-iam-admin/.repo-metadata.json
* api_shortname 'monitoring-dashboards' invalid in java-monitoring-dashboards/.repo-metadata.json
* api_shortname 'orchestration-airflow' invalid in java-orchestration-airflow/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json files have a problem 🤒 - You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* api_shortname 'apigee-registry' invalid in java-apigee-registry/.repo-metadata.json
* api_shortname 'beyondcorp-appconnections' invalid in java-beyondcorp-appconnections/.repo-metadata.json
* api_shortname 'beyondcorp-appconnectors' invalid in java-beyondcorp-appconnectors/.repo-metadata.json
* api_shortname 'beyondcorp-appgateways' invalid in java-beyondcorp-appgateways/.repo-metadata.json
* api_shortname 'beyondcorp-clientconnectorservices' invalid in java-beyondcorp-clientconnectorservices/.repo-metadata.json
* api_shortname 'beyondcorp-clientgateways' invalid in java-beyondcorp-clientgateways/.repo-metadata.json
* api_shortname 'dialogflow-cx' invalid in java-dialogflow-cx/.repo-metadata.json
* api_shortname 'gke-backup' invalid in java-gke-backup/.repo-metadata.json
* api_shortname 'gke-multi-cloud' invalid in java-gke-multi-cloud/.repo-metadata.json
* api_shortname 'iam-admin' invalid in java-iam-admin/.repo-metadata.json
* api_shortname 'monitoring-dashboards' invalid in java-monitoring-dashboards/.repo-metadata.json
* api_shortname 'orchestration-airflow' invalid in java-orchestration-airflow/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json files have a problem 🤒 you have a problem with your repo metadata json files result of scan 📈 api shortname apigee registry invalid in java apigee registry repo metadata json api shortname beyondcorp appconnections invalid in java beyondcorp appconnections repo metadata json api shortname beyondcorp appconnectors invalid in java beyondcorp appconnectors repo metadata json api shortname beyondcorp appgateways invalid in java beyondcorp appgateways repo metadata json api shortname beyondcorp clientconnectorservices invalid in java beyondcorp clientconnectorservices repo metadata json api shortname beyondcorp clientgateways invalid in java beyondcorp clientgateways repo metadata json api shortname dialogflow cx invalid in java dialogflow cx repo metadata json api shortname gke backup invalid in java gke backup repo metadata json api shortname gke multi cloud invalid in java gke multi cloud repo metadata json api shortname iam admin invalid in java iam admin repo metadata json api shortname monitoring dashboards invalid in java monitoring dashboards repo metadata json api shortname orchestration airflow invalid in java orchestration airflow repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
| 1
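Every failure listed above is a hyphenated api_shortname, and per the linked API index a valid shortname must equal the first label of the API's hostName (apigeeregistry.googleapis.com has no hyphen, so 'apigee-registry' cannot match). A minimal sketch of that check, assuming api-index-v1.json exposes an "apis" list whose entries carry a "hostName" field:

```python
# Rough reimplementation of the repo-metadata lint rule quoted above.
# Assumption: api-index-v1.json has the shape {"apis": [{"hostName": ...}, ...]}.
import json
import pathlib
import urllib.request

API_INDEX = ("https://raw.githubusercontent.com/googleapis/googleapis/"
             "master/api-index-v1.json")

with urllib.request.urlopen(API_INDEX) as resp:
    index = json.load(resp)

# A shortname is valid when it matches the subdomain of some API hostName,
# e.g. "apigeeregistry.googleapis.com" -> "apigeeregistry".
valid = {api["hostName"].split(".")[0] for api in index.get("apis", [])}

for meta in pathlib.Path(".").glob("java-*/.repo-metadata.json"):
    shortname = json.loads(meta.read_text()).get("api_shortname", "")
    if shortname not in valid:
        print(f"api_shortname '{shortname}' invalid in {meta}")
```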
|
3,293
| 4,208,866,074
|
IssuesEvent
|
2016-06-29 01:18:02
|
dotnet/roslyn-analyzers
|
https://api.github.com/repos/dotnet/roslyn-analyzers
|
closed
|
Build fails with "Access to the path '...\extensionSdks.en-US.cache' is denied."
|
Area-Infrastructure Bug
|
#### Repro steps
1. Follow steps at https://github.com/dotnet/roslyn-analyzers#getting-started on VS 2015 Update2
2. First error output for `msbuild src\Analyzers.sln`:
```
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\VSSDK\Microsoft.VsSDK.targets(655,5): error VSSDK1040: There was a problem enabling the extension with a VSIX identifier of "8ea2cb5d-390d-4b56-b9b5-8d3175368d69". Access to the path 'C:\Users\Bart\AppData\Local\Microsoft\VisualStudio\14.0RoslynDev\Extensions\extensionSdks.en-US.cache' is denied. [e:\Bart\Source\Repos\roslyn-analyzers\src\ApiReview.Analyzers\Setup\ApiReview.Analyzers.Setup.csproj]
```
The roslyn repository had [a similar failure](https://github.com/dotnet/roslyn/issues/10407); that fix can possibly be applied here too.
|
1.0
|
Build fails with "Access to the path '...\extensionSdks.en-US.cache' is denied." - #### Repro steps
1. Follow steps at https://github.com/dotnet/roslyn-analyzers#getting-started on VS 2015 Update2
2. First error output for `msbuild src\Analyzers.sln`:
```
C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\VSSDK\Microsoft.VsSDK.targets(655,5): error VSSDK1040: There was a problem enabling the extension with a VSIX identifier of "8ea2cb5d-390d-4b56-b9b5-8d3175368d69". Access to the path 'C:\Users\Bart\AppData\Local\Microsoft\VisualStudio\14.0RoslynDev\Extensions\extensionSdks.en-US.cache' is denied. [e:\Bart\Source\Repos\roslyn-analyzers\src\ApiReview.Analyzers\Setup\ApiReview.Analyzers.Setup.csproj]
```
The roslyn repository had [a similar failure](https://github.com/dotnet/roslyn/issues/10407); that fix can possibly be applied here too.
|
non_process
|
build fails with access to the path extensionsdks en us cache is denied repro steps follow steps at on vs first error output for msbuild src analyzers sln c program files msbuild microsoft visualstudio vssdk microsoft vss dk targets error there was a problem enabling the extension with a vsix identifier of access to the path c users bart appdata local microsoft visualstudio extensi ons extensionsdks en us cache is denied e bart source repos roslyn analyzer s src apireview analyzers setup apireview analyzers setup csproj the roslyn repository had that fix can possibly be applied here too
| 0
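The denied path in the record above is an extension cache inside the 14.0RoslynDev experimental hive that MSBuild cannot overwrite, typically because a previous build left it read-only. One hedged workaround, in the spirit of the roslyn fix the reporter links, is to clear the attribute and delete the stale caches before rebuilding; the directory layout is taken from the error message and the hive name is machine-specific.

```python
# Sketch: remove stale, read-only extension caches from the RoslynDev hive
# (Windows only). Paths mirror the error message above; adjust the VS version
# and hive suffix for your machine.
import os
import pathlib
import stat

hive = (pathlib.Path(os.environ["LOCALAPPDATA"]) / "Microsoft" /
        "VisualStudio" / "14.0RoslynDev" / "Extensions")

for cache in hive.glob("*.cache"):
    cache.chmod(stat.S_IWRITE)  # drop the read-only attribute blocking deletion
    cache.unlink()
    print(f"removed {cache}")
```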
|
267,299
| 23,290,959,298
|
IssuesEvent
|
2022-08-05 22:50:50
|
danbudris/vulnerabilityProcessor
|
https://api.github.com/repos/danbudris/vulnerabilityProcessor
|
opened
|
HIGH vulnerability ALAS2-2021-1710 - ca-certificates affecting 1 resources
|
hey there test severity/HIGH
|
Issue auto cut by Vulnerability Processor
Processor Version: `v0.0.0-dev`
Message Source: `EventBridge`
Finding Source: `inspectorV2`
HIGH vulnerability ALAS2-2021-1710 detected in 1 resources
- arn:aws:ecr:us-west-2:338155784195:repository/test-inspector/sha256:7585bd31388fb7584260436e613c871868fd1509a728bf0c60bfe3f792e43aff
Affected Packages:
- ca-certificates
Associated Pull Requests:
- https://github.com/danbudris/vulnerabilityProcessor/pull/1243
|
1.0
|
HIGH vulnerability ALAS2-2021-1710 - ca-certificates affecting 1 resources - Issue auto cut by Vulnerability Processor
Processor Version: `v0.0.0-dev`
Message Source: `EventBridge`
Finding Source: `inspectorV2`
HIGH vulnerability ALAS2-2021-1710 detected in 1 resources
- arn:aws:ecr:us-west-2:338155784195:repository/test-inspector/sha256:7585bd31388fb7584260436e613c871868fd1509a728bf0c60bfe3f792e43aff
Affected Packages:
- ca-certificates
Associated Pull Requests:
- https://github.com/danbudris/vulnerabilityProcessor/pull/1243
|
non_process
|
high vulnerability ca certificates affecting resources issue auto cut by vulnerability processor processor version dev message source eventbridge finding source high vulnerability detected in resources arn aws ecr us west repository test inspector affected packages ca certificates associated pull requests
| 0
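The auto-cut issue above is assembled from an Amazon Inspector v2 finding delivered over EventBridge. A sketch of the extraction such a processor might perform, where the "detail" key names follow AWS's documented Inspector v2 event shape but should be treated as assumptions:

```python
# Illustrative only: pull the fields the issue body above surfaces
# (severity, vulnerability id, resources, packages) out of an Inspector v2
# EventBridge event. Key names are assumptions based on AWS's documented shape.
def summarize_finding(event: dict) -> str:
    detail = event["detail"]
    vuln = detail["packageVulnerabilityDetails"]
    resources = [r["id"] for r in detail["resources"]]
    packages = [p["name"] for p in vuln["vulnerablePackages"]]

    lines = [
        f"{detail['severity']} vulnerability {vuln['vulnerabilityId']} "
        f"detected in {len(resources)} resources"
    ]
    lines += [f"- {r}" for r in resources]
    lines.append("Affected Packages:")
    lines += [f"- {p}" for p in packages]
    return "\n".join(lines)
```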
|