Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
19,489 | 25,800,280,421 | IssuesEvent | 2022-12-10 23:47:24 | LLazyEmail/nomoretogo_email_template | https://api.github.com/repos/LLazyEmail/nomoretogo_email_template | closed | move to config file later | todo in process | https://github.com/LLazyEmail/nomoretogo_email_template/blob/a4b17c64e8f9ea0ec93b8aed405956090af80786/src/display/socialLinksData.js#L2
```javascript
// Footer params
// TODO move to config file later
const socialsLinksParams = [{
href: "https://www.facebook.com/nomoretogo/",
src: "https://raw.githubusercontent.com/LLazyEmail/nomoretogo_email_template/main/data/images/facebook.webp"
},
{
href: "https://twitter.com/nomoretogo",
src: "https://raw.githubusercontent.com/LLazyEmail/nomoretogo_email_template/main/data/images/twitter.webp"
},
{
href: "https://www.instagram.com/nomoretogo/",
src: "https://raw.githubusercontent.com/LLazyEmail/nomoretogo_email_template/main/data/images/instagram.webp"
}];
export default socialsLinksParams;
``` | 1.0 | move to config file later - https://github.com/LLazyEmail/nomoretogo_email_template/blob/a4b17c64e8f9ea0ec93b8aed405956090af80786/src/display/socialLinksData.js#L2
```javascript
// Footer params
// TODO move to config file later
const socialsLinksParams = [{
href: "https://www.facebook.com/nomoretogo/",
src: "https://raw.githubusercontent.com/LLazyEmail/nomoretogo_email_template/main/data/images/facebook.webp"
},
{
href: "https://twitter.com/nomoretogo",
src: "https://raw.githubusercontent.com/LLazyEmail/nomoretogo_email_template/main/data/images/twitter.webp"
},
{
href: "https://www.instagram.com/nomoretogo/",
src: "https://raw.githubusercontent.com/LLazyEmail/nomoretogo_email_template/main/data/images/instagram.webp"
}];
export default socialsLinksParams;
``` | process | move to config file later javascript footer params todo move to config file later const socialslinksparams href src href src href src export default socialslinksparams | 1 |
11,718 | 14,547,572,895 | IssuesEvent | 2020-12-15 23:15:13 | pacificclimate/quail | https://api.github.com/repos/pacificclimate/quail | closed | Maximum Consecutive Dry Days | process | ## Description
This function computes the climdex index CDD: the annual maximum length of dry spells, in days. Dry spells are considered to be sequences of days where daily precipitation is less than 1 mm per day.
## Function to wrap
[`climdex.cdd`](https://github.com/pacificclimate/climdex.pcic/blob/master/R/climdex.r#L1206)
| 1.0 | Maximum Consecutive Dry Days - ## Description
This function computes the climdex index CDD: the annual maximum length of dry spells, in days. Dry spells are considered to be sequences of days where daily precipitation is less than 1 mm per day.
## Function to wrap
[`climdex.cdd`](https://github.com/pacificclimate/climdex.pcic/blob/master/R/climdex.r#L1206)
| process | maximum consecutive dry days description this function computes the climdex index cdd the annual maximum length of dry spells in days dry spells are considered to be sequences of days where daily precipitation is less than per day function to wrap | 1 |
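The CDD index described in the row above reduces to a longest-run scan over daily precipitation. A minimal sketch follows; the function and parameter names are illustrative, not the climdex.pcic API:

```javascript
// Hedged sketch of the CDD index: the annual maximum run of consecutive
// days with precipitation below 1 mm/day.
function maxConsecutiveDryDays(dailyPrecipMm, thresholdMm = 1.0) {
  let longest = 0;
  let current = 0;
  for (const precip of dailyPrecipMm) {
    if (precip < thresholdMm) {
      // Extend the current dry spell and track the longest one seen.
      current += 1;
      if (current > longest) longest = current;
    } else {
      // A wet day resets the running dry-spell length.
      current = 0;
    }
  }
  return longest;
}
```

The wrapped `climdex.cdd` additionally handles per-year grouping and missing data, which this sketch omits.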
378,445 | 26,295,441,516 | IssuesEvent | 2023-01-08 22:47:01 | saltstack/salt | https://api.github.com/repos/saltstack/salt | opened | [DOCS] Getting Started doc has broken SSH roster link | Documentation needs-triage | **Description**
The "How Rosters Work" link is broken on this page https://docs.saltproject.io/en/getstarted/ssh/connect.html and links to https://docs.saltproject.io/en/develop/topics/ssh/roster.html#how-rosters-work which no longer exists.
**Suggested Fix**
Link to the correct page https://docs.saltproject.io/en/latest/topics/ssh/roster.html#how-rosters-work
**Type of documentation**
Getting Started guide
**Location or format of documentation**
https://docs.saltproject.io/en/getstarted/ssh/connect.html
**Additional context**
Add any other context or screenshots about the feature request here.
| 1.0 | [DOCS] Getting Started doc has broken SSH roster link - **Description**
The "How Rosters Work" link is broken on this page https://docs.saltproject.io/en/getstarted/ssh/connect.html and links to https://docs.saltproject.io/en/develop/topics/ssh/roster.html#how-rosters-work which no longer exists.
**Suggested Fix**
Link to the correct page https://docs.saltproject.io/en/latest/topics/ssh/roster.html#how-rosters-work
**Type of documentation**
Getting Started guide
**Location or format of documentation**
https://docs.saltproject.io/en/getstarted/ssh/connect.html
**Additional context**
Add any other context or screenshots about the feature request here.
| non_process | getting started doc has broken ssh roster link description the how rosters work link is broken on this page and links to which no longer exists suggested fix link to the correct page type of documentation getting started guide location or format of documentation additional context add any other context or screenshots about the feature request here | 0 |
20,187 | 26,751,832,185 | IssuesEvent | 2023-01-30 20:16:20 | googleapis/nodejs-gce-images | https://api.github.com/repos/googleapis/nodejs-gce-images | closed | Your .repo-metadata.json file has a problem 🤒 | type: process api: compute repo-metadata: lint | You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'gceimages' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | 1.0 | Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'gceimages' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | process | your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname gceimages invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions | 1 |
10,818 | 13,609,291,483 | IssuesEvent | 2020-09-23 04:50:44 | googleapis/java-trace | https://api.github.com/repos/googleapis/java-trace | closed | Dependency Dashboard | api: cloudtrace type: process | This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-trace-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-trace to v1.2.1
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| 1.0 | Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-trace-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-trace to v1.2.1
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| process | dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any chore deps update dependency com google cloud google cloud trace to check this box to trigger a request for renovate to run again on this repository | 1 |
11,244 | 14,015,313,617 | IssuesEvent | 2020-10-29 13:10:13 | tdwg/dwc | https://api.github.com/repos/tdwg/dwc | closed | Change term - establishmentMeans | Process - implement Term - change | ## Change term
* Submitter: John Wieczorek
* Justification (why is this change necessary?): Consistency and correctness
* Proponents (who needs this change): Everyone
Proposed new attributes of the term:
* Examples:
"`native`, `nativeReintroduced`, `introduced`, `introducedAssistedColonisation`, `vagrant`, `uncertain`"
(the example uncertain was missing an opening left single quote)
| 1.0 | Change term - establishmentMeans - ## Change term
* Submitter: John Wieczorek
* Justification (why is this change necessary?): Consistency and correctness
* Proponents (who needs this change): Everyone
Proposed new attributes of the term:
* Examples:
"`native`, `nativeReintroduced`, `introduced`, `introducedAssistedColonisation`, `vagrant`, `uncertain`"
(the example uncertain was missing an opening left single quote)
| process | change term establishmentmeans change term submitter john wieczorek justification why is this change necessary consistency and correctness proponents who needs this change everyone proposed new attributes of the term examples native nativereintroduced introduced introducedassistedcolonisation vagrant uncertain the example uncertain was missing an opening left single quote | 1 |
10,575 | 13,386,034,985 | IssuesEvent | 2020-09-02 14:13:52 | cypress-io/cypress | https://api.github.com/repos/cypress-io/cypress | closed | Internal test, percy snapshot doesn't have css loaded | internal-priority process: flaky test stage: needs review type: chore | [https://github.com/cypress-io/cypress/pull/8469](https://github.com/cypress-io/cypress/pull/8469|smart-link)
<!-- Is this a question? Questions WILL BE CLOSED. Ask in our chat [https://on.cypress.io/chat](https://on.cypress.io/chat) -->
### Current behavior:
Sometimes the CSS doesn't load by the time the screenshot is taken, so we get snapshots like this to compare against in our internal runner tests.
[https://percy.io/cypress-io/cypress/builds/6625916](https://percy.io/cypress-io/cypress/builds/6625916)

| 1.0 | Internal test, percy snapshot doesn't have css loaded - [https://github.com/cypress-io/cypress/pull/8469](https://github.com/cypress-io/cypress/pull/8469|smart-link)
<!-- Is this a question? Questions WILL BE CLOSED. Ask in our chat [https://on.cypress.io/chat](https://on.cypress.io/chat) -->
### Current behavior:
Sometimes the CSS doesn't load by the time the screenshot is taken, so we get snapshots like this to compare against in our internal runner tests.
[https://percy.io/cypress-io/cypress/builds/6625916](https://percy.io/cypress-io/cypress/builds/6625916)

| process | internal test percy snapshot doesn t have css loaded current behavior sometimes the css doesn t load by the time the screenshot is taken so we get snapshots like this to compare against in our internal runner tests | 1 |
16,123 | 30,018,900,602 | IssuesEvent | 2023-06-26 21:07:14 | unicode-org/message-format-wg | https://api.github.com/repos/unicode-org/message-format-wg | closed | Responsive localization | requirements resolve-candidate | Historically, localization was always predominantly a server-side, relatively static operation. Majority of (non-l10n) engineers I worked with usually start with a concept that l10n is a phase done **as early as possible** - for example during pre-processing, maybe build time, or on the server-side in a server-client model. Even when localization is performed at runtime, it's heavily optimized out and cached with an assumption that any invalidation will be performed by throwing out the whole cached UI and rebuilding it in a new locale.
Fluent, very early on its life-cycle, went the opposite way - localization is performed **as late** as possible.
Fluent can still be used to localize a template that is, say, sent to the client side from Django, PHP etc., but in the core use case for us - Firefox UI - we perform the localization as the final step of the pipeline, right before layout and rendering.
I'm bringing it up here because I believe that, depending on how useful this group finds such a use case, it may have an impact on how we design several features that affect the lifecycle of runtime localization.
It changes the `Build -> Package? -> Run -> Translate -> Display -> Close` model into `Build -> Package? -> Run -> Translate -> Display -> Retranslate -> Redisplay -> Close`.
Adding the `retranslate` step has an impact on how we, among others, think of fallback (we may start an app with 80% es-MX falling back to es, and during runtime we may update es-MX to 90% without restarting the app), on the dominant API we use for developers (declarative vs imperative), and on some constraints on richer features like DOM Overlays, because we have to take into account that a DOM may be translated into one locale, and then we need a way to apply a new translation.
Instead of just working with state `Untranslated -> Translated` we also have a state `Translated -> Retranslated` where one locale may have added/removed/modified some bits of the widget.
This **late** model has several advantages that were crucial to us, which I'd like to present:
1) it allows developers to work with a declarative, stateful UI, updating the state as they go, without worrying about the synchronous or asynchronous manner in which the actual localization is going to be applied.
To illustrate the difference, in one component (Firefox Preferences) with ~1000 strings which we migrated from the previous l10n API to Fluent, we reduced the number of imperative calls by **10x**.
Instead of developers writing:
```js
let value = Services.strings.getFormattedValue("processCount", {count: 5});
let accesskey = Services.strings.getFormattedValue("processCount-accesskey");
document.getElementById("process_count").textContent = value;
document.getElementById("process_count").setAttribute("accesskey", accesskey);
```
they now write:
```js
document.l10n.setAttributes(
document.getElementById("process_count"),
"processCount",
{count: 5}
);
```
`setAttributes` is [a very simple DOM function](https://github.com/projectfluent/fluent.js/blob/master/fluent-dom/src/dom_localization.js#L87-L95) which sets two attributes: `data-l10n-id` and `data-l10n-args`.
Separately, we have a `MutationObserver` which reacts to that change by adding the element to a pool of strings to be translated in the next animation frame, and then performs the localization.
From the developer perspective, they just set the state of the DOM, and it's out of their concern how and when the translation will happen.
For our discussed use case on the other hand, the value is that we always have a DOM tree available with all the information needed to apply new translation - we just need to traverse the DOM, find all `data-l10n-id`/`data-l10n-args` and translate them to a new locale.
Here you can see [a very old demo](http://informationisart.com/24/) of that feature combined with dynamic language packs.
Many UI toolkits try to emulate such a feature by preserving state, rebuilding the UI, and reconnecting the state, but FluentDOM allows you to just update the DOM on the fly without ever touching the state (we can update your translation while you interact with the UI!).
This feature is already fully implemented in Firefox Desktop, and we can change the translation on the fly for the subset of our UI that we have already migrated to Fluent.
2) Runtime pseudolocalization
A natural side-effect of the above is that we can pseudolocalize on the fly, at runtime, by pushing all translations through a `transform(String) -> String` function and applying them to the DOM.
This means that shipping pseudolocalization has no cost on binary size (reason why Android avoids shipping pseudolocales to production) and one can provide many, even customizable, pseudolocalization strategies (for example adjustable length increase to stress test layout).
Here's [a demo of that feature](https://diary.braniecki.net/2018/06/07/pseudolocalization-in-firefox/).
3) Runtime caching
Caching is still possible (we load untranslated UI, apply translation, cache, then load from cache unless locale changed), and its invalidation just becomes part of the `translate -> retranslate` state which also simplifies things.
4) Actual responsive l10n
This is a feature we prototyped, but never got to actually implement, which I see as one of the potential "north stars" - features we may not implement ourselves, but may want to make the outcome of our work be able to be built on top of.
The idea behind it is to use Fluent syntax to provide reactive retranslation on external conditions.
A common idea we wanted to tackle was a scenario where the UI operates in adjustable available space.
Imagine a UI which may be displayed on a TV, laptop, tablet, or phone.
For the large space, we'd like to use a long string, but when space shrinks, we'd like to display a shorter version of the same message rather than cut out with ellipses.
What's more, different locales may face different challenges - while German may struggle to fit the full text even on large screen, Chinese won't likely need the large version at all.
Since the experience is *per-locale* and the condition of variant selection is *per-locale*, we wanted to use Fluent for it, more or less like so:
```
prefs-cursor-keys-option = { SCREEN_WIDTH() ->
[wide] Always use the cursor keys to navigate within pages.
[medium] Use the cursor keys to navigate in pages.
*[narrow] Use keys to navigate in pages.
```
This allows an English translation to adjust the width of the message to available space.
The real goal was also the ability to interpret the message at runtime by FluentDOM and hook an `onScreenSizeChange` event handler to retranslate the message.
This handler will be hooked only if the locale actually depends on `SCREEN_WIDTH`.
We never put this feature in production, but we validated that Fluent's data model and API make this possible.
[Here's a demo](https://www.youtube.com/watch?v=V5K6YN7MMOA).
====
Such flexibility may be seen as very high level, and I'd say that 90% of the work to make such features work is.
But there's 10% of the work that depends on how the low-level data model is designed - how fallback works, how interpolation works, what I/O is possible.
I'd like to put this proposal in front of this group as a feature that we'd like to make sure our outcome doesn't make impossible. | 1.0 | Responsive localization - Historically, localization was always predominantly a server-side, relatively static operation. Majority of (non-l10n) engineers I worked with usually start with a concept that l10n is a phase done **as early as possible** - for example during pre-processing, maybe build time, or on the server-side in a server-client model. Even when localization is performed at runtime, it's heavily optimized out and cached with an assumption that any invalidation will be performed by throwing out the whole cached UI and rebuilding it in a new locale.
Fluent, very early on its life-cycle, went the opposite way - localization is performed **as late** as possible.
Fluent can still be used to localize a template that is, say, sent to the client side from Django, PHP etc., but in the core use case for us - Firefox UI - we perform the localization as the final step of the pipeline, right before layout and rendering.
I'm bringing it up here because I believe that, depending on how useful this group finds such a use case, it may have an impact on how we design several features that affect the lifecycle of runtime localization.
It changes the `Build -> Package? -> Run -> Translate -> Display -> Close` model into `Build -> Package? -> Run -> Translate -> Display -> Retranslate -> Redisplay -> Close`.
Adding the `retranslate` step has an impact on how we, among others, think of fallback (we may start an app with 80% es-MX falling back to es, and during runtime we may update es-MX to 90% without restarting the app), on the dominant API we use for developers (declarative vs imperative), and on some constraints on richer features like DOM Overlays, because we have to take into account that a DOM may be translated into one locale, and then we need a way to apply a new translation.
Instead of just working with state `Untranslated -> Translated` we also have a state `Translated -> Retranslated` where one locale may have added/removed/modified some bits of the widget.
This **late** model has several advantages that were crucial to us, which I'd like to present:
1) it allows developers to work with a declarative, stateful UI, updating the state as they go, without worrying about the synchronous or asynchronous manner in which the actual localization is going to be applied.
To illustrate the difference, in one component (Firefox Preferences) with ~1000 strings which we migrated from the previous l10n API to Fluent, we reduced the number of imperative calls by **10x**.
Instead of developers writing:
```js
let value = Services.strings.getFormattedValue("processCount", {count: 5});
let accesskey = Services.strings.getFormattedValue("processCount-accesskey");
document.getElementById("process_count").textContent = value;
document.getElementById("process_count").setAttribute("accesskey", accesskey);
```
they now write:
```js
document.l10n.setAttributes(
document.getElementById("process_count"),
"processCount",
{count: 5}
);
```
`setAttributes` is [a very simple DOM function](https://github.com/projectfluent/fluent.js/blob/master/fluent-dom/src/dom_localization.js#L87-L95) which sets two attributes: `data-l10n-id` and `data-l10n-args`.
Separately, we have a `MutationObserver` which reacts to that change by adding the element to a pool of strings to be translated in the next animation frame, and then performs the localization.
From the developer perspective, they just set the state of the DOM, and it's out of their concern how and when the translation will happen.
For our discussed use case on the other hand, the value is that we always have a DOM tree available with all the information needed to apply new translation - we just need to traverse the DOM, find all `data-l10n-id`/`data-l10n-args` and translate them to a new locale.
Here you can see [a very old demo](http://informationisart.com/24/) of that feature combined with dynamic language packs.
Many UI toolkits try to emulate such a feature by preserving state, rebuilding the UI, and reconnecting the state, but FluentDOM allows you to just update the DOM on the fly without ever touching the state (we can update your translation while you interact with the UI!).
This feature is already fully implemented in Firefox Desktop, and we can change the translation on the fly for the subset of our UI that we have already migrated to Fluent.
2) Runtime pseudolocalization
A natural side-effect of the above is that we can pseudolocalize on the fly, at runtime, by pushing all translations through a `transform(String) -> String` function and applying them to the DOM.
This means that shipping pseudolocalization has no cost on binary size (reason why Android avoids shipping pseudolocales to production) and one can provide many, even customizable, pseudolocalization strategies (for example adjustable length increase to stress test layout).
Here's [a demo of that feature](https://diary.braniecki.net/2018/06/07/pseudolocalization-in-firefox/).
3) Runtime caching
Caching is still possible (we load untranslated UI, apply translation, cache, then load from cache unless locale changed), and its invalidation just becomes part of the `translate -> retranslate` state which also simplifies things.
4) Actual responsive l10n
This is a feature we prototyped, but never got to actually implement, which I see as one of the potential "north stars" - features we may not implement ourselves, but may want to make the outcome of our work be able to be built on top of.
The idea behind it is to use Fluent syntax to provide reactive retranslation on external conditions.
A common idea we wanted to tackle was a scenario where the UI operates in adjustable available space.
Imagine a UI which may be displayed on a TV, laptop, tablet, or phone.
For the large space, we'd like to use a long string, but when space shrinks, we'd like to display a shorter version of the same message rather than cut out with ellipses.
What's more, different locales may face different challenges - while German may struggle to fit the full text even on large screen, Chinese won't likely need the large version at all.
Since the experience is *per-locale* and the condition of variant selection is *per-locale*, we wanted to use Fluent for it, more or less like so:
```
prefs-cursor-keys-option = { SCREEN_WIDTH() ->
[wide] Always use the cursor keys to navigate within pages.
[medium] Use the cursor keys to navigate in pages.
*[narrow] Use keys to navigate in pages.
```
This allows an English translation to adjust the width of the message to available space.
The real goal was also the ability to interpret the message at runtime by FluentDOM and hook an `onScreenSizeChange` event handler to retranslate the message.
This handler will be hooked only if the locale actually depends on `SCREEN_WIDTH`.
We never put this feature in production, but we validated that Fluent's data model and API make this possible.
[Here's a demo](https://www.youtube.com/watch?v=V5K6YN7MMOA).
====
Such flexibility may be seen as very high level, and I'd say that 90% of the work to make such features work is.
But there's 10% of the work that depends on how the low-level data model is designed - how fallback works, how interpolation works, what I/O is possible.
I'd like to put this proposal in front of this group as a feature that we'd like to make sure our outcome doesn't make impossible. | non_process | responsive localization historically localization was always predominantly a server side relatively static operation majority of non engineers i worked with usually start with a concept that is a phase done as early as possible for example during pre processing maybe build time or on the server side in a server client model even when localization is performed at runtime it s heavily optimized out and cached with an assumption that any invalidation will be performed by throwing out the whole cached ui and rebuilding it in a new locale fluent very early on its life cycle went the opposite way localization is performed as late as possible fluent can still be used to localize a template that is say sent to the client side from django php etc but in the core use case for us firefox ui we perform the localization as the final step of the pipeline right before layout and rendering i m bringing it up here because i believe that depending on how useful this group will find such use case it may have an impact on how we design several features that impact lifecycle of the runtime localization it changes the build package run translate display close model into build package run translate display retranslate redisplay close adding the retranslate has impact on how we among others think of fallback we may start an app with es mx falling back to es and during runtime we may update es mx to without restarting an app on the dominant api we use for develpers declarative vs imperative and on some constrains on more rich features like dom overlays because we have to take into account that a dom may be translated into one locale and then we need a way to apply a new translation instead of just working with state untranslated translated we also have a state translated retranslated where one locale may have added removed modified some bits of 
the widget this late model has several advantages that were crucial to us which i d like to present it allows developers to work with declarative state full ui updating the state as they go without worrying about synchronous or asynchronous manner in which the actual localization is going to be applied to illustrate the difference in one component firefox preferences with strings which we migrated from the previous api to fluent we reduced the number of imperative calls by instead of developers writing js let value services strings getformattedvalue processcount count let accesskey services strings getformattedvalue processcount accesskey document getelementbyid process count textcontent value document getelementbyid process count setattribute accesskey accesskey they now write js document setattributes document getelementbyid process count processcount count setattributes is which sets two attributes data id and data args separately we have a mutationobserver which reacts to that change by adding the element to a pool of strings to be translated in the next animation frame and then performs the localization from the developer perspective they just set the state of dom and its out of their concern how and when the translation will happen for our discussed use case on the other hand the value is that we always have a dom tree available with all the information needed to apply new translation we just need to traverse the dom find all data id data args and translate them to a new locale here you can see of that feature combined with dynamic language packs many ui toolkits try to emulate such feature by preserving state and rebuilding the ui and reconnecting the state but fluentdom allows you to just update the dom on fly without ever touching the state we can update your translation while you interact with the ui this feature is already fully implemented in firefox desktop now and we can change translation on fly for the subset of our ui that we already migrated to 
fluent runtime pseudolocalization a natural side effect of the above is that we can pseudolocalize on fly at runtime by pushing all translations via a transform string string function and applying them to dom this means that shipping pseudolocalization has no cost on binary size reason why android avoids shipping pseudolocales to production and one can provide many even customizable pseudolocalization strategies for example adjustable length increase to stress test layout here s runtime caching caching is still possible we load untranslated ui apply translation cache then load from cache unless locale changed and its invalidation just becomes part of the translate retranslate state which also simplifies things actual responsive this is a feature we prototyped but never got to actually implement which i see as one of the potential north stars features we may not implement ourselves but may want to make the outcome of our work be able to be build on top of the idea behind it is to use fluent syntax to provide reactive retranslation on external conditions a common idea we wanted to tackle was a scenario where the ui operates in adjustable available space imagine a ui which may be displayed on tv laptop tablet of phone for the large space we d like to use a long string but when space shrinks we d like to display a shorter version of the same message rather than cut out with ellipses what s more different locales may face different challenges while german may struggle to fit the full text even on large screen chinese won t likely need the large version at all since the experience is per locale and the condition of variant selection is per locale we wanted to use fluent for it more or less like so prefs cursor keys option screen with always use the cursor keys to navigate within pages use the cursor keys to navigate in pages use keys to navigate in pages this allows an english translation to adjust the width of the message to available space what was the real goal was 
also ability to interpret the message at runtime by fluentdom and hook onscreensizechange event handler to retranslate the message this handle will be hooked only if the locale actually depends on screen width we never put this feature in production but we validated that fluent data model and api makes this possible such flexibility may be seen as very high level and i d say that of work to make such features work are but there s of work that depends on how low level data model is designed how fallback works how interpolation works what i o is possible i d like to put this proposal in front of this group as a feature that we d like to make sure our outcome doesn t make impossible | 0 |
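The declarative pattern described in the record above (developers record only `data-l10n-id`/`data-l10n-args` on an element, and a separate pass applies, and can later re-apply, the translation) can be sketched as a toy model. This is an illustrative Python sketch, not the real Fluent DOM API; the element representation and function names are assumptions.

```python
# Toy model of the declarative l10n pattern: elements only *declare* which
# message they want; a separate translate pass fills in the text, and can
# be re-run with another locale's bundle without touching application state.

def set_attributes(element, l10n_id, args=None):
    """Declaratively mark an element for localization."""
    element["data-l10n-id"] = l10n_id
    element["data-l10n-args"] = args or {}

def translate(elements, bundle):
    """Apply (or re-apply) translations to every marked element."""
    for el in elements:
        msg_id = el.get("data-l10n-id")
        if msg_id and msg_id in bundle:
            el["textContent"] = bundle[msg_id].format(**el["data-l10n-args"])

# Usage: one element, two locale bundles, no state rebuild in between.
el = {}
set_attributes(el, "processcount", {"count": 4})
translate([el], {"processcount": "{count} processes"})  # first locale
translate([el], {"processcount": "{count} Prozesse"})   # retranslate on the fly
```

Because the l10n state lives on the element itself, a locale switch is just another `translate` call over the same tree, which is the "retranslate without touching the state" property the record describes.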
457,199 | 13,153,100,067 | IssuesEvent | 2020-08-10 01:54:30 | kubernetes/website | https://api.github.com/repos/kubernetes/website | closed | No Recommendation Around Command Example in Doc Which Requires Root Privilege | kind/feature lifecycle/rotten priority/important-longterm sig/docs | **This is a Bug Report**
<!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!-- See https://kubernetes.io/docs/contribute/start/ for guidance on writing an actionable issue description. -->
<!--Required Information-->
**Problem:**
In the K8s docs, there are command examples which require root privilege, but there is inconsistency in the use of the `sudo` command in examples (e.g. [here](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/) & [here](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/) with sudo, but [here](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) without).
However, there are pros & cons:
* Command example with `sudo`
* **Pro:**
1. Clear indication where root privilege is required
* **Con:**
1. dependency on tool (`sudo`)
* Command example without `sudo`
* **Pro:**
1. concise look of doc
1. no dependency on tool
* **Con:**
    1. readers who are relatively new to *nix must discover by trial & error which commands need root
    1. an indication of where root privilege is required must be added to the doc
**Proposed Solution:**
To add official recommendation around this to [style guide](https://kubernetes.io/docs/contribute/style/style-guide/).
**Page to Update:**
Too many pages, cannot specify here.
<!--Optional Information (remove the comment tags around information you would like to include)-->
<!--Kubernetes Version:-->
<!--Additional Information:-->
| 1.0 | No Recommendation Around Command Example in Doc Which Requires Root Privilege - **This is a Bug Report**
<!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!-- See https://kubernetes.io/docs/contribute/start/ for guidance on writing an actionable issue description. -->
<!--Required Information-->
**Problem:**
In the K8s docs, there are command examples which require root privilege, but there is inconsistency in the use of the `sudo` command in examples (e.g. [here](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/) & [here](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/) with sudo, but [here](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) without).
However, there are pros & cons:
* Command example with `sudo`
* **Pro:**
1. Clear indication where root privilege is required
* **Con:**
1. dependency on tool (`sudo`)
* Command example without `sudo`
* **Pro:**
1. concise look of doc
1. no dependency on tool
* **Con:**
    1. readers who are relatively new to *nix must discover by trial & error which commands need root
    1. an indication of where root privilege is required must be added to the doc
**Proposed Solution:**
To add official recommendation around this to [style guide](https://kubernetes.io/docs/contribute/style/style-guide/).
**Page to Update:**
Too many pages, cannot specify here.
<!--Optional Information (remove the comment tags around information you would like to include)-->
<!--Kubernetes Version:-->
<!--Additional Information:-->
| non_process | no recommendation around command example in doc which requires root privilege this is a bug report problem in doc there are command examples which require root privilege but there is inconsistency of use of sudo command in example e g with sudo but without however there are pros cons command example with sudo pro clear indication where root privilege is required con dependency on tool sudo command example without sudo pro concise look of doc no dependency on tool con reader who is relatively new to nix should run commands with try error addition of indication where root privilege is required to doc proposed solution to add official recommendation around this to page to update too many pages cannot specify here | 0 |
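The inconsistency described in the record above could in principle be caught mechanically. The following is a hypothetical lint pass; the set of root-requiring commands is an assumption made for the example, not an official list.

```python
# Hypothetical doc lint: flag command examples that typically need root
# privilege but are shown without a `sudo` prefix. ROOT_COMMANDS is an
# illustrative assumption, not an authoritative list.

ROOT_COMMANDS = {"apt-get", "yum", "systemctl", "kubeadm"}

def flag_missing_sudo(snippets):
    flagged = []
    for snippet in snippets:
        words = snippet.split()
        # first word is the command; `sudo ...` lines are already consistent
        if words and words[0] in ROOT_COMMANDS:
            flagged.append(snippet)
    return flagged

examples = [
    "sudo kubeadm init",
    "kubeadm join 10.0.0.1:6443",  # inconsistent: same tool, no sudo
    "kubectl get pods",
]
print(flag_missing_sudo(examples))  # -> ['kubeadm join 10.0.0.1:6443']
```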
822 | 3,293,786,475 | IssuesEvent | 2015-10-30 20:42:20 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Process_MainModule failed in CI (returned different module than expected) | System.Diagnostics.Process | http://dotnet-ci.cloudapp.net/job/dotnet_corefx_windows_release_prtest/1532/console
```
System.Diagnostics.ProcessTests.ProcessTest.Process_MainModule [FAIL]
Assert.Equal() Failure
(pos 0)
Expected: corerun
Actual: ntdll
(pos 0)
Stack Trace:
d:\j\workspace\dotnet_corefx_windows_release_prtest\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests\ProcessTest.cs(239,0): at System.Diagnostics.ProcessTests.ProcessTest.Process_MainModule()
``` | 1.0 | Process_MainModule failed in CI (returned different module than expected) - http://dotnet-ci.cloudapp.net/job/dotnet_corefx_windows_release_prtest/1532/console
```
System.Diagnostics.ProcessTests.ProcessTest.Process_MainModule [FAIL]
Assert.Equal() Failure
(pos 0)
Expected: corerun
Actual: ntdll
(pos 0)
Stack Trace:
d:\j\workspace\dotnet_corefx_windows_release_prtest\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests\ProcessTest.cs(239,0): at System.Diagnostics.ProcessTests.ProcessTest.Process_MainModule()
``` | process | process mainmodule failed in ci returned different module than expected system diagnostics processtests processtest process mainmodule assert equal failure pos expected corerun actual ntdll pos stack trace d j workspace dotnet corefx windows release prtest src system diagnostics process tests system diagnostics process tests processtest cs at system diagnostics processtests processtest process mainmodule | 1 |
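One way a mismatch like "expected corerun, actual ntdll" in the record above can arise is if the main module is picked by enumeration order rather than by matching the process's executable path. A small Python sketch of the order-independent selection; the module list and paths below are made up for illustration.

```python
# Simplified model: module enumeration order is not guaranteed, so
# "main module = first enumerated module" is fragile. Matching against
# the process's own executable path is robust regardless of order.

def main_module(modules, exe_path):
    """Pick the main module by path match, not by enumeration order."""
    for mod in modules:
        if mod["path"] == exe_path:
            return mod
    raise LookupError("executable not found among loaded modules")

# Enumeration order resembling the CI failure: ntdll came back first.
loaded = [
    {"name": "ntdll",   "path": r"C:\Windows\System32\ntdll.dll"},
    {"name": "corerun", "path": r"D:\host\corerun.exe"},
]
print(main_module(loaded, r"D:\host\corerun.exe")["name"])  # -> corerun
```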
111,490 | 14,102,722,258 | IssuesEvent | 2020-11-06 09:12:09 | AdExNetwork/adex-staking | https://api.github.com/repos/AdExNetwork/adex-staking | closed | Popup "Stake your ADX" after successfully upgrading your ADX | design | The AdEx upgrade modal should transform to "Sucess! You've upgraded your ADX" popup which also includes a suggestion to stake the ADX for added rewards
Even though this is the staking portal and the ability to stake is everywhere, a popup should help with the psychological appeal of staking | 1.0 | Popup "Stake your ADX" after successfully upgrading your ADX - The AdEx upgrade modal should transform to "Sucess! You've upgraded your ADX" popup which also includes a suggestion to stake the ADX for added rewards
Even though this is the staking portal and the ability to stake is everywhere, a popup should help with the psychological appeal of staking | non_process | popup stake your adx after successfully upgrading your adx the adex upgrade modal should transform to sucess you ve upgraded your adx popup which also includes a suggestion to stake the adx for added rewards even though this is the staking portal and the ability to stake is everywhere a popup should help with the psychological appeal of staking | 0 |
22,658 | 31,895,828,261 | IssuesEvent | 2023-09-18 01:32:01 | tdwg/dwc | https://api.github.com/repos/tdwg/dwc | closed | Change term - member | Term - change Class - GeologicalContext normative Task Group - Material Sample Process - complete | ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_member
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): member
* Term label (English, not normative): Member
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the lithostratigraphic member from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Lava Dam Member, Hellnmaria Member
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
| 1.0 | Change term - member - ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_member
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): member
* Term label (English, not normative): Member
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the lithostratigraphic member from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Lava Dam Member, Hellnmaria Member
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
| process | change term member term change submitter efficacy justification why is this change necessary create consistency of terms for material in darwin core demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version no current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes member term label english not normative member organized in class e g occurrence event location taxon geological context definition of the term normative the full name of the lithostratigraphic member from which the cataloged item dwc materialentity was collected usage comments recommendations regarding content etc not normative examples not normative lava dam member hellnmaria member refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative not in abcd | 1 |
20,912 | 27,752,760,711 | IssuesEvent | 2023-03-15 22:21:14 | open-telemetry/opentelemetry-collector-contrib | https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib | closed | [traces/processor] Processor to take actions on span events | enhancement processor/filter processor/transform | ### Component(s)
_No response_
### Is your feature request related to a problem? Please describe.
The open source library will automatically add span events which do not seem useful, for example:
message
message.type:
"RECEIVED"
message.id:
"..."
message.uncompressed_size:
"..."
message
message.type:
"SENT"
message.id:
"..."
message.uncompressed_size:
"..."
```
It would be good to have a processor that can apply actions like insert/update/delete to span events.
The current processors do not seem able to achieve this; OTTL may help with filtering span events, but the documentation is not clear enough.
### Describe the solution you'd like
A processor for span events could apply actions like insert/update/delete, similar to the attributes processor.
For the deletion part, it could provide a condition check, e.g. delete based on an event's attribute or name, or delete all events when no condition is provided.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | 2.0 | [traces/processor] Processor to take actions on span events - ### Component(s)
_No response_
### Is your feature request related to a problem? Please describe.
The open source library will automatically add span events which do not seem useful, for example:
message
message.type:
"RECEIVED"
message.id:
"..."
message.uncompressed_size:
"..."
message
message.type:
"SENT"
message.id:
"..."
message.uncompressed_size:
"..."
```
It would be good to have a processor that can apply actions like insert/update/delete to span events.
The current processors do not seem able to achieve this; OTTL may help with filtering span events, but the documentation is not clear enough.
### Describe the solution you'd like
A processor for span events could apply actions like insert/update/delete, similar to the attributes processor.
For the deletion part, it could provide a condition check, e.g. delete based on an event's attribute or name, or delete all events when no condition is provided.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | process | processor to take actions on span events component s no response is your feature request related to a problem please describe the open source library will automatically add span events which seems not useful for example message message type received message id message uncompressed size message message type sent message id message uncompressed size it is good to have a processor can do actions like insert update delete to the span events the current processors seem not able to achieve this and i do see ottl may help on filter span events but the documentation is not clear enough describe the solution you d like a processor for spans event could do actions like insert update delete similar as attributes processor for deletion part it may provide condition check like delete based on events attribute name or just delete all events when no condition provided describe alternatives you ve considered no response additional context no response | 1 |
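The delete action with an optional condition requested in the record above can be sketched as follows. The event shape here is a stand-in dict, not the real OpenTelemetry Collector data model.

```python
# Sketch of the requested behaviour: delete span events matching a
# condition (by name and/or attribute values), or delete all events
# when no condition is given.

def delete_events(span, name=None, attrs=None):
    def matches(event):
        if name is not None and event["name"] != name:
            return False
        for key, value in (attrs or {}).items():
            if event.get("attributes", {}).get(key) != value:
                return False
        return True

    if name is None and attrs is None:
        span["events"] = []  # no condition: drop everything
    else:
        span["events"] = [e for e in span["events"] if not matches(e)]
    return span

span = {"events": [
    {"name": "message", "attributes": {"message.type": "RECEIVED"}},
    {"name": "message", "attributes": {"message.type": "SENT"}},
    {"name": "exception", "attributes": {}},
]}
delete_events(span, name="message", attrs={"message.type": "SENT"})
print([e["attributes"] for e in span["events"] if e["name"] == "message"])
# -> [{'message.type': 'RECEIVED'}]
```

Insert/update would follow the same shape: a condition function plus an action applied to each matching event.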
18,665 | 24,582,809,483 | IssuesEvent | 2022-10-13 16:58:41 | Ultimate-Hosts-Blacklist/whitelist | https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist | closed | [FALSE-POSITIVE?] | whitelisting process | **Domains or links**
Please list any domains and links listed here which you believe are a false positive.
secure.livechatinc.com
**More Information**
How did you discover your web site or domain was listed here?
Livechat services not functional when the blacklist is in place
(source: https://raw.githubusercontent.com/mitchellkrogza/Phishing.Database/master/phishing-domains-ACTIVE.txt)
**Have you requested removal from other sources?**
Unsure of other sources, it's a legit domain
**Additional context**
I'm not the owner but my company uses both Livechat and blacklists; Livechat won't work unless we manually whitelist.
Keen to understand the ultimate source of your data to investigate further in case of other false positives.
:exclamation:
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
| 1.0 | [FALSE-POSITIVE?] - **Domains or links**
Please list any domains and links listed here which you believe are a false positive.
secure.livechatinc.com
**More Information**
How did you discover your web site or domain was listed here?
Livechat services not functional when the blacklist is in place
(source: https://raw.githubusercontent.com/mitchellkrogza/Phishing.Database/master/phishing-domains-ACTIVE.txt)
**Have you requested removal from other sources?**
Unsure of other sources, it's a legit domain
**Additional context**
I'm not the owner but my company uses both Livechat and blacklists; Livechat won't work unless we manually whitelist.
Keen to understand the ultimate source of your data to investigate further in case of other false positives.
:exclamation:
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
| process | domains or links please list any domains and links listed here which you believe are a false positive secure livechatinc com more information how did you discover your web site or domain was listed here livechat services not functional when the blacklist is in place source have you requested removal from other sources unsure of other sources it s a legit domain additional context i m not the owner but my company uses both livechat and blacklists livechat won t work unless we manually whitelist keen to understand the ultimate source of your data to investigate further in case of other false positives exclamation we understand being listed on a list like this can be frustrating and embarrassing for many web site owners the first step is to remain calm the second step is to rest assured one of our maintainers will address your issue as soon as possible please make sure you have provided as much information as possible to help speed up the process | 1 |
215,585 | 7,295,519,124 | IssuesEvent | 2018-02-26 07:16:19 | Midburn/Volunteers | https://api.github.com/repos/Midburn/Volunteers | closed | Editing form - moving to preview deletes all changes | Bug Priority-Low | if you edit a form, and before pressing save go to preview -all your changes are deleted. | 1.0 | Editing form - moving to preview deletes all changes - if you edit a form, and before pressing save go to preview -all your changes are deleted. | non_process | editing form moving to preview deletes all changes if you edit a form and before pressing save go to preview all your changes are deleted | 0 |
9,551 | 12,513,958,613 | IssuesEvent | 2020-06-03 03:37:48 | ramiromachado/easyRESTToGQL | https://api.github.com/repos/ramiromachado/easyRESTToGQL | closed | Add test coverage to the project | development process enhancement | As a project manager, I want to see how much code is covered so that I would have another measure to evaluate the project's health | 1.0 | Add test coverage to the project - As a project manager, I want to see how much code is covered so that I would have another measure to evaluate the project's health | process | add test coverage to the project as a project manager i want to see how much code is covered so that i would have another measure to evaluate the project s health | 1 |
12,556 | 14,978,450,300 | IssuesEvent | 2021-01-28 10:50:54 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | User details page > Search bar is missing | Bug P2 Participant manager Process: Fixed Process: Release 2 Process: Tested QA Process: Tested dev UI | AR : Search bar is missing
ER : Search bar should be present as per InVision
 | 4.0 | User details page > Search bar is missing - AR : Search bar is missing
ER : Search bar should be present as per InVision
 | process | user details page search bar is missing ar search bar is missing er search bar should be present as per invision | 1 |
12,287 | 14,815,297,371 | IssuesEvent | 2021-01-14 07:00:18 | modi-w/AutoVersionsDB | https://api.github.com/repos/modi-w/AutoVersionsDB | closed | Instruct the user on new DB and new project | area-Core area-UI process-ready-for-implementation type-enhancement | **The Problem**
When a user wants to define a new project on a new database, meaning we don't have any script file and no execution record in the DB, the system shows an error about an invalid system table structure.
The above error is not appropriate in this context; it can be confusing for the user.
**Solution**
When the system detects that the project has no script files and no system table in the database, it should instruct the user to:
1. Click on recreate DB to create an empty DB with the system tables
2. Add scripts files
3. Run Sync
**Action Items:**
1.
2.
3.
**Updates**
1.
| 1.0 | Instruct the user on new DB and new project - **The Problem**
When a user wants to define a new project on a new database, meaning we don't have any script file and no execution record in the DB, the system shows an error about an invalid system table structure.
The above error is not appropriate in this context; it can be confusing for the user.
**Solution**
When the system detects that the project has no script files and no system table in the database, it should instruct the user to:
1. Click on recreate DB to create an empty DB with the system tables
2. Add scripts files
3. Run Sync
**Action Items:**
1.
2.
3.
**Updates**
1.
| process | instruct the user on new db and new project the problem when a user wants to define a new project on the new database mean we don t have any script file and no execution record on the db the system show error about invalid system table structure the above error is not appropriate in this context it can be confusing for the user solution when the system pay attention that the project has no scripts files and no system table in the database the system should instruct the user to click on recreate db to create an empty db with the system tables add scripts files run sync action items updates | 1 |
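The solution proposed in the record above amounts to a small state check before reporting an error. A sketch in Python; the guidance wording is an illustrative assumption.

```python
# Sketch of the proposed check: when the project has no script files AND
# the database lacks the system table, this is a fresh project, so show
# setup guidance instead of the misleading "invalid system table" error.

SETUP_STEPS = [
    "Click 'Recreate DB' to create an empty DB with the system tables",
    "Add script files",
    "Run Sync",
]

def resolve_validation_message(has_script_files, has_system_table):
    if not has_script_files and not has_system_table:
        return {"kind": "guidance", "steps": SETUP_STEPS}  # new project
    if not has_system_table:
        return {"kind": "error", "steps": []}  # scripts exist: genuinely broken
    return {"kind": "ok", "steps": []}

print(resolve_validation_message(False, False)["kind"])  # -> guidance
```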
42,550 | 9,248,327,750 | IssuesEvent | 2019-03-15 05:25:07 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | Manifest refresh fails if beez or hathor is removed | No Code Attached Yet | ### Steps to reproduce the issue
uninstall beez & hathor
update from 3.9.3 to 3.9.4
### Expected result
no warnings.
### Actual result
Warning
Refresh Manifest Cache failed: beez3 Extension is not currently installed.
Refresh Manifest Cache failed: hathor Extension is not currently installed.
### System information (as much as possible)
[systeminfo-2019-03-15T02_09_56+00_00.txt](https://github.com/joomla/joomla-cms/files/2969161/systeminfo-2019-03-15T02_09_56%2B00_00.txt)
### Additional comments
This is not new to 3.9.4, I've seen it on several of the last updates. | 1.0 | Manifest refresh fails if beez or hathor is removed - ### Steps to reproduce the issue
uninstall beez & hathor
update from 3.9.3 to 3.9.4
### Expected result
no warnings.
### Actual result
Warning
Refresh Manifest Cache failed: beez3 Extension is not currently installed.
Refresh Manifest Cache failed: hathor Extension is not currently installed.
### System information (as much as possible)
[systeminfo-2019-03-15T02_09_56+00_00.txt](https://github.com/joomla/joomla-cms/files/2969161/systeminfo-2019-03-15T02_09_56%2B00_00.txt)
### Additional comments
This is not new to 3.9.4, I've seen it on several of the last updates. | non_process | manifest refresh fails if beez or hathor is removed steps to reproduce the issue uninstall beez hathor update from to expected result no warnings actual result warning refresh manifest cache failed extension is not currently installed refresh manifest cache failed hathor extension is not currently installed system information as much as possible additional comments this is not new to i ve seen it on several of the last updates | 0 |
9,949 | 12,976,737,578 | IssuesEvent | 2020-07-21 19:20:01 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Win32Exception from `Process` class when running `dotnet test` on Windows ARM arch | arch-arm64 area-System.Diagnostics.Process | On a machine with Windows IoT Core 1809 and .NET Core 3.1.101 installed:
1. Unzip [complexapp.zip](https://github.com/dotnet/runtime/files/4103678/complexapp.zip)
1. Run `dotnet build`
```
U:\complexapp>dotnet build -c release
Welcome to .NET Core 3.1!
---------------------
SDK Version: 3.1.101
Telemetry
---------
The .NET Core tools collect usage data in order to help us improve your experience. The data is anonymous. It is collected by Microsoft and shared with the community. You can opt-out of telemetry by setting the DOTNET_CLI_TELEMETRY_OPTOUT environment variable to '1' or 'true' using your favorite shell.
Read more about .NET Core CLI Tools telemetry: https://aka.ms/dotnet-cli-telemetry
----------------
Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Find out what's new: https://aka.ms/dotnet-whats-new
Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs
Write your first app: https://aka.ms/first-net-core-app
--------------------------------------------------------------------------------------
Microsoft (R) Build Engine version 16.4.0+e901037fe for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
Restore completed in 15.87 sec for U:\complexapp\complexapp\complexapp.csproj.
Restore completed in 15.81 sec for U:\complexapp\libbar\libbar.csproj.
Restore completed in 15.81 sec for U:\complexapp\libfoo\libfoo.csproj.
Restore completed in 1.31 min for U:\complexapp\tests\tests.csproj.
libbar -> U:\complexapp\libbar\bin\Release\netstandard2.0\libbar.dll
libfoo -> U:\complexapp\libfoo\bin\Release\netstandard2.0\libfoo.dll
complexapp -> U:\complexapp\complexapp\bin\Release\netcoreapp3.1\complexapp.dll
tests -> U:\complexapp\tests\bin\Release\netcoreapp3.1\tests.dll
Build succeeded.
0 Warning(s)
0 Error(s)
Time Elapsed 00:02:06.14
```
3. Run `dotnet test`
```
U:\complexapp>dotnet test --logger:trx
Test run for U:\complexapp\tests\bin\Debug\netcoreapp3.1\tests.dll(.NETCoreApp,Version=v3.1)
Microsoft (R) Test Execution Command Line Tool Version 16.3.0
Copyright (c) Microsoft Corporation. All rights reserved.
Starting test execution, please wait...
A total of 1 test files matched the specified pattern.
Failed to launch testhost with error: System.AggregateException: One or more errors occurred. (This function is not supported on this system.)
---> System.ComponentModel.Win32Exception (120): This function is not supported on this system.
at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo)
at System.Diagnostics.Process.Start()
at Microsoft.VisualStudio.TestPlatform.PlatformAbstractions.ProcessHelper.LaunchProcess(String processPath, String arguments, String workingDirectory, IDictionary`2 envVariables, Action`2 errorCallback, Action`1 exitCallBack)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.LaunchHost(TestProcessStartInfo testHostStartInfo, CancellationToken cancellationToken)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.<>c__DisplayClass36_0.<LaunchTestHostAsync>b__0()
at System.Threading.Tasks.Task`1.InnerInvoke()
at System.Threading.Tasks.Task.<>c.<.cctor>b__274_0(Object obj)
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location where exception was thrown ---
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.LaunchTestHostAsync(TestProcessStartInfo testHostStartInfo, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at System.Threading.Tasks.Task`1.get_Result()
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
Test run in progress.
Results File: U:\complexapp\tests\TestResults\administrator_ddvsohumw038_2020-01-23_06_02_31.trx
Test Run Aborted.
```
The above steps produce the following exception: `System.ComponentModel.Win32Exception (120): This function is not supported on this system.`
A different exception, but similar callstack, will occur if you execute this from within a Docker container with the Dockerfile contained in the zip file: `System.ComponentModel.Win32Exception (298): Too many posts were made to a semaphore.`
```
Failed to launch testhost with error: System.AggregateException: One or more errors occurred. (Too many posts were made to a semaphore.)
---> System.ComponentModel.Win32Exception (298): Too many posts were made to a semaphore.
at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo)
at System.Diagnostics.Process.Start()
at Microsoft.VisualStudio.TestPlatform.PlatformAbstractions.ProcessHelper.LaunchProcess(String processPath, String arguments, String workingDirectory, IDictionary`2 envVariables, Action`2 errorCallback, Action`1 exitCallBack)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.LaunchHost(TestProcessStartInfo testHostStartInfo, CancellationToken cancellationToken)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.<>c__DisplayClass36_0.<LaunchTestHostAsync>b__0()
at System.Threading.Tasks.Task`1.InnerInvoke()
at System.Threading.Tasks.Task.<>c.<.cctor>b__274_0(Object obj)
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location where exception was thrown ---
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.LaunchTestHostAsync(TestProcessStartInfo testHostStartInfo, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at System.Threading.Tasks.Task`1.get_Result()
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
```
nilmtk/nilmtk | https://api.github.com/repos/nilmtk/nilmtk | closed | 2016-01-09 13:30:27 | Train-Test split needs review | labels: enhancement, pre-processing, Review

v0.1 used to have a simple function for dividing the dataset into train and test. Currently, one has to manually specify the time boundaries for train and test.
Any thoughts?
Arch666Angel/mods | https://api.github.com/repos/Arch666Angel/mods | closed | 2020-06-14 10:12:06 | Hatchery requires exploration data cores AND alien plant life sample | labels: Angels Bio Processing, Impact: Bug

**Describe the bug**
Hatchery requires exploration data cores AND alien plant life sample
**To Reproduce**
Information to reproduce the behavior:
1. Game Version
2. Modlist
**Screenshots**

**Additional context**
- [ ] In the auto replacement script, make sure to remove the alien plant life sample as ingredient when exploration data cores are present OR make sure that they are production data cores
- [ ] Check if it fixed this particular issue
rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | 2016-03-23 00:29:53 | Unhealthy and degraded states show up as green in the graph | labels: area/ui, kind/bug, status/to-test

This is a screenshot of a stack using Rancher-server 0.56:

It seems that the color for degraded/unhealthy is wrong.
redpelican2108/pe | https://api.github.com/repos/redpelican2108/pe | opened | 2022-04-16 07:25:40 | Error in Sequence Diagram | labels: severity.Low, type.DocumentationBug

![image](https://user-images.githubusercontent.com/99715579/163662905-ca10b7a7-8b26-40a8-91b4-0bf1c3d9949e.png)
For each object there is a duplicate of the same object at the end of the lifeline.
<!--session: 1650088094267-86dfb6d9-8554-40ab-b092-b978eb970c90-->
<!--Version: Web v3.4.2-->
rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | 2016-09-14 01:32:18 | Move compiler-rt build into a crate dependency of libcore | labels: A-build, A-rustbuild, E-help-wanted

One of the major blockers of our dream to "lazily compile std" is to ensure that we have the ability to compile compiler-rt on-demand. This is a repository maintained by LLVM which contains a large set of intrinsics which LLVM lowers function calls down to on some platforms.
Unfortunately the build system of compiler-rt is a bit of a nightmare. At the time of writing, we have a large pile of hacks on its makefile-based build system to get things working, and it appears that LLVM has deprecated this build system anyway. We're [trying to move to cmake](https://github.com/rust-lang/rust/pull/34055), but compiling compiler-rt is still, unfortunately, a nightmare.
To solve both these problems in one fell swoop, @brson and I were chatting this morning and had the idea of moving the build entirely to a build script of libcore, and basically just using gcc-rs to compile compiler-rt instead of using compiler-rt's build system. This means we don't have to have LLVM installed (why does compiler-rt need llvm-config?) and cross-compiling should be *much* more robust/easy as we're driving the compiles, not working around an opaque build system.
To make matters worse in compiler-rt as well it contains code for a massive number of intrinsics we'll probably never use. And *even worse* these bits and pieces of code often cause compile failures which don't end up mattering in the end. To solve this problem we should just whitelist a set of intrinsics to build and ignore all others. This may be a bit of a rocky road as we discover some we should have compiled but forgot, but in theory we should be able to select a subset to compile and be done with it.
This may make updating compiler-rt difficult, but we've already only done it once in like the past year or two years, so we don't seem to need to do this too urgently. This is a worry to keep in mind, however.
Basically here's what I think we should do:
* Add a build script to libcore, link gcc-rs into it
* Compile select portions of compiler-rt as part of this build script, using gcc-rs
* Disable injection of compiler-rt in the compiler
Staging this is still a bit up in the air, but I'm curious what others think about this as well.
cc @rust-lang/tools
cc @brson
cc @japaric
christoph-buente/kga-v2 | https://api.github.com/repos/christoph-buente/kga-v2 | opened | 2022-07-07 11:14:41 | kga-v1 gem usage | labels: documentation

For info - steps required to install the gems for the existing app.
(done back in Jan on Rails 6 as a precursor to our replatforming)
# Software Versions
Ruby: 3.0.2
Rails: 6.1.4.1
Postgres: latest (14)
# Create App
Create default rails app and configure for postgres under docker
## Rspec and FactoryBot
* Change the testing framework to use Rspec instead of Minitest
* Update Gemfile and rebuild:
gem 'rspec-rails'
gem 'factory_bot_rails'
* Install Rspec:
```
$ rails generate rspec:install
```
* Delete rails test folder (spec is used instead)
* Run the tests
```
$ docker-compose run --rm web rspec
```
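As a quick sanity check that the test stack works, a minimal factory and spec could look like this (a sketch; the `User` model and its attributes are placeholders until the Devise setup below creates them):

```
# spec/factories/users.rb - sketch; model and attributes are placeholders
FactoryBot.define do
  factory :user do
    sequence(:email) { |n| "user#{n}@example.com" }
    password { "password123" }
  end
end

# spec/models/user_spec.rb
require 'rails_helper'

RSpec.describe User, type: :model do
  it "is valid with an email and password" do
    expect(FactoryBot.build(:user)).to be_valid
  end
end
```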
## Devise
* Update Gemfile and rebuild:
gem 'devise'
gem 'devise-i18n'
* Install Devise:
```
$ rails g devise:install
```
* Generate (optional) views:
```
$ rails g devise:views
$ sudo chown -R "$USER":"$USER" .
```
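The resulting `User` model carries a Devise module list along these lines (a sketch; the exact modules depend on which migration attributes are enabled, see the Activeadmin section):

```
# app/models/user.rb - sketch; assumes the Trackable and Confirmable
# attributes were uncommented in the migration
class User < ApplicationRecord
  devise :database_authenticatable, :registerable, :recoverable,
         :rememberable, :validatable, :trackable, :confirmable
end
```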
## Activeadmin
* Update Gemfile and rebuild:
gem 'activeadmin'
* Install Activeadmin:
```
$ rails g active_admin:install User
$ sudo chown -R "$USER":"$USER" .
```
* Delete migration create_active_admin_comments
* Edit config/initializers/active_admin.rb:
config.comments = false
* Edit User migration:
uncomment Trackable & Confirmable attributes
* Migrate db and seed with default login (dev only):
```
$ rails db:migrate
$ rails db:seed
```
* Now got minimum viable app where you can login to the dashboard:
http://localhost:3000/admin/dashboard
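The dev-only seed for the default login might look like this (a sketch; the credentials are illustrative):

```
# db/seeds.rb - dev-only default admin login (illustrative credentials)
if Rails.env.development?
  User.create!(
    email: 'admin@example.com',
    password: 'password',
    password_confirmation: 'password',
    confirmed_at: Time.current # pre-confirm so Confirmable doesn't block login
  )
end
```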
## Active Admin add ons
https://github.com/platanus/activeadmin_addons
gem 'activeadmin_addons'
$ rails g activeadmin_addons:install
https://github.com/blocknotes/activeadmin_dynamic_fields
gem 'activeadmin_dynamic_fields'
ERROR on bundle install:
https://github.com/activeadmin-plugins/active_admin_import
gem 'active_admin_import'
## Devise Invitable
https://github.com/scambra/devise_invitable
gem 'devise_invitable'
$ rails g devise_invitable:install
$ rails g devise_invitable User
$ sudo chown -R "$USER":"$USER" .
$ rails db:migrate
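Typical usage once installed (email and token are placeholders):

```
# Inviting a user, e.g. from the console or an admin action (sketch)
User.invite!(email: 'new.member@example.com') # sends the invitation email
# The recipient follows the emailed link and sets a password, which calls:
# User.accept_invitation!(invitation_token: token, password: '...')
```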
## Cancancan
gem 'cancancan'
$ rails g cancan:ability
$ sudo chown -R "$USER":"$USER" .
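A starting point for the generated `Ability` class, combined with the Rolify roles set up in the next section (a sketch; role names are illustrative):

```
# app/models/ability.rb - sketch; role names are illustrative
class Ability
  include CanCan::Ability

  def initialize(user)
    user ||= User.new # guest (not logged in)
    if user.has_role?(:admin)
      can :manage, :all
    else
      can :read, :all
    end
  end
end
```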
## Rolify
https://github.com/RolifyCommunity/rolify
gem 'rolify'
$ rails generate rolify Role User
$ sudo chown -R "$USER":"$USER" .
$ rails db:migrate
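After the generator, role handling is a matter of (sketch):

```
# The generator adds the rolify macro to the model:
class User < ApplicationRecord
  rolify
end

# Assigning and checking roles, e.g. in the console:
user = User.first
user.add_role :admin
user.has_role? :admin # => true
```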
## HAML
https://github.com/haml/haml-rails
gem 'haml-rails'
Convert all .erb to .haml:
$ rails haml:erb2haml
## Public activity
https://github.com/chaps-io/public_activity
gem 'public_activity'
$ rails g public_activity:migration
$ sudo chown -R "$USER":"$USER" .
$ rails db:migrate
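To actually record activities, each tracked model opts in (a sketch; `Post` is a placeholder model):

```
# Sketch - Post is a placeholder model
class Post < ApplicationRecord
  include PublicActivity::Model
  tracked owner: ->(controller, _model) { controller && controller.current_user }
end
```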
## Tags
https://github.com/mbleigh/acts-as-taggable-on
gem 'acts-as-taggable-on'
$ rails acts_as_taggable_on_engine:install:migrations
$ sudo chown -R "$USER":"$USER" .
$ rails db:migrate
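Tagging usage sketch (`Post` and its attributes are placeholders):

```
# Sketch - Post is a placeholder model
class Post < ApplicationRecord
  acts_as_taggable_on :tags
end

post = Post.first
post.tag_list.add("rails", "setup")
post.save
Post.tagged_with("rails") # scope of posts carrying that tag
```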
## Locales
https://github.com/svenfuchs/rails-i18n
gem 'rails-i18n'
config/application.rb:
config.i18n.default_locale = :de
config.time_zone = 'Berlin'
## Bootstrap 3
https://github.com/twbs/bootstrap-sass
gem 'bootstrap-sass'
Add to app/assets/stylesheets/application.scss:
@import "bootstrap-sprockets";
@import "bootstrap";
Add to app/assets/javascripts/application.js:
//= require jquery
//= require bootstrap-sprockets
## Fontawesome Icons
https://github.com/bokmann/font-awesome-rails
gem 'font-awesome-rails'
Add to app/assets/stylesheets/application.scss:
@import "font-awesome";
## Forms
https://github.com/heartcombo/simple_form
gem 'simple_form'
$ rails generate simple_form:install --bootstrap
$ sudo chown -R "$USER":"$USER" .
## Datetimepicker
https://github.com/zpaulovics/datetimepicker-rails
gem 'datetimepicker-rails', '>= 3.0.0', git: 'https://github.com/zpaulovics/datetimepicker-rails', branch: 'tarruda'
$ rails generate datetimepicker_rails:install Font-Awesome
$ sudo chown -R "$USER":"$USER" .
## Datetime validation
https://github.com/adzap/validates_timeliness
gem 'validates_timeliness'
$ rails generate validates_timeliness:install
$ sudo chown -R "$USER":"$USER" .
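Model-level usage sketch (`Event` and its attributes are placeholders):

```
# Sketch - Event is a placeholder model
class Event < ApplicationRecord
  validates_datetime :starts_at
  validates_datetime :ends_at, after: :starts_at
end
```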
## Spreadsheet creation
https://github.com/zdavatz/spreadsheet/
gem 'spreadsheet'
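Producing an .xls export with the spreadsheet gem looks roughly like this (data is illustrative):

```
require 'spreadsheet'

book  = Spreadsheet::Workbook.new
sheet = book.create_worksheet(name: 'Members')
sheet.row(0).concat ['Name', 'Email']            # header row
sheet.row(1).concat ['Jane', 'jane@example.com']
book.write 'members.xls'
```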
## Cron jobs
We use this for db dump.
https://github.com/javan/whenever
gem 'whenever'
$ bundle exec wheneverize .
$ sudo chown -R "$USER":"$USER" .
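The db dump job then goes into config/schedule.rb, roughly as follows (a sketch; 'db:dump' is assumed to be the rake task provided by rails_db_dump, and the schedule is illustrative):

```
# config/schedule.rb - sketch
set :output, 'log/cron.log'

every 1.day, at: '3:00 am' do
  rake 'db:dump'
end
```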
## Other gems with no special installation requirements
gem 'active_admin_import'
gem 'rollbar'
gem 'uglifier'
gem 'moving_average'
gem 'prawn'
gem 'prawn-table'
gem 'therubyracer', platforms: :ruby
gem 'vpim'
gem 'money'
gem 'rails_db_dump'
## Other dev, test only gems with no special installation requirements
gem 'better_errors'
gem 'binding_of_caller', :platforms=>[:mri_21]
## To be replaced
gem 'paperclip' # with Active Storage
gem 'figaro' # with docker env vars?
## Not needed?
gem 'coffee-rails' # use std javascript
gem 'aasm' # app isn't using state machines
gem 'unicorn' # use puma instead?
## Not explicitly needed
Pulled in as dependencies anyway:
gem 'jquery-rails'
$ sudo chown -R "$USER":"$USER" .
## Datetime validation
https://github.com/adzap/validates_timeliness
gem 'validates_timeliness'
$ rails generate validates_timeliness:install
$ sudo chown -R "$USER":"$USER" .
## Spreadsheet creation
https://github.com/zdavatz/spreadsheet/
gem 'spreadsheet'
## Cron jobs
We use this for db dump.
https://github.com/javan/whenever
gem 'whenever'
$ bundle exec wheneverize .
$ sudo chown -R "$USER":"$USER" .
## Other gems with no special installation requirements
gem 'active_admin_import'
gem 'rollbar'
gem 'uglifier'
gem 'moving_average'
gem 'prawn'
gem 'prawn-table'
gem 'therubyracer', platforms: :ruby
gem 'vpim'
gem 'money'
gem 'rails_db_dump'
## Other dev, test only gems with no special installation requirements
gem 'better_errors'
gem 'binding_of_caller', :platforms=>[:mri_21]
## To be replaced
gem 'paperclip' # with Active Storage
gem 'figaro' # with docker env vars?
## Not needed?
gem 'coffee-rails' # use std javascript
gem 'aasm' # app isn't using state machines
gem 'unicorn' # use puma instead?
## Not explicitly needed
Pulled in as dependencies anyway:
gem 'jquery-rails'
| non_process | kga gem usage for info steps required to install the gems for the existing app done back in jan on as a precursor to us replatforming software versions ruby rails postgres latest create app create default rails app and configure for postgres under docker rspec and factorybot change the testing framework to use rspec instead of minitest update gemfile and rebuild gem rspec rails gem factory bot rails install rspec rails generate rspec install delete rails test folder spec is used instead run the tests docker compose run rm web rspec devise update gemfile and rebuild gem devise gem devise install devise rails g devise install generate optional views rails g devise views sudo chown r user user activeadmin update gemfile and rebuild gem activeadmin install activeadmin rails g active admin install user sudo chown r user user delete migration create active admin comments edit config initializers active admin rb config comments false edit user migration uncomment trackable confirmable attributes migrate db and seed with default login dev only rails db migrate rails db seed now got minimum viable app where you can login to the dashboard active admin add ons gem activeadmin addons rails g activeadmin addons install gem activeadmin dynamic fields error on bundle install active admin import gem active admin import devise invitable gem devise invitable rails g devise invitable install rails g devise invitable user sudo chown r user user rails db migrate cancancan gem cancancan rails g cancan ability sudo chown r user user rolify gem rolify rails g rails generate rolify role user sudo chown r user user rails db migrate haml gem haml rails convert all erb to haml rails haml public activity gem public activity rails g public activity migration sudo chown r user user rails db migrate tags gem acts as taggable on rails acts as taggable on engine install migrations sudo chown r user user rails db migrate locales gem rails config application rb config default locale 
de config time zone berlin bootstrap gem bootstrap sass add to app assets stylesheets application scss import bootstrap sprockets import bootstrap add to app assets javascripts application js require jquery require bootstrap sprockets fontawesome icons gem font awesome rails add to app assets stylesheets application scss import font awesome forms gem simple form rails generate simple form install bootstrap sudo chown r user user datetimepicker gem datetimepicker rails git branch tarruda rails generate datetimepicker rails install font awesome sudo chown r user user datetime validation gem validates timeliness rails generate validates timeliness install sudo chown r user user spreadsheet creation gem spreadsheet cron jobs we use this for db dump gem whenever bundle exec wheneverize sudo chown r user user other gems with no special installation requirements gem active admin import gem rollbar gem uglifier gem moving average gem prawn gem prawn table gem therubyracer platforms ruby gem vpim gem money gem rails db dump other dev test only gems with no special installation requirements gem better errors gem binding of caller platforms to be replaced gem paperclip with active storage gem figaro with docker env vars not needed gem coffee rails use std javascript gem aasm app isn t using state machines gem unicorn use puma instead not explicitly needed pulled in as dependencies anyway gem jquery rails | 0 |
13,106 | 15,496,638,098 | IssuesEvent | 2021-03-11 03:03:24 | dluiscosta/weather_api | https://api.github.com/repos/dluiscosta/weather_api | opened | Isolate config variables from docker-composes | development process enhancement question | Isolate configuration variables from docker-composes, which are currently under the ```environment``` section, thus improving source organization and allowing environment variables to be easily applied to local environment, if desired.
Is this possible without creating multiple sources of truths when accounting for different values used in different stages? | 1.0 | Isolate config variables from docker-composes - Isolate configuration variables from docker-composes, which are currently under the ```environment``` section, thus improving source organization and allowing environment variables to be easily applied to local environment, if desired.
Is this possible without creating multiple sources of truths when accounting for different values used in different stages? | process | isolate config variables from docker composes isolate configuration variables from docker composes which are currently under the environment section thus improving source organization and allowing environment variables to be easily applied to local environment if desired is this possible without creating multiple sources of truths when accounting for different values used in different stages | 1 |
20,674 | 27,337,287,683 | IssuesEvent | 2023-02-26 11:25:15 | vnphanquang/svelte-put | https://api.github.com/repos/vnphanquang/svelte-put | closed | Preprocessor for inlining SVG | type:feature priority:medium scope:preprocess-inline-svg | ## Context
Very often we need to inline svg as svelte components, typically for color styling. Naive solution is to make a svelte component for each svg, but that's not particularly ergonomic.
## Prior Arts
- [svelte-inline-svg]
- operates at runtime, can be used for dynamic svg loaded from network, which is versatile and can utilize caching. Although it may not be very performant?
- wraps with a dedicated Svelte component -> not easy to style without tailwind or global styles.
- [vite-plugin-svelte-svg]
- a vite plugin, very minimal, use a `?component` enhanced import so
- has svgo support as a global plugin option, which is a big plus.
## Inception
What if we can provide a more minimal interface like this:
```html
<svg data-inline-src="../image.svg" data-inline-svgo-...></svg>
```
where:
- `data-inline-src`: a path to the svg asset, relative to the current file
- `data-inline-svgo-...`: per-case option to transform the svg, typically will be merged with the global config. Ex: `data-inline-svgo-removeDimensions` for removing `width` & `height`, or `="false"` to disable (inline each prop here in
- any other attributes will be merged with those in the svg asset
Pros:
- No additional imports
- No wrapping in component -> allow styling and using all other vanila `svg` attributes
Cons:
- What if `data-inline-src` is invalid? Error? (JS import is handled by vite so it'll be more easy)
- What about svg loaded from network? Maybe we can also include something like what `svelte-inline-svg` does, but with an action?? -> lower priority
Can this be a `vite` plugin instead of a svelte preprocessor?
[svgo]: https://github.com/svg/svgo
[vite-plugin-svelte-svg]: https://github.com/metafy-gg/vite-plugin-svelte-svg
[svelte-inline-svg]: https://github.com/robinscholz/svelte-inline-svg | 1.0 | Preprocessor for inlining SVG - ## Context
Very often we need to inline svg as svelte components, typically for color styling. Naive solution is to make a svelte component for each svg, but that's not particularly ergonomic.
## Prior Arts
- [svelte-inline-svg]
- operates at runtime, can be used for dynamic svg loaded from network, which is versatile and can utilize caching. Although it may not be very performant?
- wraps with a dedicated Svelte component -> not easy to style without tailwind or global styles.
- [vite-plugin-svelte-svg]
- a vite plugin, very minimal, use a `?component` enhanced import so
- has svgo support as a global plugin option, which is a big plus.
## Inception
What if we can provide a more minimal interface like this:
```html
<svg data-inline-src="../image.svg" data-inline-svgo-...></svg>
```
where:
- `data-inline-src`: a path to the svg asset, relative to the current file
- `data-inline-svgo-...`: per-case option to transform the svg, typically will be merged with the global config. Ex: `data-inline-svgo-removeDimensions` for removing `width` & `height`, or `="false"` to disable (inline each prop here in
- any other attributes will be merged with those in the svg asset
Pros:
- No additional imports
- No wrapping in component -> allow styling and using all other vanila `svg` attributes
Cons:
- What if `data-inline-src` is invalid? Error? (JS import is handled by vite so it'll be more easy)
- What about svg loaded from network? Maybe we can also include something like what `svelte-inline-svg` does, but with an action?? -> lower priority
Can this be a `vite` plugin instead of a svelte preprocessor?
[svgo]: https://github.com/svg/svgo
[vite-plugin-svelte-svg]: https://github.com/metafy-gg/vite-plugin-svelte-svg
[svelte-inline-svg]: https://github.com/robinscholz/svelte-inline-svg | process | preprocessor for inlining svg context very often we need to inline svg as svelte components typically for color styling naive solution is to make a svelte component for each svg but that s not particularly ergonomic prior arts operates at runtime can be used for dynamic svg loaded from network which is versatile and can utilize caching although it may not be very performant wraps with a dedicated svelte component not easy to style without tailwind or global styles a vite plugin very minimal use a component enhanced import so has svgo support as a global plugin option which is a big plus inception what if we can provide a more minimal interface like this html where data inline src a path to the svg asset relative to the current file data inline svgo per case option to transform the svg typically will be merged with the global config ex data inline svgo removedimensions for removing width height or false to disable inline each prop here in any other attributes will be merged with those in the svg asset pros no additional imports no wrapping in component allow styling and using all other vanila svg attributes cons what if data inline src is invalid error js import is handled by vite so it ll be more easy what about svg loaded from network maybe we can also include something like what svelte inline svg does but with an action lower priority can this be a vite plugin instead of a svelte preprocessor | 1 |
120,278 | 4,787,597,274 | IssuesEvent | 2016-10-30 03:24:23 | rathena/rathena | https://api.github.com/repos/rathena/rathena | closed | Missing Mysterious_Travel_Sack4 Item Group | bug:database mode:renewal priority:low | Hash: 5d24d73
Mode: Renewal
Client Date: Any
The group is defined in /db/const.txt and used in item 13848 Mystery Travel Sack D, but there are no items defined for this group?
PR incoming. | 1.0 | Missing Mysterious_Travel_Sack4 Item Group - Hash: 5d24d73
Mode: Renewal
Client Date: Any
The group is defined in /db/const.txt and used in item 13848 Mystery Travel Sack D, but there are no items defined for this group?
PR incoming. | non_process | missing mysterious travel item group hash mode renewal client date any the group is defined in db const txt and used in item mystery travel sack d but there are no items defined for this group pr incoming | 0 |
18,014 | 24,032,565,680 | IssuesEvent | 2022-09-15 16:08:28 | googleapis/google-cloud-node | https://api.github.com/repos/googleapis/google-cloud-node | opened | Your .repo-metadata.json files have a problem 🤒 | type: process repo-metadata: lint | You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* release_level must be equal to one of the allowed values in packages/gapic-node-templating/templates/bootstrap-templates/.repo-metadata.json
* api_shortname field missing from packages/gapic-node-templating/templates/bootstrap-templates/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-api-apikeys/.repo-metadata.json
* api_shortname field missing from packages/google-api-apikeys/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-batch/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-batch/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appconnections/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-appconnections/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appconnectors/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-appconnectors/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appgateways/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-appgateways/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-clientconnectorservices/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-clientconnectorservices/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-clientgateways/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-clientgateways/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-gkemulticloud/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-gkemulticloud/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-security-publicca/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-security-publicca/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-iam/.repo-metadata.json
* api_shortname field missing from packages/google-iam/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | 1.0 | Your .repo-metadata.json files have a problem 🤒 - You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* release_level must be equal to one of the allowed values in packages/gapic-node-templating/templates/bootstrap-templates/.repo-metadata.json
* api_shortname field missing from packages/gapic-node-templating/templates/bootstrap-templates/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-api-apikeys/.repo-metadata.json
* api_shortname field missing from packages/google-api-apikeys/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-batch/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-batch/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appconnections/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-appconnections/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appconnectors/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-appconnectors/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appgateways/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-appgateways/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-clientconnectorservices/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-clientconnectorservices/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-clientgateways/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-clientgateways/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-gkemulticloud/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-gkemulticloud/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-security-publicca/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-security-publicca/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-iam/.repo-metadata.json
* api_shortname field missing from packages/google-iam/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | process | your repo metadata json files have a problem 🤒 you have a problem with your repo metadata json files result of scan 📈 release level must be equal to one of the allowed values in packages gapic node templating templates bootstrap templates repo metadata json api shortname field missing from packages gapic node templating templates bootstrap templates repo metadata json release level must be equal to one of the allowed values in packages google api apikeys repo metadata json api shortname field missing from packages google api apikeys repo metadata json release level must be equal to one of the allowed values in packages google cloud batch repo metadata json api shortname field missing from packages google cloud batch repo metadata json release level must be equal to one of the allowed values in packages google cloud beyondcorp appconnections repo metadata json api shortname field missing from packages google cloud beyondcorp appconnections repo metadata json release level must be equal to one of the allowed values in packages google cloud beyondcorp appconnectors repo metadata json api shortname field missing from packages google cloud beyondcorp appconnectors repo metadata json release level must be equal to one of the allowed values in packages google cloud beyondcorp appgateways repo metadata json api shortname field missing from packages google cloud beyondcorp appgateways repo metadata json release level must be equal to one of the allowed values in packages google cloud beyondcorp clientconnectorservices repo metadata json api shortname field missing from packages google cloud beyondcorp clientconnectorservices repo metadata json release level must be equal to one of the allowed values in packages google cloud beyondcorp clientgateways repo metadata json api shortname field missing from packages google cloud beyondcorp clientgateways repo metadata json release level must be equal to 
one of the allowed values in packages google cloud gkemulticloud repo metadata json api shortname field missing from packages google cloud gkemulticloud repo metadata json release level must be equal to one of the allowed values in packages google cloud security publicca repo metadata json api shortname field missing from packages google cloud security publicca repo metadata json release level must be equal to one of the allowed values in packages google iam repo metadata json api shortname field missing from packages google iam repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions | 1 |
2,718 | 5,581,236,849 | IssuesEvent | 2017-03-28 18:25:02 | djspiewak/issue-testing | https://api.github.com/repos/djspiewak/issue-testing | opened | Find the CSS files | epic: Signup Process 2.0 | We… uh, lost them. We kinda need to make some changes for the new signup process. | 1.0 | Find the CSS files - We… uh, lost them. We kinda need to make some changes for the new signup process. | process | find the css files we… uh lost them we kinda need to make some changes for the new signup process | 1 |
829,498 | 31,881,324,468 | IssuesEvent | 2023-09-16 12:14:36 | nacht-falter/sonic-explorers-api | https://api.github.com/repos/nacht-falter/sonic-explorers-api | closed | USER STORY: Tag Endpoints | THEME: Sound Endpoints PRIORITY: Should-Have EPIC: Tags | As a developer, I want to access an endpoint to get all tags, so that I can display a list of tags. | 1.0 | USER STORY: Tag Endpoints - As a developer, I want to access an endpoint to get all tags, so that I can display a list of tags. | non_process | user story tag endpoints as a developer i want to access an endpoint to get all tags so that i can display a list of tags | 0 |
14,057 | 16,870,221,045 | IssuesEvent | 2021-06-22 02:48:56 | Leviatan-Analytics/LA-data-processing | https://api.github.com/repos/Leviatan-Analytics/LA-data-processing | closed | Train YoloV3 model [3] | Data Processing Sprint 2 Week 4 | Estimated time: 3 hs per assignee
Follow different steps to train the YoloV3 model with the obtained dataset. | 1.0 | Train YoloV3 model [3] - Estimated time: 3 hs per assignee
Follow different steps to train the YoloV3 model with the obtained dataset. | process | train model estimated time hs per assignee follow different steps to train the model with the obtained dataset | 1 |
14,187 | 17,090,890,405 | IssuesEvent | 2021-07-08 17:17:22 | IIIF/api | https://api.github.com/repos/IIIF/api | closed | Image and Presentation 3.0 Feature Implementations | editorial process | The [Evaluation and Testing](https://iiif.io/community/policy/editorial/#evaluation-and-testing) criteria in the IIIF Editorial Process are:
> In order to be considered ready for final review, new features must have two open-source server-side implementations, at least one of which should be in production. New features must also have at least one open-source client-side implementation, which may be a proof-of-concept.
We'll use this ticket to track implementations of Image and Presentation 3.0 features. If **you** have an implementation of v3 features, please add a comment describing them. The [Image API Change Log](https://iiif.io/api/image/3.0/change-log/) and [Presentation API Change Log](https://iiif.io/api/presentation/3.0/change-log/) describe API changes, and the latter includes description of [presentation features added in v3](https://iiif.io/api/presentation/3.0/change-log/#22-additional-features). | 1.0 | Image and Presentation 3.0 Feature Implementations - The [Evaluation and Testing](https://iiif.io/community/policy/editorial/#evaluation-and-testing) criteria in the IIIF Editorial Process are:
> In order to be considered ready for final review, new features must have two open-source server-side implementations, at least one of which should be in production. New features must also have at least one open-source client-side implementation, which may be a proof-of-concept.
We'll use this ticket to track implementations of Image and Presentation 3.0 features. If **you** have an implementation of v3 features, please add a comment describing them. The [Image API Change Log](https://iiif.io/api/image/3.0/change-log/) and [Presentation API Change Log](https://iiif.io/api/presentation/3.0/change-log/) describe API changes, and the latter includes description of [presentation features added in v3](https://iiif.io/api/presentation/3.0/change-log/#22-additional-features). | process | image and presentation feature implementations the criteria in the iiif editorial process are in order to be considered ready for final review new features must have two open source server side implementations at least one of which should be in production new features must also have at least one open source client side implementation which may be a proof of concept we ll use this ticket to track implementations of image and presentation features if you have an implementation of features please add a comment describing them the and describe api changes and the latter includes description of | 1 |
11,684 | 14,542,486,154 | IssuesEvent | 2020-12-15 15:46:15 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | Do environments support Azure Web Apps? | Pri2 devops-cicd-process/tech devops/prod doc-bug | Hello, the documentation on this page says:
"Environments can include Kubernetes clusters, **Azure web apps**, virtual machines, **databases.**"
A few paragraphs later, "The Kubernetes resource and virtual machine resource types are currently supported."
Using Azure DevOps UI I only seem to be able to add Kubernetes and Virtual Machines to environments.
My question is, is there a way of adding Web Apps to environment through YAML or some other way?
Or is it not currently supported? If so, documentation should say: "Environments can include Kubernetes clusters and virtual machines" to avoid any ambiguity.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | 1.0 | Do environments support Azure Web Apps? - Hello, the documentation on this page says:
"Environments can include Kubernetes clusters, **Azure web apps**, virtual machines, **databases.**"
A few paragraphs later, "The Kubernetes resource and virtual machine resource types are currently supported."
Using Azure DevOps UI I only seem to be able to add Kubernetes and Virtual Machines to environments.
My question is, is there a way of adding Web Apps to environment through YAML or some other way?
Or is it not currently supported? If so, documentation should say: "Environments can include Kubernetes clusters and virtual machines" to avoid any ambiguity.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | process | do environments support azure web apps hello the documentation on this page says environments can include kubernetes clusters azure web apps virtual machines databases a few paragraphs later the kubernetes resource and virtual machine resource types are currently supported using azure devops ui i only seem to be able to add kubernetes and virtual machines to environments my question is is there a way of adding web apps to environment through yaml or some other way or is it not currently supported if so documentation should say environments can include kubernetes clusters and virtual machines to avoid any ambiguity document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam | 1 |
54,812 | 6,412,593,894 | IssuesEvent | 2017-08-08 04:02:25 | CenterApp/center-client | https://api.github.com/repos/CenterApp/center-client | closed | Index.html | enhancement untested | The HTML should import the compiled elm, run the main program, and connect any ports with the native language. | 1.0 | Index.html - The HTML should import the compiled elm, run the main program, and connect any ports with the native language. | non_process | index html the html should import the compiled elm run the main program and connect any ports with the native language | 0 |
227,909 | 25,132,759,974 | IssuesEvent | 2022-11-09 16:15:26 | mendts-workshop/WSSBC | https://api.github.com/repos/mendts-workshop/WSSBC | closed | derby-10.8.3.0.jar: 2 vulnerabilities (highest severity is: 5.3) - autoclosed | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>derby-10.8.3.0.jar</b></p></summary>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /.m2/repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/WSSBC/commit/f411011270bffeed474d22f954b7bb29cc62b0c8">f411011270bffeed474d22f954b7bb29cc62b0c8</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (derby version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2018-1313](https://www.mend.io/vulnerability-database/CVE-2018-1313) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | derby-10.8.3.0.jar | Direct | 10.14.2.0 | ✅ |
| [CVE-2015-1832](https://www.mend.io/vulnerability-database/CVE-2015-1832) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.8 | derby-10.8.3.0.jar | Direct | 10.12.1.1 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-1313</summary>
### Vulnerable Library - <b>derby-10.8.3.0.jar</b></p>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /.m2/repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **derby-10.8.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/WSSBC/commit/f411011270bffeed474d22f954b7bb29cc62b0c8">f411011270bffeed474d22f954b7bb29cc62b0c8</a></p>
<p>Found in base branch: <b>easybuggy</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Apache Derby 10.3.1.4 to 10.14.1.0, a specially-crafted network packet can be used to request the Derby Network Server to boot a database whose location and contents are under the user's control. If the Derby Network Server is not running with a Java Security Manager policy file, the attack is successful. If the server is using a policy file, the policy file must permit the database location to be read for the attack to work. The default Derby Network Server policy file distributed with the affected releases includes a permissive policy as the default Network Server policy, which allows the attack to work.
<p>Publish Date: 2018-05-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-1313>CVE-2018-1313</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1313">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1313</a></p>
<p>Release Date: 2018-05-07</p>
<p>Fix Resolution: 10.14.2.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2015-1832</summary>
### Vulnerable Library - <b>derby-10.8.3.0.jar</b></p>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /.m2/repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **derby-10.8.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/WSSBC/commit/f411011270bffeed474d22f954b7bb29cc62b0c8">f411011270bffeed474d22f954b7bb29cc62b0c8</a></p>
<p>Found in base branch: <b>easybuggy</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby before 10.12.1.1, when a Java Security Manager is not in place, allows context-dependent attackers to read arbitrary files or cause a denial of service (resource consumption) via vectors involving XmlVTI and the XML datatype.
<p>Publish Date: 2016-10-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-1832>CVE-2015-1832</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>4.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832</a></p>
<p>Release Date: 2016-10-03</p>
<p>Fix Resolution: 10.12.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | True | derby-10.8.3.0.jar: 2 vulnerabilities (highest severity is: 5.3) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>derby-10.8.3.0.jar</b></p></summary>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /.m2/repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/WSSBC/commit/f411011270bffeed474d22f954b7bb29cc62b0c8">f411011270bffeed474d22f954b7bb29cc62b0c8</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (derby version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2018-1313](https://www.mend.io/vulnerability-database/CVE-2018-1313) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | derby-10.8.3.0.jar | Direct | 10.14.2.0 | ✅ |
| [CVE-2015-1832](https://www.mend.io/vulnerability-database/CVE-2015-1832) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.8 | derby-10.8.3.0.jar | Direct | 10.12.1.1 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-1313</summary>
### Vulnerable Library - <b>derby-10.8.3.0.jar</b></p>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /.m2/repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **derby-10.8.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/WSSBC/commit/f411011270bffeed474d22f954b7bb29cc62b0c8">f411011270bffeed474d22f954b7bb29cc62b0c8</a></p>
<p>Found in base branch: <b>easybuggy</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Apache Derby 10.3.1.4 to 10.14.1.0, a specially-crafted network packet can be used to request the Derby Network Server to boot a database whose location and contents are under the user's control. If the Derby Network Server is not running with a Java Security Manager policy file, the attack is successful. If the server is using a policy file, the policy file must permit the database location to be read for the attack to work. The default Derby Network Server policy file distributed with the affected releases includes a permissive policy as the default Network Server policy, which allows the attack to work.
<p>Publish Date: 2018-05-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-1313>CVE-2018-1313</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1313">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1313</a></p>
<p>Release Date: 2018-05-07</p>
<p>Fix Resolution: 10.14.2.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2015-1832</summary>
### Vulnerable Library - <b>derby-10.8.3.0.jar</b></p>
<p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /.m2/repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **derby-10.8.3.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/WSSBC/commit/f411011270bffeed474d22f954b7bb29cc62b0c8">f411011270bffeed474d22f954b7bb29cc62b0c8</a></p>
<p>Found in base branch: <b>easybuggy</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby before 10.12.1.1, when a Java Security Manager is not in place, allows context-dependent attackers to read arbitrary files or cause a denial of service (resource consumption) via vectors involving XmlVTI and the XML datatype.
<p>Publish Date: 2016-10-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-1832>CVE-2015-1832</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>4.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832</a></p>
<p>Release Date: 2016-10-03</p>
<p>Fix Resolution: 10.12.1.1</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | non_process | derby jar vulnerabilities highest severity is autoclosed vulnerable library derby jar contains the core apache derby database engine which also includes the embedded jdbc driver path to dependency file pom xml path to vulnerable library repository org apache derby derby derby jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in derby version remediation available medium derby jar direct medium derby jar direct details cve vulnerable library derby jar contains the core apache derby database engine which also includes the embedded jdbc driver path to dependency file pom xml path to vulnerable library repository org apache derby derby derby jar dependency hierarchy x derby jar vulnerable library found in head commit a href found in base branch easybuggy vulnerability details in apache derby to a specially crafted network packet can be used to request the derby network server to boot a database whose location and contents are under the user s control if the derby network server is not running with a java security manager policy file the attack is successful if the server is using a policy file the policy file must permit the database location to be read for the attack to work the default derby network server policy file distributed with the affected releases includes a permissive policy as the default network server policy which allows the attack to work publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable 
library derby jar contains the core apache derby database engine which also includes the embedded jdbc driver path to dependency file pom xml path to vulnerable library repository org apache derby derby derby jar dependency hierarchy x derby jar vulnerable library found in head commit a href found in base branch easybuggy vulnerability details xml external entity xxe vulnerability in the sqlxmlutil code in apache derby before when a java security manager is not in place allows context dependent attackers to read arbitrary files or cause a denial of service resource consumption via vectors involving xmlvti and the xml datatype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue | 0 |
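The CVE-2015-1832 entry above comes down to whether the XML parser is allowed to resolve external entities. As a hedged illustration of that mitigation class in general — this is not Derby's code; only the `feature_external_ges` flag is real Python standard-library API, and everything else is an invented example:

```python
import io
import xml.sax
from xml.sax.handler import ContentHandler, feature_external_ges

class TextCollector(ContentHandler):
    """Collects character data seen while parsing."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def characters(self, content):
        self.chunks.append(content)

def parse_untrusted(xml_bytes):
    parser = xml.sax.make_parser()
    # Refuse to load external general entities -- the classic XXE vector.
    parser.setFeature(feature_external_ges, False)
    handler = TextCollector()
    parser.setContentHandler(handler)
    parser.parse(io.BytesIO(xml_bytes))
    return "".join(handler.chunks)

print(parse_untrusted(b"<r>hello</r>"))  # -> hello
```

With the feature disabled, a `<!ENTITY ... SYSTEM "file:///...">` reference in untrusted input is not fetched from disk, which is the resource-read/denial-of-service vector the CVE describes.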
42,958 | 12,965,143,730 | IssuesEvent | 2020-07-20 21:46:20 | jtimberlake/griffin | https://api.github.com/repos/jtimberlake/griffin | opened | CVE-2020-5408 (Medium) detected in spring-security-core-5.1.6.RELEASE.jar | security vulnerability | ## CVE-2020-5408 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-core-5.1.6.RELEASE.jar</b></p></summary>
<p>spring-security-core</p>
<p>Library home page: <a href="https://spring.io/spring-security">https://spring.io/spring-security</a></p>
<p>Path to dependency file: /tmp/ws-scm/griffin/service/hibernate_mysql_pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/security/spring-security-core/5.1.6.RELEASE/spring-security-core-5.1.6.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-security-kerberos-client-1.0.1.RELEASE.jar (Root Library)
- spring-security-kerberos-core-1.0.1.RELEASE.jar
- :x: **spring-security-core-5.1.6.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jtimberlake/griffin/commit/7b8d4cb53c4eab239eecb18da5b2a6048b2fce60">7b8d4cb53c4eab239eecb18da5b2a6048b2fce60</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Security versions 5.3.x prior to 5.3.2, 5.2.x prior to 5.2.4, 5.1.x prior to 5.1.10, 5.0.x prior to 5.0.16 and 4.2.x prior to 4.2.16 use a fixed null initialization vector with CBC Mode in the implementation of the queryable text encryptor. A malicious user with access to the data that has been encrypted using such an encryptor may be able to derive the unencrypted values using a dictionary attack.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5408>CVE-2020-5408</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5408">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5408</a></p>
<p>Release Date: 2020-05-14</p>
<p>Fix Resolution: org.springframework.security:spring-security-crypto:4.2.16,5.0.16,5.1.10,5.2.4,5.3.2,org.springframework.security:spring-security-core:4.2.16,5.0.16,5.1.10,5.2.4,5.3.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework.security","packageName":"spring-security-core","packageVersion":"5.1.6.RELEASE","isTransitiveDependency":true,"dependencyTree":"org.springframework.security.kerberos:spring-security-kerberos-client:1.0.1.RELEASE;org.springframework.security.kerberos:spring-security-kerberos-core:1.0.1.RELEASE;org.springframework.security:spring-security-core:5.1.6.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework.security:spring-security-crypto:4.2.16,5.0.16,5.1.10,5.2.4,5.3.2,org.springframework.security:spring-security-core:4.2.16,5.0.16,5.1.10,5.2.4,5.3.2"}],"vulnerabilityIdentifier":"CVE-2020-5408","vulnerabilityDetails":"Spring Security versions 5.3.x prior to 5.3.2, 5.2.x prior to 5.2.4, 5.1.x prior to 5.1.10, 5.0.x prior to 5.0.16 and 4.2.x prior to 4.2.16 use a fixed null initialization vector with CBC Mode in the implementation of the queryable text encryptor. A malicious user with access to the data that has been encrypted using such an encryptor may be able to derive the unencrypted values using a dictionary attack.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5408","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-5408 (Medium) detected in spring-security-core-5.1.6.RELEASE.jar - ## CVE-2020-5408 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-core-5.1.6.RELEASE.jar</b></p></summary>
<p>spring-security-core</p>
<p>Library home page: <a href="https://spring.io/spring-security">https://spring.io/spring-security</a></p>
<p>Path to dependency file: /tmp/ws-scm/griffin/service/hibernate_mysql_pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/security/spring-security-core/5.1.6.RELEASE/spring-security-core-5.1.6.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-security-kerberos-client-1.0.1.RELEASE.jar (Root Library)
- spring-security-kerberos-core-1.0.1.RELEASE.jar
- :x: **spring-security-core-5.1.6.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jtimberlake/griffin/commit/7b8d4cb53c4eab239eecb18da5b2a6048b2fce60">7b8d4cb53c4eab239eecb18da5b2a6048b2fce60</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Security versions 5.3.x prior to 5.3.2, 5.2.x prior to 5.2.4, 5.1.x prior to 5.1.10, 5.0.x prior to 5.0.16 and 4.2.x prior to 4.2.16 use a fixed null initialization vector with CBC Mode in the implementation of the queryable text encryptor. A malicious user with access to the data that has been encrypted using such an encryptor may be able to derive the unencrypted values using a dictionary attack.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5408>CVE-2020-5408</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5408">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-5408</a></p>
<p>Release Date: 2020-05-14</p>
<p>Fix Resolution: org.springframework.security:spring-security-crypto:4.2.16,5.0.16,5.1.10,5.2.4,5.3.2,org.springframework.security:spring-security-core:4.2.16,5.0.16,5.1.10,5.2.4,5.3.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework.security","packageName":"spring-security-core","packageVersion":"5.1.6.RELEASE","isTransitiveDependency":true,"dependencyTree":"org.springframework.security.kerberos:spring-security-kerberos-client:1.0.1.RELEASE;org.springframework.security.kerberos:spring-security-kerberos-core:1.0.1.RELEASE;org.springframework.security:spring-security-core:5.1.6.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework.security:spring-security-crypto:4.2.16,5.0.16,5.1.10,5.2.4,5.3.2,org.springframework.security:spring-security-core:4.2.16,5.0.16,5.1.10,5.2.4,5.3.2"}],"vulnerabilityIdentifier":"CVE-2020-5408","vulnerabilityDetails":"Spring Security versions 5.3.x prior to 5.3.2, 5.2.x prior to 5.2.4, 5.1.x prior to 5.1.10, 5.0.x prior to 5.0.16 and 4.2.x prior to 4.2.16 use a fixed null initialization vector with CBC Mode in the implementation of the queryable text encryptor. 
A malicious user with access to the data that has been encrypted using such an encryptor may be able to derive the unencrypted values using a dictionary attack.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5408","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_process | cve medium detected in spring security core release jar cve medium severity vulnerability vulnerable library spring security core release jar spring security core library home page a href path to dependency file tmp ws scm griffin service hibernate mysql pom xml path to vulnerable library home wss scanner repository org springframework security spring security core release spring security core release jar dependency hierarchy spring security kerberos client release jar root library spring security kerberos core release jar x spring security core release jar vulnerable library found in head commit a href vulnerability details spring security versions x prior to x prior to x prior to x prior to and x prior to use a fixed null initialization vector with cbc mode in the implementation of the queryable text encryptor a malicious user with access to the data that has been encrypted using such an encryptor may be able to derive the unencrypted values using a dictionary attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework security spring security crypto org springframework security spring security core isopenpronvulnerability false ispackagebased true 
isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails spring security versions x prior to x prior to x prior to x prior to and x prior to use a fixed null initialization vector with cbc mode in the implementation of the queryable text encryptor a malicious user with access to the data that has been encrypted using such an encryptor may be able to derive the unencrypted values using a dictionary attack vulnerabilityurl | 0 |
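The core of the CVE-2020-5408 description above is that a fixed (null) IV makes CBC encryption deterministic: equal plaintexts always produce equal ciphertexts, so anyone who can obtain ciphertexts for chosen candidate values can build a lookup table. A toy model of that failure mode — a keyed hash stands in for the deterministic encryptor, and every name here is invented for illustration, not Spring Security's API:

```python
import hashlib
import hmac

KEY = b"server-side-secret"

def deterministic_encrypt(plaintext: str) -> str:
    # Stand-in for CBC with a fixed IV: same input, same output, every time.
    return hmac.new(KEY, plaintext.encode(), hashlib.sha256).hexdigest()

# The attacker sees a stored ciphertext and can get ciphertexts for
# candidate values (e.g., through the same queryable-encryption path).
stored = deterministic_encrypt("hunter2")

candidates = ["password", "letmein", "hunter2", "123456"]
table = {deterministic_encrypt(c): c for c in candidates}

recovered = table.get(stored)
print(recovered)  # -> hunter2
```

A randomized IV breaks this: encrypting the same value twice would yield different ciphertexts, so the precomputed table never matches.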
14,326 | 17,362,231,490 | IssuesEvent | 2021-07-29 22:46:55 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Process.ToString() can throw | area-System.Diagnostics.Process bug in pr up-for-grabs | System.Diagnostics.Process.ToString() throws if the process is already terminated.
[Guideline](https://docs.microsoft.com/en-us/dotnet/api/system.object.tostring?view=netcore-3.1#notes-to-inheritors) says:
```
Your ToString() override should not throw an exception.
```
If the process is not terminated the method returns "System.Diagnostics.Process (process name)"
If the process is terminated I'd expect the method returns "System.Diagnostics.Process"
| 1.0 | Process.ToString() can throw - System.Diagnostics.Process.ToString() throws if the process is already terminated.
[Guideline](https://docs.microsoft.com/en-us/dotnet/api/system.object.tostring?view=netcore-3.1#notes-to-inheritors) says:
```
Your ToString() override should not throw an exception.
```
If the process is not terminated the method returns "System.Diagnostics.Process (process name)"
If the process is terminated I'd expect the method returns "System.Diagnostics.Process"
| process | process tostring can throw system diagnostics process tostring throws if the process is already terminated says your tostring override should not throw an exception if the process is not terminated the method returns system diagnostics process process name if the process is terminated i d expect the method returns system diagnostics process | 1 |
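The guideline the issue above quotes — `ToString()` overrides should not throw — is a general contract, not a .NET quirk; Python's `__str__`/`__repr__` carry the same expectation. A minimal sketch of the defensive behavior the reporter expects (`ProcessInfo` is an invented stand-in, not `System.Diagnostics.Process`):

```python
class ProcessInfo:
    """Invented stand-in for a process handle whose name can become unavailable."""

    def __init__(self, pid, name):
        self.pid = pid
        self._name = name  # None once the process has terminated

    @property
    def name(self):
        if self._name is None:
            raise RuntimeError("process has exited")
        return self._name

    def __str__(self):
        # Never let an exception escape __str__: fall back to the bare type name,
        # mirroring the behavior the issue asks for from Process.ToString().
        try:
            return f"ProcessInfo ({self.name})"
        except RuntimeError:
            return "ProcessInfo"

running = ProcessInfo(4242, "bash")
exited = ProcessInfo(4242, None)
print(str(running))  # -> ProcessInfo (bash)
print(str(exited))   # -> ProcessInfo
```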
488,718 | 14,085,507,125 | IssuesEvent | 2020-11-05 01:10:47 | mozilla/addons-code-manager | https://api.github.com/repos/mozilla/addons-code-manager | closed | Ability to pretty print minified files on the fly | component: browse page contrib: welcome priority: p3 state: stale type: feature | This is based on feedback from the meeting we had today to discuss the code manager.
There was a desire expressed to be able to format and/or pretty print minified files on request. Exactly how this would work and what the UX would be like was not discussed.
More discussion about this feature is needed. | 1.0 | Ability to pretty print minified files on the fly - This is based on feedback from the meeting we had today to discuss the code manager.
There was a desire expressed to be able to format and/or pretty print minified files on request. Exactly how this would work and what the UX would be like was not discussed.
More discussion about this feature is needed. | non_process | ability to pretty print minified files on the fly this is based on feedback from the meeting we had today to discuss the code manager there was a desire expressed to be able to format and or pretty print minified files on request exactly how this would work and what the ux would be like was not discussed more discussion about this feature is needed | 0 |
11,702 | 14,545,106,976 | IssuesEvent | 2020-12-15 19:09:31 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | Is AKS Environment supported with on-premises DevOps? | Pri1 devops-cicd-process/tech devops/prod support-request | I went to the environments page here: https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments-kubernetes?view=azure-devops#kubernetes-resource-creation
And I should see an option for AKS to add a Kubernetes resource. However, I don't see it listed with our Azure DevOps Server 2020. Am I missing something?

---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7730ae4d-4101-9c83-1823-4ff43ff161ce
* Version Independent ID: 20a7e263-4819-783e-c984-c4f3b459e22f
* Content: [Environment - Kubernetes resource - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments-kubernetes?view=azure-devops&viewFallbackFrom=azure-devops-2020)
* Content Source: [docs/pipelines/process/environments-kubernetes.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments-kubernetes.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | 1.0 | Is AKS Environment supported with on-premise DevOps? - I went to the environments page here: https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments-kubernetes?view=azure-devops#kubernetes-resource-creation
And I should see an option for AKS to add a Kubernetes resource. However, I don't see it listed with our Azure DevOps Server 2020. Am I missing something?

---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7730ae4d-4101-9c83-1823-4ff43ff161ce
* Version Independent ID: 20a7e263-4819-783e-c984-c4f3b459e22f
* Content: [Environment - Kubernetes resource - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments-kubernetes?view=azure-devops&viewFallbackFrom=azure-devops-2020)
* Content Source: [docs/pipelines/process/environments-kubernetes.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments-kubernetes.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | process | is aks environment supported with on premise devops i went to the environments page here and i should see an option for aks to add a kubernetes resource however i don t see it listed with our azure devops server am i missing something document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam | 1 |
7,413 | 10,534,940,614 | IssuesEvent | 2019-10-01 15:42:30 | johang88/triton | https://api.github.com/repos/johang88/triton | opened | Content processor should use meta data files instead of sqlite db | content processor | This will make things easier to manage, the metadata file should probably be optional | 1.0 | Content processor should use meta data files instead of sqlite db - This will make things easier to manage, the metadata file should probably be optional | process | content processor should use meta data files instead of sqlite db this will make things easier to manage the metadata file should probably be optional | 1 |
21,706 | 30,204,160,815 | IssuesEvent | 2023-07-05 08:18:11 | benthosdev/benthos | https://api.github.com/repos/benthosdev/benthos | closed | [Bug] Upsert setting does't work for mongodb output plugin | bug processors outputs | `upsert` setting is always false
```yaml
# Config example
output:
mongodb:
...
operation: replace-one
upsert: true
...
```
https://github.com/benthosdev/benthos/blob/5225e1a58a1bf102e05d6285e62df24591fb757e/internal/impl/mongodb/common.go#L337-L341
Affected version: >=4.17.0 | 1.0 | [Bug] Upsert setting does't work for mongodb output plugin - `upsert` setting is always false
```yaml
# Config example
output:
mongodb:
...
operation: replace-one
upsert: true
...
```
https://github.com/benthosdev/benthos/blob/5225e1a58a1bf102e05d6285e62df24591fb757e/internal/impl/mongodb/common.go#L337-L341
Affected version: >=4.17.0 | process | upsert setting does t work for mongodb output plugin upsert setting is always false yaml config example output mongodb operation replace one upsert true affected version | 1 |
131,253 | 18,234,876,973 | IssuesEvent | 2021-10-01 05:01:22 | graywidjaya/snyk-scanning-testing | https://api.github.com/repos/graywidjaya/snyk-scanning-testing | opened | WS-2017-3767 (Medium) detected in spring-security-web-5.1.4.RELEASE.jar | security vulnerability | ## WS-2017-3767 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-web-5.1.4.RELEASE.jar</b></p></summary>
<p>spring-security-web</p>
<p>Path to dependency file: snyk-scanning-testing/ProductManager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.1.4.RELEASE/spring-security-web-5.1.4.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-security-2.1.3.RELEASE.jar (Root Library)
- :x: **spring-security-web-5.1.4.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/graywidjaya/snyk-scanning-testing/commit/8e11d4935d4cae9cfc1d6d0b55433a3b1002a16e">8e11d4935d4cae9cfc1d6d0b55433a3b1002a16e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Cross-Site Request Forgery (CSRF) vulnerability was found in spring-security before 4.2.15, 5.0.15, 5.1.9, 5.2.3, and 5.3.1. SwitchUserFilter responds to all HTTP methods, making it vulnerable to CSRF attacks.
<p>Publish Date: 2017-01-03
<p>URL: <a href=https://github.com/spring-projects/spring-security/commit/eed71243cb86833e7edf230e5e43ad89b01142f9>WS-2017-3767</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/spring-projects/spring-security/releases/tag/5.3.1.RELEASE">https://github.com/spring-projects/spring-security/releases/tag/5.3.1.RELEASE</a></p>
<p>Release Date: 2017-01-03</p>
<p>Fix Resolution: org.springframework.security:spring-security-web:4.2.15,5.0.15,5.1.9,5.2.3,5.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2017-3767 (Medium) detected in spring-security-web-5.1.4.RELEASE.jar - ## WS-2017-3767 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-web-5.1.4.RELEASE.jar</b></p></summary>
<p>spring-security-web</p>
<p>Path to dependency file: snyk-scanning-testing/ProductManager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/5.1.4.RELEASE/spring-security-web-5.1.4.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-security-2.1.3.RELEASE.jar (Root Library)
- :x: **spring-security-web-5.1.4.RELEASE.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/graywidjaya/snyk-scanning-testing/commit/8e11d4935d4cae9cfc1d6d0b55433a3b1002a16e">8e11d4935d4cae9cfc1d6d0b55433a3b1002a16e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Cross-Site Request Forgery (CSRF) vulnerability was found in spring-security before 4.2.15, 5.0.15, 5.1.9, 5.2.3, and 5.3.1. SwitchUserFilter responds to all HTTP methods, making it vulnerable to CSRF attacks.
<p>Publish Date: 2017-01-03
<p>URL: <a href=https://github.com/spring-projects/spring-security/commit/eed71243cb86833e7edf230e5e43ad89b01142f9>WS-2017-3767</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/spring-projects/spring-security/releases/tag/5.3.1.RELEASE">https://github.com/spring-projects/spring-security/releases/tag/5.3.1.RELEASE</a></p>
<p>Release Date: 2017-01-03</p>
<p>Fix Resolution: org.springframework.security:spring-security-web:4.2.15,5.0.15,5.1.9,5.2.3,5.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | ws medium detected in spring security web release jar ws medium severity vulnerability vulnerable library spring security web release jar spring security web path to dependency file snyk scanning testing productmanager pom xml path to vulnerable library home wss scanner repository org springframework security spring security web release spring security web release jar dependency hierarchy spring boot starter security release jar root library x spring security web release jar vulnerable library found in head commit a href found in base branch main vulnerability details cross site request forgery csrf vulnerability was found in spring security before and switchuserfilter responds to all http methods making it vulnerable to csrf attacks publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework security spring security web step up your open source security game with whitesource | 0 |
55,959 | 13,726,049,490 | IssuesEvent | 2020-10-03 21:26:22 | pantsbuild/pants | https://api.github.com/repos/pantsbuild/pants | opened | proposal to unify resources(), files(), and relocated_files() into one target type | BUILD file syntax enhancement idea | # Motivation
## Equivalence of `resources()` and `files()`+`relocated_files()`
I was thinking about the `resources()`, `files()`, and the much more general `relocated_files()` introduced in #10895. Mainly, I noticed we now have another (more cumbersome, but technically possible) way to represent the `resources()` target type:
```python
# In src/python/pants/engine/internals/BUILD.
resources(
name='native_engine',
sources=[
'native_engine.so',
'native_engine.so.metadata',
],
)
# This is the *same* result as above (?), with two targets:
files(
name='native_engine_files',
sources=[
'native_engine.so',
'native_engine.so.metadata',
],
)
relocated_files(
name='native_engine',
files_targets=[':native_engine_files'],
src='src/python/pants/engine/internals',
dest='pants/engine/internals',
)
```
## Issues with `relocated_files()`
In our testing at https://github.com/pantsbuild/pants/blob/4129e0622d456be7b17493be6dda900894adb39c/src/python/pants/core/target_types_test.py#L169-L176 we currently only test for a single case: when each `files()` target is associated with at most one `relocated_files()` target.
It seems that we would likely *not* want to encourage the same files being available *sometimes* at different relative paths, which is possible if e.g. multiple `relocated_files()` were used in the same BUILD file, covering different `files()` targets. (This could be wrong.)
## Issues with `resources()`
Where is a `resources()` target going to place the files into an output PEX depending on it? That requires knowledge of pants source roots, but the name `resources()` doesn't give any indication of that. I don't like the idea of pants users being unsure where files will be placed, personally.
# Proposal
We could make it extremely explicit about where every non-source file is going to be placed in an output PEX by:
- [ ] Explicitly declaring when a target depends on source roots to determine its output location.
- A user is able to consult existing pants documentation on source roots, or can search the site for "source root".
- [ ] Declaring any path rewriting inline in the target definition (avoiding multiple separate path rewritings for the same target).
- If multiple output paths for the same files are necessary, users may simply declare multiple `files()` targets.
## Bikesheddable BUILD File
The same BUILD file from above could look like:
```python
# In src/python/pants/engine/internals/BUILD.
# This method explicitly declares the usage of the source root
# to determine output location.
files(
# `location` is required.
location=SourceRoot,
name='native_engine_files',
sources=[
'native_engine.so',
'native_engine.so.metadata',
],
)
# This method would be used if the file is not in a source root.
files(
# SourceRoot and Relative() are the only valid values for `location`.
location=Relative(
src='src/python/pants/engine/internals',
dest='pants/engine/internals',
),
name='native_engine_files',
sources=[
'native_engine.so',
'native_engine.so.metadata',
],
)
```
## Automatic Rewriting of `resources()` Targets
If we are concerned about have to rewrite numerous `resources()` targets, that could be a perfect motivation for #9434. **One possible implementation of that could, upon detected any deprecated targets, offer to rewrite them into equivalent new targets!** | 1.0 | proposal to unify resources(), files(), and relocated_files() into one target type - # Motivation
## Equivalence of `resources()` and `files()`+`relocated_files()`
I was thinking about the `resources()`, `files()`, and the much more general `relocated_files()` introduced in #10895. Mainly, I noticed we now have another (more cumbersome, but technically possible) way to represent the `resources()` target type:
```python
# In src/python/pants/engine/internals/BUILD.
resources(
name='native_engine',
sources=[
'native_engine.so',
'native_engine.so.metadata',
],
)
# This is the *same* result as above (?), with two targets:
files(
name='native_engine_files',
sources=[
'native_engine.so',
'native_engine.so.metadata',
],
)
relocated_files(
name='native_engine',
files_targets=[':native_engine_files'],
src='src/python/pants/engine/internals',
dest='pants/engine/internals',
)
```
## Issues with `relocated_files()`
In our testing at https://github.com/pantsbuild/pants/blob/4129e0622d456be7b17493be6dda900894adb39c/src/python/pants/core/target_types_test.py#L169-L176 we currently only test for a single case: when each `files()` target is associated with at most one `relocated_files()` target.
It seems that we would likely *not* want to encourage the same files being available *sometimes* at different relative paths, which is possible if e.g. multiple `relocated_files()` were used in the same BUILD file, covering different `files()` targets. (This could be wrong.)
## Issues with `resources()`
Where is a `resources()` target going to place the files into an output PEX depending on it? That requires knowledge of pants source roots, but the name `resources()` doesn't give any indication of that. I don't like the idea of pants users being unsure where files will be placed, personally.
# Proposal
We could make it extremely explicit about where every non-source file is going to be placed in an output PEX by:
- [ ] Explicitly declaring when a target depends on source roots to determine its output location.
- A user is able to consult existing pants documentation on source roots, or can search the site for "source root".
- [ ] Declaring any path rewriting inline in the target definition (avoiding multiple separate path rewritings for the same target).
- If multiple output paths for the same files are necessary, users may simply declare multiple `files()` targets.
## Bikesheddable BUILD File
The same BUILD file from above could look like:
```python
# In src/python/pants/engine/internals/BUILD.
# This method explicitly declares the usage of the source root
# to determine output location.
files(
# `location` is required.
location=SourceRoot,
name='native_engine_files',
sources=[
'native_engine.so',
'native_engine.so.metadata',
],
)
# This method would be used if the file is not in a source root.
files(
# SourceRoot and Relative() are the only valid values for `location`.
location=Relative(
src='src/python/pants/engine/internals',
dest='pants/engine/internals',
),
name='native_engine_files',
sources=[
'native_engine.so',
'native_engine.so.metadata',
],
)
```
## Automatic Rewriting of `resources()` Targets
If we are concerned about have to rewrite numerous `resources()` targets, that could be a perfect motivation for #9434. **One possible implementation of that could, upon detected any deprecated targets, offer to rewrite them into equivalent new targets!** | non_process | proposal to unify resources files and relocated files into one target type motivation equivalence of resources and files relocated files i was thinking about the resources files and the much more general relocated files introduced in mainly i noticed we now have another more cumbersome but technically possible way to represent the resources target type python in src python pants engine internals build resources name native engine sources native engine so native engine so metadata this is the same result as above with two targets files name native engine files sources native engine so native engine so metadata relocated files name native engine files targets src src python pants engine internals dest pants engine internals issues with relocated files in our testing at we currently only test for a single case when each files target is associated with at most one relocated files target it seems that we would likely not want to encourage the same files being available sometimes at different relative paths which is possible if e g multiple relocated files were used in the same build file covering different files targets this could be wrong issues with resources where is a resources target going to place the files into an output pex depending on it that requires knowledge of pants source roots but the name resources doesn t give any indication of that i don t like the idea of pants users being unsure where files will be placed personally proposal we could make it extremely explicit about where every non source file is going to be placed in an output pex by explicitly declaring when a target depends on source roots to determine its output location a user is able to consult existing pants documentation on 
source roots or can search the site for source root declaring any path rewriting inline in the target definition avoiding multiple separate path rewritings for the same target if multiple output paths for the same files are necessary users may simply declare multiple files targets bikesheddable build file the same build file from above could look like python in src python pants engine internals build this method explicitly declares the usage of the source root to determine output location files location is required location sourceroot name native engine files sources native engine so native engine so metadata this method would be used if the file is not in a source root files sourceroot and relative are the only valid values for location location relative src src python pants engine internals dest pants engine internals name native engine files sources native engine so native engine so metadata automatic rewriting of resources targets if we are concerned about have to rewrite numerous resources targets that could be a perfect motivation for one possible implementation of that could upon detected any deprecated targets offer to rewrite them into equivalent new targets | 0 |
14,393 | 17,404,169,159 | IssuesEvent | 2021-08-03 01:51:28 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Hillshading fails | Bug Feedback MacOS Processing stale | GDAL command:
gdaldem hillshade "/Volumes/Pegasus32/Arizona/Coconino Co/Chavez Pass/6-1-21/MME/ChavezPassRuins20210611-DEM.tif" /private/var/folders/6b/ys55yq0d6536cj_fgm76458r0000gr/T/processing_txWYOE/ff9c817002544a4bb4b008880069b1f8/OUTPUT.tif -of GTiff -b 1 -z 1.0 -s 1.0 -az 315.0 -alt 45.0
GDAL command output:
/Applications/QGIS 2.app/Contents/MacOS/bin/gdaldem: line 3: /Applications/QGIS: No such file or directory
Process returned error code 127
<meta http-equiv="Content-Type" content="text/html; charset=utf-8"><style type="text/css">
p, li { white-space: pre-wrap; }
</style>
QGIS version | 3.20.0-Odense | QGIS code revision | decaadbb31
-- | -- | -- | --
Qt version | 5.15.2
Python version | 3.8.7
GDAL/OGR version | 3.2.3
PROJ version | 6.3.2
EPSG Registry database version | v9.8.6 (2020-01-22)
GEOS version | 3.9.1-CAPI-1.14.2
SQLite version | 3.31.1
PDAL version | 2.2.0
PostgreSQL client version | 12.3
SpatiaLite version | 4.3.0a
QWT version | 6.1.4
QScintilla2 version | 2.11.4
OS version | macOS 10.15
Runs on previous version. | 1.0 | Hillshading fails - GDAL command:
gdaldem hillshade "/Volumes/Pegasus32/Arizona/Coconino Co/Chavez Pass/6-1-21/MME/ChavezPassRuins20210611-DEM.tif" /private/var/folders/6b/ys55yq0d6536cj_fgm76458r0000gr/T/processing_txWYOE/ff9c817002544a4bb4b008880069b1f8/OUTPUT.tif -of GTiff -b 1 -z 1.0 -s 1.0 -az 315.0 -alt 45.0
GDAL command output:
/Applications/QGIS 2.app/Contents/MacOS/bin/gdaldem: line 3: /Applications/QGIS: No such file or directory
Process returned error code 127
<meta http-equiv="Content-Type" content="text/html; charset=utf-8"><style type="text/css">
p, li { white-space: pre-wrap; }
</style>
QGIS version | 3.20.0-Odense | QGIS code revision | decaadbb31
-- | -- | -- | --
Qt version | 5.15.2
Python version | 3.8.7
GDAL/OGR version | 3.2.3
PROJ version | 6.3.2
EPSG Registry database version | v9.8.6 (2020-01-22)
GEOS version | 3.9.1-CAPI-1.14.2
SQLite version | 3.31.1
PDAL version | 2.2.0
PostgreSQL client version | 12.3
SpatiaLite version | 4.3.0a
QWT version | 6.1.4
QScintilla2 version | 2.11.4
OS version | macOS 10.15
Runs on previous version. | process | hillshading fails gdal command gdaldem hillshade volumes arizona coconino co chavez pass mme dem tif private var folders t processing txwyoe output tif of gtiff b z s az alt gdal command output applications qgis app contents macos bin gdaldem line applications qgis no such file or directory process returned error code p li white space pre wrap qgis version odense qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version spatialite version qwt version version os version macos runs on previous version | 1 |
8,010 | 11,202,360,749 | IssuesEvent | 2020-01-04 11:54:10 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | "Point" input not working in Processing Modeller? | Bug Feedback Processing | the 'point' input type for Graphic Models / Processing Modeler is not functioning. The button for the point field appears with the symbol: â€| and when clicked, minimizes the algorithm dialogue to allow the user to interactively click a point on the map canvas, but nothing happens when I do so. The algorithm will not run unless I type in a coordinate pair.
I'm fairly certain older versions of QGIS allowed a point to be entered with a click on the map canvas?
QGIS version 3.8.2 in Windows 10 | 1.0 | "Point" input not working in Processing Modeller? - the 'point' input type for Graphic Models / Processing Modeler is not functioning. The button for the point field appears with the symbol: â€| and when clicked, minimizes the algorithm dialogue to allow the user to interactively click a point on the map canvas, but nothing happens when I do so. The algorithm will not run unless I type in a coordinate pair.
I'm fairly certain older versions of QGIS allowed a point to be entered with a click on the map canvas?
QGIS version 3.8.2 in Windows 10 | process | point input not working in processing modeller the point input type for graphic models processing modeler is not functioning the button for the point field appears with the symbol †and when clicked minimizes the algorithm dialogue to allow the user to interactively click a point on the map canvas but nothing happens when i do so the algorithm will not run unless i type in a coordinate pair i m fairly certain older versions of qgis allowed a point to be entered with a click on the map canvas qgis version in windows | 1 |
5,340 | 8,166,895,188 | IssuesEvent | 2018-08-25 15:16:11 | author/metadoc | https://api.github.com/repos/author/metadoc | opened | Search Indexing | enhancement post-processor | To support fast searching, the primary content needs to be indexed by [Lunr](https://lunrjs.com/). The result should be one or more index files that can be used to seed a search system on a web page. | 1.0 | Search Indexing - To support fast searching, the primary content needs to be indexed by [Lunr](https://lunrjs.com/). The result should be one or more index files that can be used to seed a search system on a web page. | process | search indexing to support fast searching the primary content needs to be indexed by the result should be one or more index files that can be used to seed a search system on a web page | 1 |
7,300 | 10,443,032,475 | IssuesEvent | 2019-09-18 14:10:07 | ORNL-AMO/AMO-Tools-Desktop | https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop | opened | PH Calc: Waste Heat for Absorption Chiller | Calculator Process Heating | Develop calculator from ESC calculator.
Excel file found in Dropbox > AMO Tools > Other Tools > Energy Solutions Center Tools escenter.org > No 9 Use of waste heat for absorp | 1.0 | PH Calc: Waste Heat for Absorption Chiller - Develop calculator from ESC calculator.
Excel file found in Dropbox > AMO Tools > Other Tools > Energy Solutions Center Tools escenter.org > No 9 Use of waste heat for absorp | process | ph calc waste heat for absorption chiller develop calculator from esc calculator excel file found in dropbox amo tools other tools energy solutions center tools escenter org no use of waste heat for absorp | 1 |
21,043 | 27,985,095,628 | IssuesEvent | 2023-03-26 15:57:32 | anitsh/til | https://api.github.com/repos/anitsh/til | opened | The importance of flow in software development | agile sdlc process |
From social and psychological theories and studies [1], we know that there exists a mental state called “flow” that allows individuals to concentrate deeply on a specific task without noticing the surrounding environment or the time, while remaining fully aware of the current work that they are doing.
In a recent TV broadcast about cognitive brain function, several illustrative examples were given, such as a free climber who says that he is at peak performance when he completely forgets about the world and the danger associated with the climb, but fully concentrates on the rocks and all the next moves he is planning to make. It was also shown that the world record holders in speed tasks, such as stacking cubes or solving Rubic’s Cube, do not use their cerebrum very intensively when executing the speed task. Only a small, but obviously efficient, part of the brain is concentrating on the task to be executed. Similar examples can be found among athletes and musicians who may occasionally get into a “groove” where performance and concentration reach a peak level.
Software developers that fully concentrate on their work also report this kind of flow, where only the relevant parts of the brain are focused on the core task. We can argue that software development is more complex and probably involves more parts of the brain than speed stacking, but it also seems that software development becomes more productive when the developer has the ability to reach flow for a large part of his or her working time.
This ability to reach and sustain flow depends partially on the developer’s own circumstances; for example, whether they get enough sleep, have little to no stress at home, and lead a safe and enjoyable life. To a large extent, this ability also depends on the concrete working circumstances. Is the room quiet? Can the developers work on their own for a long time without disturbances by phones, emails, or background noise in the room? It is also important that the developer is in a constant state of focus while thinking of the important issues that enable him or her to execute their task. In the software development context, it is helpful if the tooling aids productivity and does not slow down the focus time (e.g., compilation time should not take long). Agile development methods have the potential to capitalize on the opportunity of software developers to get into the flow and provide continuous improvement to the system they are developing. Mihaly Csikszentmihalyi also argues that the flow is very helpful to engage in creative discovery of new ideas and inventions [1]. Software development can benefit from flow because the need to identify an optimal architecture and the best structure for object interactions can be a creative activity.
Agile development methods also thrive from early and immediate feedback: The software should always be able to compile and to run the associated tests. To remain in the flow, it is helpful that compilation and test executions are quick, because otherwise developers may become distracted by the temptation to read emails, go for the next coffee, take an early lunch, or engage another co-worker in a conversation. Software development tools are vitally important for productive development and keeping developers in the flow zone.
When considering the current state of tooling for model-based software development (compared to just coding), an opportunity exists for new capabilities that help developers achieve flow. Currently, many tools are able to help with creating and editing large models, perform partial consistency checks, and generate code for multiple platforms. But in comparison with the available tooling for traditional general-purpose programming languages, there is still a large gap in tool capabilities. Models are often interacted with in a monolithic form, i.e., all models are processed in batch each time a code generation request is started. The time it takes to perform code generation and model checking may cause a disruption in the flow. If a code generation process (for a large industrial model, or a set of models within the project) takes longer than drinking a cup of coffee, software developers that use model-based techniques may lose their flow of concentration. They will not get the same feeling of satisfaction that would result from a better transition across the tool usage, which may hamper productivity when delays emerge. We hope that modeling tools will improve the opportunity for developers to achieve flow through improved tool implementation, but also by better modeling languages that enhance modularity and incremental compilation.
https://link.springer.com/article/10.1007/s10270-017-0621-x
| 1.0 | The importance of flow in software development -
From social and psychological theories and studies [1], we know that there exists a mental state called “flow” that allows individuals to concentrate deeply on a specific task without noticing the surrounding environment or the time, while remaining fully aware of the current work that they are doing.
In a recent TV broadcast about cognitive brain function, several illustrative examples were given, such as a free climber who says that he is at peak performance when he completely forgets about the world and the danger associated with the climb, but fully concentrates on the rocks and all the next moves he is planning to make. It was also shown that the world record holders in speed tasks, such as stacking cubes or solving Rubic’s Cube, do not use their cerebrum very intensively when executing the speed task. Only a small, but obviously efficient, part of the brain is concentrating on the task to be executed. Similar examples can be found among athletes and musicians who may occasionally get into a “groove” where performance and concentration reach a peak level.
Software developers that fully concentrate on their work also report this kind of flow, where only the relevant parts of the brain are focused on the core task. We can argue that software development is more complex and probably involves more parts of the brain than speed stacking, but it also seems that software development becomes more productive when the developer has the ability to reach flow for a large part of his or her working time.
This ability to reach and sustain flow depends partially on the developer’s own circumstances; for example, whether they get enough sleep, have little to no stress at home, and lead a safe and enjoyable life. To a large extent, this ability also depends on the concrete working circumstances. Is the room quiet? Can the developers work on their own for a long time without disturbances by phones, emails, or background noise in the room? It is also important that the developer is in a constant state of focus while thinking of the important issues that enable him or her to execute their task. In the software development context, it is helpful if the tooling aids productivity and does not slow down the focus time (e.g., compilation time should not take long). Agile development methods have the potential to capitalize on the opportunity of software developers to get into the flow and provide continuous improvement to the system they are developing. Mihaly Csikszentmihalyi also argues that the flow is very helpful to engage in creative discovery of new ideas and inventions [1]. Software development can benefit from flow because the need to identify an optimal architecture and the best structure for object interactions can be a creative activity.
Agile development methods also thrive from early and immediate feedback: The software should always be able to compile and to run the associated tests. To remain in the flow, it is helpful that compilation and test executions are quick, because otherwise developers may become distracted by the temptation to read emails, go for the next coffee, take an early lunch, or engage another co-worker in a conversation. Software development tools are vitally important for productive development and keeping developers in the flow zone.
When considering the current state of tooling for model-based software development (compared to just coding), an opportunity exists for new capabilities that help developers achieve flow. Currently, many tools are able to help with creating and editing large models, perform partial consistency checks, and generate code for multiple platforms. But in comparison with the available tooling for traditional general-purpose programming languages, there is still a large gap in tool capabilities. Models are often interacted with in a monolithic form, i.e., all models are processed in batch each time a code generation request is started. The time it takes to perform code generation and model checking may cause a disruption in the flow. If a code generation process (for a large industrial model, or a set of models within the project) takes longer than drinking a cup of coffee, software developers that use model-based techniques may lose their flow of concentration. They will not get the same feeling of satisfaction that would result from a better transition across the tool usage, which may hamper productivity when delays emerge. We hope that modeling tools will improve the opportunity for developers to achieve flow through improved tool implementation, but also by better modeling languages that enhance modularity and incremental compilation.
https://link.springer.com/article/10.1007/s10270-017-0621-x
| process | the importance of flow in software development from social and psychological theories and studies we know that there exists a mental state called “flow” that allows individuals to concentrate deeply on a specific task without noticing the surrounding environment or the time while remaining fully aware of the current work that they are doing in a recent tv broadcast about cognitive brain function several illustrative examples were given such as a free climber who says that he is at peak performance when he completely forgets about the world and the danger associated with the climb but fully concentrates on the rocks and all the next moves he is planning to make it was also shown that the world record holders in speed tasks such as stacking cubes or solving rubic’s cube do not use their cerebrum very intensively when executing the speed task only a small but obviously efficient part of the brain is concentrating on the task to be executed similar examples can be found among athletes and musicians who may occasionally get into a “groove” where performance and concentration reach a peak level software developers that fully concentrate on their work also report this kind of flow where only the relevant parts of the brain are focused on the core task we can argue that software development is more complex and probably involves more parts of the brain than speed stacking but it also seems that software development becomes more productive when the developer has the ability to reach flow for a large part of his or her working time this ability to reach and sustain flow depends partially on the developer’s own circumstances for example whether they get enough sleep have little to no stress at home and lead a safe and enjoyable life to a large extent this ability also depends on the concrete working circumstances is the room quiet can the developers work on their own for a long time without disturbances by phones emails or background noise in the room it is also 
important that the developer is in a constant state of focus while thinking of the important issues that enable him or her to execute their task in the software development context it is helpful if the tooling aids productivity and does not slow down the focus time e g compilation time should not take long agile development methods have the potential to capitalize on the opportunity of software developers to get into the flow and provide continuous improvement to the system they are developing mihaly csikszentmihalyi also argues that the flow is very helpful to engage in creative discovery of new ideas and inventions software development can benefit from flow because the need to identify an optimal architecture and the best structure for object interactions can be a creative activity agile development methods also thrive from early and immediate feedback the software should always be able to compile and to run the associated tests to remain in the flow it is helpful that compilation and test executions are quick because otherwise developers may become distracted by the temptation to read emails go for the next coffee take an early lunch or engage another co worker in a conversation software development tools are vitally important for productive development and keeping developers in the flow zone when considering the current state of tooling for model based software development compared to just coding an opportunity exists for new capabilities that help developers achieve flow currently many tools are able to help with creating and editing large models perform partial consistency checks and generate code for multiple platforms but in comparison with the available tooling for traditional general purpose programming languages there is still a large gap in tool capabilities models are often interacted with in a monolithic form i e all models are processed in batch each time a code generation request is started the time it takes to perform code generation and model 
checking may cause a disruption in the flow if a code generation process for a large industrial model or a set of models within the project takes longer than drinking a cup of coffee software developers that use model based techniques may lose their flow of concentration they will not get the same feeling of satisfaction that would result from a better transition across the tool usage which may hamper productivity when delays emerge we hope that modeling tools will improve the opportunity for developers to achieve flow through improved tool implementation but also by better modeling languages that enhance modularity and incremental compilation | 1 |
14,749 | 18,019,318,608 | IssuesEvent | 2021-09-16 17:18:10 | googleapis/python-datastore | https://api.github.com/repos/googleapis/python-datastore | closed | Doctest examples in docstrings are untested | type: process type: docs api: datastore | Somewhere along the way, the session which exercised the docstring doctests got dropped. I would propose that we either a) remove them as redundant to the samples maintained in [this repo](https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/datastore), or b) exercise them via the `system` session, using the [`pytest-doctest` integration](https://docs.pytest.org/en/6.2.x/doctest.html). | 1.0 | process | doctest examples in docstrings are untested somewhere along the way the session which exercised the docstring doctests got dropped i would propose that we either a remove them as redundant to the samples maintained in or b exercise them via the system session using the | 1
298,409 | 22,497,385,660 | IssuesEvent | 2022-06-23 08:47:19 | quarkusio/quarkus | https://api.github.com/repos/quarkusio/quarkus | closed | Not possible to include multiple config items in adoc if they both have Duration | kind/bug area/documentation | ### Describe the bug
If you have two different config items that use Duration and you include both their generated .adoc pages into a single page you get the following error:
```
asciidoctor: WARN: ed/config/quarkus-keycloak-devservices-keycloak-keycloak-build-time-config.adoc: line 99: id assigned to block already in use: duration-note-anchor
```
| 1.0 |
| non_process | not possible to include multiple config items in adoc if they both have duration describe the bug if you have two different config items that use duration and you include both their generated adoc pages into a single page you get the following error asciidoctor warn ed config quarkus keycloak devservices keycloak keycloak build time config adoc line id assigned to block already in use duration note anchor | 0 |
293,712 | 25,318,753,924 | IssuesEvent | 2022-11-18 00:44:16 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | Analista de Qualidade na [GI GROUP] | SALVADOR DESENVOLVIMENTO DE SOFTWARE C# SCRUM PLENO AGILE SQL TESTE AUTOMATIZADO TESTE DE INTEGRAÇÃO SELENIUM CUCUMBER NUNIT JUNIT SPECFLOW Stale | <!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Analista de Qualidade (C#)
SALÁRIO R$ 5.000,00 A R$ 5.500,00
## Local
Salvador - BA
## Benefícios
- Assistência médica
- Assistência odontológica
- Seguro de vida
- Vale alimentação
- Vale-refeição
- Vale-transporte
## Requisitos
- Selenium
- Cucumber
- Specflow.
- C#
- Nunit ou similar
- Junit ou similar
- Testes automatizados
- Testes manuais.
- Agile
- Scrum
- SQL
## Gi Group
A Gi Group é uma multinacional italiana reconhecida como uma das líderes globais em soluções dedicadas ao desenvolvimento do mercado de trabalho.
Nosso maior destaque está nas atividades de Recrutamento & Seleção, Administração de Temporários, projetos de Terceirização (Outsourcing), Trade Marketing, Treinamento e Consultoria Empresarial e Programa de Estágios.
## Como se candidatar
https://www.vagas.com.br/vagas/v1889774/analista-de-qualidade-c
| 2.0 |
| non_process | analista de qualidade na por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na analista de qualidade c salário r a r local salvador ba benefícios assistência médica assistência odontológica seguro de vida vale alimentação vale refeição vale transporte requisitos selenium cucumber specflow c nunit ou similar junit ou similar testes automatizados testes manuais agile scrum sql gi group a gi group é uma multinacional italiana reconhecida como uma das líderes globais em soluções dedicadas ao desenvolvimento do mercado de trabalho nosso maior destaque está nas atividades de recrutamento seleção administração de temporários projetos de terceirização outsourcing trade marketing treinamento e consultoria empresarial e programa de estágios como se candidatar | 0 |
284,815 | 30,913,701,795 | IssuesEvent | 2023-08-05 02:39:46 | Satheesh575555/linux-4.1.15_CVE-2022-45934 | https://api.github.com/repos/Satheesh575555/linux-4.1.15_CVE-2022-45934 | reopened | CVE-2017-12193 (Medium) detected in linuxlinux-4.6 | Mend: dependency security vulnerability | ## CVE-2017-12193 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Satheesh575555/linux-4.1.15_CVE-2022-45934/commit/7c0b143b43394df131d83e9aecb3c5518edc127a">7c0b143b43394df131d83e9aecb3c5518edc127a</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/assoc_array.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/assoc_array.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The assoc_array_insert_into_terminal_node function in lib/assoc_array.c in the Linux kernel before 4.13.11 mishandles node splitting, which allows local users to cause a denial of service (NULL pointer dereference and panic) via a crafted application, as demonstrated by the keyring key type, and key addition and link creation operations.
<p>Publish Date: 2017-11-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-12193>CVE-2017-12193</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-12193">https://nvd.nist.gov/vuln/detail/CVE-2017-12193</a></p>
<p>Release Date: 2017-11-22</p>
<p>Fix Resolution: 4.13.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_process | cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files lib assoc array c lib assoc array c vulnerability details the assoc array insert into terminal node function in lib assoc array c in the linux kernel before mishandles node splitting which allows local users to cause a denial of service null pointer dereference and panic via a crafted application as demonstrated by the keyring key type and key addition and link creation operations publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0
263,523 | 28,039,401,765 | IssuesEvent | 2023-03-28 17:18:55 | hyperledger/cacti | https://api.github.com/repos/hyperledger/cacti | opened | fix(keychain-memory-wasm): wee_alloc is Unmaintained GHSA-rc23-xxgq-x27g | bug dependencies Security Keychain P1 rust | In short, we need to migrate away from wee_alloc as per the GitHub security advisory: https://github.com/advisories/GHSA-rc23-xxgq-x27g
> wee_alloc (Rust) · [packages/cactus-plugin-keychain-memory-wasm/src/main/rust/cactus-plugin-keychain-memory-wasm/Cargo.toml](https://github.com/hyperledger/cacti/blob/-/packages/cactus-plugin-keychain-memory-wasm/src/main/rust/cactus-plugin-keychain-memory-wasm/Cargo.toml)
>
> Two of the maintainers have indicated that the crate may not be maintained.
> The crate has open issues including memory leaks and may not be suitable for production use.
> It may be best to switch to the default Rust standard allocator on wasm32 targets.
> Last release seems to have been three years ago.
https://github.com/hyperledger/cacti/security/dependabot/241 | True | non_process | fix keychain memory wasm wee alloc is unmaintained ghsa xxgq in short we need to migrate away from wee alloc as per the github security advisory wee alloc rust · two of the maintainers have indicated that the crate may not be maintained the crate has open issues including memory leaks and may not be suitable for production use it may be best to switch to the default rust standard allocator on targets last release seems to have been three years ago | 0
4,445 | 7,313,789,315 | IssuesEvent | 2018-03-01 03:05:16 | P2Poker/P2Poker | https://api.github.com/repos/P2Poker/P2Poker | opened | As a third-party developer, I need to be able to link libraries statically as well as dynamically | c) dev origin d) release 0.1 e) API e) dev tools f) priority 2 g) change request h) in process j) difficult workaround l) minor completion cost l) no risk l) no ux impact n) no impact n) no users affected o) as a third-party dev p) triage completed | ## Story **(REQUIRED)**
As a third-party developer, I need to be able to link libraries statically as well as dynamically.
## Explanation **(REQUIRED)**
Add static library build configurations to all library projects. | 1.0 | As a third-party developer, I need to be able to link libraries statically as well as dynamically - ## Story **(REQUIRED)**
As a third-party developer, I need to be able to link libraries statically as well as dynamically.
## Explanation **(REQUIRED)**
Add static library build configurations to all library projects. | process | as a third party developer i need to be able to link libraries statically as well as dynamically story required as a third party developer i need to be able to link libraries statically as well as dynamically explanation required add static library build configurations to all library projects | 1 |
245,671 | 26,549,331,493 | IssuesEvent | 2023-01-20 05:32:44 | nidhi7598/linux-3.0.35_CVE-2022-45934 | https://api.github.com/repos/nidhi7598/linux-3.0.35_CVE-2022-45934 | opened | CVE-2019-11479 (High) detected in linux-stable-rtv3.8.6, linuxlinux-3.0.49 | security vulnerability | ## CVE-2019-11479 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-stable-rtv3.8.6</b>, <b>linuxlinux-3.0.49</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Jonathan Looney discovered that the Linux kernel default MSS is hard-coded to 48 bytes. This allows a remote peer to fragment TCP resend queues significantly more than if a larger MSS were enforced. A remote attacker could use this to cause a denial of service. This has been fixed in stable kernel releases 4.4.182, 4.9.182, 4.14.127, 4.19.52, 5.1.11, and is fixed in commits 967c05aee439e6e5d7d805e195b3a20ef5c433d6 and 5f3e2bf008c2221478101ee72f5cb4654b9fc363.
<p>Publish Date: 2019-06-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-11479>CVE-2019-11479</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11479">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11479</a></p>
<p>Release Date: 2020-10-20</p>
<p>Fix Resolution: release-1.3.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_process | cve high detected in linux stable linuxlinux cve high severity vulnerability vulnerable libraries linux stable linuxlinux vulnerability details jonathan looney discovered that the linux kernel default mss is hard coded to bytes this allows a remote peer to fragment tcp resend queues significantly more than if a larger mss were enforced a remote attacker could use this to cause a denial of service this has been fixed in stable kernel releases and is fixed in commits and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution release step up your open source security game with mend | 0
10,794 | 13,609,128,495 | IssuesEvent | 2020-09-23 04:21:56 | googleapis/java-document-ai | https://api.github.com/repos/googleapis/java-document-ai | closed | Dependency Dashboard | api: documentai type: process | This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-storage-1.x -->deps: update dependency com.google.cloud:google-cloud-storage to v1.113.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-10.x -->chore(deps): update dependency com.google.cloud:libraries-bom to v10
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| 1.0 |
| process | dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any deps update dependency com google cloud google cloud storage to chore deps update dependency com google cloud libraries bom to check this box to trigger a request for renovate to run again on this repository | 1 |
20,519 | 27,178,848,795 | IssuesEvent | 2023-02-18 11:02:52 | dita-ot/dita-ot | https://api.github.com/repos/dita-ot/dita-ot | closed | Conkeyref not resolved in map title when target phrases contain inner phrases | bug preprocess/conref | ## Expected Behavior
The conkeyref should be resolved in the DITA Map's title element:
<title>TITLE: <ph conkeyref="key/productName"/></title>
The target "ph" element looks like this:
<ph id="productName">some <ph id="otherId">other</ph> text</ph>
## Actual Behavior
The conkeyref is not resolved.
## Possible Solution
The conkeyref is to a phrase which has an inner phrase, if the inner phrase is removed, the conkeyref resolves.
## Steps to Reproduce
[conkeyref-map-title.zip](https://github.com/dita-ot/dita-ot/files/10567278/conkeyref-map-title.zip)
1. Publish the attached DITA Map to HTML
2. Look in the generated index.html, the map title is "TITLE:" instead of: "TITLE:some other text"
## Copy of the error message, log file or stack trace
No error message in console.
## Environment
* DITA-OT version: 4.0.1
* Operating system and version:
_(Linux, macOS, Windows)_
* How did you run DITA-OT?
_oXygen_
* Transformation type:
_(HTML5, PDF, custom, etc.)_
| 1.0 | Conkeyref not resolved in map title when target phrases contain inner phrases - ## Expected Behavior
The conkeyref should be resolved in the DITA Map's title element:
<title>TITLE: <ph conkeyref="key/productName"/></title>
The target "ph" element looks like this:
<ph id="productName">some <ph id="otherId">other</ph> text</ph>
## Actual Behavior
The conkeyref is not resolved.
## Possible Solution
The conkeyref is to a phrase which has an inner phrase, if the inner phrase is removed, the conkeyref resolves.
## Steps to Reproduce
[conkeyref-map-title.zip](https://github.com/dita-ot/dita-ot/files/10567278/conkeyref-map-title.zip)
1. Publish the attached DITA Map to HTML
2. Look in the generated index.html, the map title is "TITLE:" instead of: "TITLE:some other text"
## Copy of the error message, log file or stack trace
No error message in console.
## Environment
* DITA-OT version: 4.0.1
* Operating system and version:
_(Linux, macOS, Windows)_
* How did you run DITA-OT?
_oXygen_
* Transformation type:
_(HTML5, PDF, custom, etc.)_
| process | conkeyref not resolved in map title when target phrases contain inner phrases expected behavior the conkeyref should be resolved in the dita map s title element title the target ph element looks like this some other text actual behavior the conkeyref is not resolved possible solution the conkeyref is to a phrase which has an inner phrase if the inner phrase is removed the conkeyref resolves steps to reproduce publish the attached dita map to html look in the generated index html the map title is title instead of title some other text copy of the error message log file or stack trace no error message in console environment dita ot version operating system and version linux macos windows how did you run dita ot oxygen transformation type pdf custom etc | 1 |
117,107 | 25,041,289,798 | IssuesEvent | 2022-11-04 21:05:38 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | closed | Project folder should be configurable (so that Refine + Projects can run off of a USB key) | enhancement imported from old code repo priority: Medium persistence documentation workspace | _Original author: stefa...@google.com (October 26, 2011 15:29:41)_
Currently Refine stores project data in a OS-specific location but the user is not allowed to reconfigure it. It should be possible instead to allow Refine to store data in a folder relative to where it is installed, thus allowing one to run Refine directly from portable media (such as USB keys) along with its projects.
_Original issue: http://code.google.com/p/google-refine/issues/detail?id=471_
| 1.0 | Project folder should be configurable (so that Refine + Projects can run off of a USB key) - _Original author: stefa...@google.com (October 26, 2011 15:29:41)_
Currently Refine stores project data in a OS-specific location but the user is not allowed to reconfigure it. It should be possible instead to allow Refine to store data in a folder relative to where it is installed, thus allowing one to run Refine directly from portable media (such as USB keys) along with its projects.
_Original issue: http://code.google.com/p/google-refine/issues/detail?id=471_
| non_process | project folder should be configurable so that refine projects can run off of a usb key original author stefa google com october currently refine stores project data in a os specific location but the user is not allowed to reconfigure it it should be possible instead to allow refine to store data in a folder relative to where it is installed thus allowing one to run refine directly from portable media such as usb keys along with its projects original issue | 0 |
573,567 | 17,023,672,515 | IssuesEvent | 2021-07-03 03:13:29 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | [amenity-points] More sophisticated rendering of parking | Component: mapnik Priority: trivial Resolution: duplicate Type: enhancement | **[Submitted to the original trac issue database at 1.54pm, Tuesday, 18th January 2011]**
Hi,
think we had a seperation between normal surface parking (and private ones) and multistoreies. Unfortunatly now they are rendered with the same 'P' icon.
I recommend to render multistories with a P and roof even for higher zoomlevels so drivers get a clue where they might park in a city. | 1.0 | [amenity-points] More sophisticated rendering of parking - **[Submitted to the original trac issue database at 1.54pm, Tuesday, 18th January 2011]**
Hi,
think we had a seperation between normal surface parking (and private ones) and multistoreies. Unfortunatly now they are rendered with the same 'P' icon.
I recommend to render multistories with a P and roof even for higher zoomlevels so drivers get a clue where they might park in a city. | non_process | more sophisticated rendering of parking hi think we had a seperation between normal surface parking and private ones and multistoreies unfortunatly now they are rendered with the same p icon i recommend to render multistories with a p and roof even for higher zoomlevels so drivers get a clue where they might park in a city | 0 |
68,110 | 17,152,958,099 | IssuesEvent | 2021-07-14 00:19:38 | sstsimulator/sst-elements | https://api.github.com/repos/sstsimulator/sst-elements | closed | Nightly testing for make-dist on sst-elements provides a false pass if distribution files are missing some ref files | Bug SST-BuildSystem | Issue #1683 indicated that the simple carwash test was failing due to a missing test ref file. This should have been caught by our nightly testing of the distribution.
Analysis of the nightly build/test system for make-dist discovered that the frameworks build/test script (copied from the old bamboo script) copied the reference files from the devel branch to the extracted distribution files that were being tested. This allowed some tests that are missing reference files from the distribution to **falsely** pass.
Upon testing of a corrected build/test script, it was discovered that the following elements will fail testing:
* **simpleSimulation**
* **prospero**
* **zodiac**
NOTE: This affects the SST 11.0 release.
A new PR will soon be generated to correct the Makefile.am for the appropriate elements, and an updated frameworks build/test system will be merged. | 1.0 | Nightly testing for make-dist on sst-elements provides a false pass if distribution files are missing some ref files - Issue #1683 indicated that the simple carwash test was failing due to a missing test ref file. This should have been caught by our nightly testing of the distribution.
Analysis of the nightly build/test system for make-dist discovered that the frameworks build/test script (copied from the old bamboo script) copied the reference files from the devel branch to the extracted distribution files that were being tested. This allowed some tests that are missing reference files from the distribution to **falsely** pass.
Upon testing of a corrected build/test script, it was discovered that the following elements will fail testing:
* **simpleSimulation**
* **prospero**
* **zodiac**
NOTE: This affects the SST 11.0 release.
A new PR will soon be generated to correct the Makefile.am for the appropriate elements, and an updated frameworks build/test system will be merged. | non_process | nightly testing for make dist on sst elements provides a false pass if distribution files are missing some ref files issue indicated that the simple carwash test was failing due to a missing test ref file this should have been caught by our nightly testing of the distribution analysis of the nightly build test system for make dist discovered that the frameworks build test script copied from the old bamboo script copied the reference files from the devel branch to the extracted distribution files that were being tested this allowed some tests that are missing reference files from the distribution to falsely pass upon testing of a corrected build test script it was discovered that the following elements will fail testing simplesimulation prospero zodiac note this affects the sst release a new pr will soon be generated to correct the makefile am for the appropriate elements and an updated frameworks build test system will be merged | 0 |
14,561 | 17,688,910,471 | IssuesEvent | 2021-08-24 07:28:31 | ppy/osu-web | https://api.github.com/repos/ppy/osu-web | closed | Audio preview missing from a very old map | area:beatmap-processing | for https://osu.ppy.sh/beatmapsets/28991 which was ranked in 2011, the preview is an empty .mp3 file, response headers if you need them are below. I suspect this may be not the only map without the preview.
```
HTTP/2 200 OK
date: Wed, 02 Jun 2021 22:16:17 GMT
content-type: audio/mpeg
content-length: 0
x-amz-id-2: XNaE2Nz05D1aK8xZUn/GI8Joo+aiZj7r99x9VtEbp2/P4FlCMsFUf1ged7UTVq+BQUzLY8fIViw=
x-amz-request-id: 1YXGGNS6BD7Y991Z
last-modified: Fri, 21 Mar 2014 00:10:01 GMT
etag: "d41d8cd98f00b204e9800998ecf8427e"
x-amz-meta-s3cmd-attrs: uid:1000/gname:memcache/uname:memcache/gid:1000/mode:33279/mtime:1307960108/atime:1381156277/ctime:1382689962
expires: Wed, 09 Jun 2021 13:40:16 GMT
cache-control: public, max-age=604800
strict-transport-security: max-age=31536000; includeSubDomains; preload
cf-cache-status: HIT
age: 27545
accept-ranges: bytes
cf-request-id: 0a706591f300000221bb9c6000000001
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
x-content-type-options: nosniff
server: cloudflare
cf-ray: 65940b965b770221-ZRH
alt-svc: h3-27=":443"; ma=86400, h3-28=":443"; ma=86400, h3-29=":443"; ma=86400, h3=":443"; ma=86400
X-Firefox-Spdy: h2
```
another oddity (which I don't feel deserves a separate issue) is https://osu.ppy.sh/beatmapsets/32969, which has a full 1m37s-long audio for a preview, despite the beatmap having the preview point set | 1.0 | Audio preview missing from a very old map - for https://osu.ppy.sh/beatmapsets/28991 which was ranked in 2011, the preview is an empty .mp3 file, response headers if you need them are below. I suspect this may be not the only map without the preview.
```
HTTP/2 200 OK
date: Wed, 02 Jun 2021 22:16:17 GMT
content-type: audio/mpeg
content-length: 0
x-amz-id-2: XNaE2Nz05D1aK8xZUn/GI8Joo+aiZj7r99x9VtEbp2/P4FlCMsFUf1ged7UTVq+BQUzLY8fIViw=
x-amz-request-id: 1YXGGNS6BD7Y991Z
last-modified: Fri, 21 Mar 2014 00:10:01 GMT
etag: "d41d8cd98f00b204e9800998ecf8427e"
x-amz-meta-s3cmd-attrs: uid:1000/gname:memcache/uname:memcache/gid:1000/mode:33279/mtime:1307960108/atime:1381156277/ctime:1382689962
expires: Wed, 09 Jun 2021 13:40:16 GMT
cache-control: public, max-age=604800
strict-transport-security: max-age=31536000; includeSubDomains; preload
cf-cache-status: HIT
age: 27545
accept-ranges: bytes
cf-request-id: 0a706591f300000221bb9c6000000001
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
x-content-type-options: nosniff
server: cloudflare
cf-ray: 65940b965b770221-ZRH
alt-svc: h3-27=":443"; ma=86400, h3-28=":443"; ma=86400, h3-29=":443"; ma=86400, h3=":443"; ma=86400
X-Firefox-Spdy: h2
```
another oddity (which I don't feel deserves a separate issue) is https://osu.ppy.sh/beatmapsets/32969, which has a full 1m37s-long audio for a preview, despite the beatmap having the preview point set | process | audio preview missing from a very old map for which was ranked in the preview is an empty file response headers if you need them are below i suspect this may be not the only map without the preview http ok date wed jun gmt content type audio mpeg content length x amz id x amz request id last modified fri mar gmt etag x amz meta attrs uid gname memcache uname memcache gid mode mtime atime ctime expires wed jun gmt cache control public max age strict transport security max age includesubdomains preload cf cache status hit age accept ranges bytes cf request id expect ct max age report uri x content type options nosniff server cloudflare cf ray zrh alt svc ma ma ma ma x firefox spdy another oddity which i don t feel deserves a separate issue is which has a full long audio for a preview despite the beatmap having the preview point set | 1 |
178,163 | 14,661,834,354 | IssuesEvent | 2020-12-29 05:18:17 | Danya0x07/linear-axis | https://api.github.com/repos/Danya0x07/linear-axis | closed | Улучшения документации. | documentation | Пояснить за аббривеатуры, за кроссплатформенность и независимость от IDE используемой системы сборки. | 1.0 | Улучшения документации. - Пояснить за аббривеатуры, за кроссплатформенность и независимость от IDE используемой системы сборки. | non_process | улучшения документации пояснить за аббривеатуры за кроссплатформенность и независимость от ide используемой системы сборки | 0 |
6,928 | 10,091,353,112 | IssuesEvent | 2019-07-26 14:03:49 | danderson/metallb | https://api.github.com/repos/danderson/metallb | closed | stable/metallb chart not updated | blocked-by-upstream bug process | Documentation says to use helm stable/metallb (when using helm) but that chart is not updated to use 0.8 version of metallb as https://github.com/danderson/metallb/tree/v0.8/helm-chart is. | 1.0 | stable/metallb chart not updated - Documentation says to use helm stable/metallb (when using helm) but that chart is not updated to use 0.8 version of metallb as https://github.com/danderson/metallb/tree/v0.8/helm-chart is. | process | stable metallb chart not updated documentation says to use helm stable metallb when using helm but that chart is not updated to use version of metallb as is | 1 |
657,709 | 21,801,935,836 | IssuesEvent | 2022-05-16 06:36:50 | therealbluepandabear/PixaPencil | https://api.github.com/repos/therealbluepandabear/PixaPencil | closed | [Bug] Rotation state not being saved when user exports their art to PNG/JPG | 🐛 bug low priority difficulty: easy v0.0.1 | Fix bug in which the rotation state of the user is not being saved when they export their project to a PNG/JPG file. | 1.0 | [Bug] Rotation state not being saved when user exports their art to PNG/JPG - Fix bug in which the rotation state of the user is not being saved when they export their project to a PNG/JPG file. | non_process | rotation state not being saved when user exports their art to png jpg fix bug in which the rotation state of the user is not being saved when they export their project to a png jpg file | 0 |
382,924 | 26,524,493,297 | IssuesEvent | 2023-01-19 07:25:39 | RWD-data-environment-in-Hospital/Documents | https://api.github.com/repos/RWD-data-environment-in-Hospital/Documents | closed | 環境変数入力後、「再起動してから、タスクバーの検索で...」とあるが、PCの再起動で良いか。 | documentation | ## **対象のドキュメント:Atlasセットアップ手順**
■3.2.4 OMOP 共通データモデルテーブルの作成
環境変数入力後、「再起動してから、タスクバーの検索で...」とあるが、PCの再起動で良いか。 | 1.0 | 環境変数入力後、「再起動してから、タスクバーの検索で...」とあるが、PCの再起動で良いか。 - ## **対象のドキュメント:Atlasセットアップ手順**
■3.2.4 OMOP 共通データモデルテーブルの作成
環境変数入力後、「再起動してから、タスクバーの検索で...」とあるが、PCの再起動で良いか。 | non_process | 環境変数入力後、「再起動してから、タスクバーの検索で 」とあるが、pcの再起動で良いか。 対象のドキュメント:atlasセットアップ手順 ■ . . omop 共通データモデルテーブルの作成 環境変数入力後、「再起動してから、タスクバーの検索で 」とあるが、pcの再起動で良いか。 | 0 |
90,716 | 3,829,576,327 | IssuesEvent | 2016-03-31 11:19:16 | biocore/qiita | https://api.github.com/repos/biocore/qiita | closed | Allow meta-analysis of different processed data types | method addition priority: high | This can include any target gene type or data type (i.e. metabolomics & 16S, for example) | 1.0 | Allow meta-analysis of different processed data types - This can include any target gene type or data type (i.e. metabolomics & 16S, for example) | non_process | allow meta analysis of different processed data types this can include any target gene type or data type i e metabolomics for example | 0 |
29,025 | 7,048,531,681 | IssuesEvent | 2018-01-02 18:04:24 | OpenRIAServices/OpenRiaServices | https://api.github.com/repos/OpenRIAServices/OpenRiaServices | closed | Unable to build codebase in release mode | CodePlexMigrationInitiated Impact: Unassigned | When attempting to build the code in a release build I am getting the following error building Desktop\OpenRiaServices.DomainServices.EntityFramework.
Steps I've done to reproduce this error:-
Download latest code.
Extract code to a folder.
Opened in VS2013.
Switched to build Release.
Build.
3>C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1605,5): warning MSB3245: Could not resolve this reference. Could not locate the assembly "EntityFramework". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.
3>C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1605,5): warning MSB3245: Could not resolve this reference. Could not locate the assembly "EntityFramework.SqlServer". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.
3>C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1605,5): warning MSB3243: No way to resolve conflict between "EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" and "EntityFramework". Choosing "EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" arbitrarily.
3>C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1605,5): warning MSB3243: No way to resolve conflict between "EntityFramework.SqlServer, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" and "EntityFramework.SqlServer". Choosing "EntityFramework.SqlServer, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" arbitrarily.
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(143,87,143,113): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedAttribute' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(150,93,150,119): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedAttribute' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(150,165,150,188): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedOption' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(154,93,154,119): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedAttribute' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(154,165,154,188): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedOption' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(173,46,173,72): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedAttribute' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(173,116,173,139): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedOption' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
#### This work item was migrated from CodePlex
CodePlex work item ID: '87'
Vote count: '1'
| 1.0 | Unable to build codebase in release mode - When attempting to build the code in a release build I am getting the following error building Desktop\OpenRiaServices.DomainServices.EntityFramework.
Steps I've done to reproduce this error:-
Download latest code.
Extract code to a folder.
Opened in VS2013.
Switched to build Release.
Build.
3>C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1605,5): warning MSB3245: Could not resolve this reference. Could not locate the assembly "EntityFramework". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.
3>C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1605,5): warning MSB3245: Could not resolve this reference. Could not locate the assembly "EntityFramework.SqlServer". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.
3>C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1605,5): warning MSB3243: No way to resolve conflict between "EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" and "EntityFramework". Choosing "EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" arbitrarily.
3>C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1605,5): warning MSB3243: No way to resolve conflict between "EntityFramework.SqlServer, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" and "EntityFramework.SqlServer". Choosing "EntityFramework.SqlServer, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" arbitrarily.
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(143,87,143,113): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedAttribute' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(150,93,150,119): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedAttribute' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(150,165,150,188): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedOption' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(154,93,154,119): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedAttribute' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(154,165,154,188): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedOption' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(173,46,173,72): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedAttribute' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
7>D:\Code\OpenRiaRelease\OpenRiaServices.DomainServices.EntityFramework\Framework\LinqToEntitiesTypeDescriptor.cs(173,116,173,139): error CS0433: The type 'System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedOption' exists in both 'd:\Code\OpenRiaRelease\packages\EntityFramework.6.1.1\lib\net40\EntityFramework.dll' and 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.ComponentModel.DataAnnotations.dll'
#### This work item was migrated from CodePlex
CodePlex work item ID: '87'
Vote count: '1'
| non_process | unable to build codebase in release mode when attempting to build the code in a release build i am getting the following error building desktop openriaservices domainservices entityframework steps i ve done to reproduce this error download latest code extract code to a folder opened in switched to build release build c windows microsoft net framework microsoft common targets warning could not resolve this reference could not locate the assembly entityframework check to make sure the assembly exists on disk if this reference is required by your code you may get compilation errors c windows microsoft net framework microsoft common targets warning could not resolve this reference could not locate the assembly entityframework sqlserver check to make sure the assembly exists on disk if this reference is required by your code you may get compilation errors c windows microsoft net framework microsoft common targets warning no way to resolve conflict between entityframework version culture neutral publickeytoken and entityframework choosing entityframework version culture neutral publickeytoken arbitrarily c windows microsoft net framework microsoft common targets warning no way to resolve conflict between entityframework sqlserver version culture neutral publickeytoken and entityframework sqlserver choosing entityframework sqlserver version culture neutral publickeytoken arbitrarily d code openriarelease openriaservices domainservices entityframework framework linqtoentitiestypedescriptor cs error the type system componentmodel dataannotations schema databasegeneratedattribute exists in both d code openriarelease packages entityframework lib entityframework dll and c program files reference assemblies microsoft framework netframework system componentmodel dataannotations dll d code openriarelease openriaservices domainservices entityframework framework linqtoentitiestypedescriptor cs error the type system componentmodel dataannotations schema 
databasegeneratedattribute exists in both d code openriarelease packages entityframework lib entityframework dll and c program files reference assemblies microsoft framework netframework system componentmodel dataannotations dll d code openriarelease openriaservices domainservices entityframework framework linqtoentitiestypedescriptor cs error the type system componentmodel dataannotations schema databasegeneratedoption exists in both d code openriarelease packages entityframework lib entityframework dll and c program files reference assemblies microsoft framework netframework system componentmodel dataannotations dll d code openriarelease openriaservices domainservices entityframework framework linqtoentitiestypedescriptor cs error the type system componentmodel dataannotations schema databasegeneratedattribute exists in both d code openriarelease packages entityframework lib entityframework dll and c program files reference assemblies microsoft framework netframework system componentmodel dataannotations dll d code openriarelease openriaservices domainservices entityframework framework linqtoentitiestypedescriptor cs error the type system componentmodel dataannotations schema databasegeneratedoption exists in both d code openriarelease packages entityframework lib entityframework dll and c program files reference assemblies microsoft framework netframework system componentmodel dataannotations dll d code openriarelease openriaservices domainservices entityframework framework linqtoentitiestypedescriptor cs error the type system componentmodel dataannotations schema databasegeneratedattribute exists in both d code openriarelease packages entityframework lib entityframework dll and c program files reference assemblies microsoft framework netframework system componentmodel dataannotations dll d code openriarelease openriaservices domainservices entityframework framework linqtoentitiestypedescriptor cs error the type system componentmodel dataannotations schema 
databasegeneratedoption exists in both d code openriarelease packages entityframework lib entityframework dll and c program files reference assemblies microsoft framework netframework system componentmodel dataannotations dll this work item was migrated from codeplex codeplex work item id vote count | 0 |
53,903 | 13,262,479,614 | IssuesEvent | 2020-08-20 21:53:04 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | [icetray] svn revision info required (Trac #2284) | Migrated from Trac combo core defect | If cmake can't detect the svn revision, the build fails:
```text
In file included from <command-line>:0:0:
/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/metaprojects/combo/V00-00-00-RC1/icetray/private/icetray/I3TrayInfoService.cxx: In member function 'I3TrayInfo I3TrayInfoService::GetConfig()':
/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/RHEL_7_x86_64/metaprojects/combo/V00-00-00-RC1/icetray/CMakeFiles/workspace_config.h:29:22: error: 'SVN_REVISION' was not declared in this scope
#define SVN_REVISION SVN_REVISION-NOTFOUND
^
/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/metaprojects/combo/V00-00-00-RC1/icetray/private/icetray/I3TrayInfoService.cxx:63:29: note: in expansion of macro 'SVN_REVISION'
the_config.svn_revision = SVN_REVISION;
^
/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/RHEL_7_x86_64/metaprojects/combo/V00-00-00-RC1/icetray/CMakeFiles/workspace_config.h:29:35: error: 'NOTFOUND' was not declared in this scope
#define SVN_REVISION SVN_REVISION-NOTFOUND
^
/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/metaprojects/combo/V00-00-00-RC1/icetray/private/icetray/I3TrayInfoService.cxx:63:29: note: in expansion of macro 'SVN_REVISION'
the_config.svn_revision = SVN_REVISION;
^
```
This is blocking the cvmfs build for combo.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2284">https://code.icecube.wisc.edu/projects/icecube/ticket/2284</a>, reported by david.schultz and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-05-16T22:09:18",
"_ts": "1558044558386076",
"description": "If cmake can't detect the svn revision, the build fails:\n\n{{{\nIn file included from <command-line>:0:0:\n/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/metaprojects/combo/V00-00-00-RC1/icetray/private/icetray/I3TrayInfoService.cxx: In member function 'I3TrayInfo I3TrayInfoService::GetConfig()':\n/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/RHEL_7_x86_64/metaprojects/combo/V00-00-00-RC1/icetray/CMakeFiles/workspace_config.h:29:22: error: 'SVN_REVISION' was not declared in this scope\n#define SVN_REVISION SVN_REVISION-NOTFOUND\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/metaprojects/combo/V00-00-00-RC1/icetray/private/icetray/I3TrayInfoService.cxx:63:29: note: in expansion of macro 'SVN_REVISION'\n the_config.svn_revision = SVN_REVISION;\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/RHEL_7_x86_64/metaprojects/combo/V00-00-00-RC1/icetray/CMakeFiles/workspace_config.h:29:35: error: 'NOTFOUND' was not declared in this scope\n#define SVN_REVISION SVN_REVISION-NOTFOUND\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/metaprojects/combo/V00-00-00-RC1/icetray/private/icetray/I3TrayInfoService.cxx:63:29: note: in expansion of macro 'SVN_REVISION'\n the_config.svn_revision = SVN_REVISION;\n ^\n}}}\n\nThis is blocking the cvmfs build for combo.",
"reporter": "david.schultz",
"cc": "olivas",
"resolution": "fixed",
"time": "2019-05-16T20:06:50",
"component": "combo core",
"summary": "[icetray] svn revision info required",
"priority": "critical",
"keywords": "",
"milestone": "Vernal Equinox 2019",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [icetray] svn revision info required (Trac #2284) - If cmake can't detect the svn revision, the build fails:
```text
In file included from <command-line>:0:0:
/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/metaprojects/combo/V00-00-00-RC1/icetray/private/icetray/I3TrayInfoService.cxx: In member function 'I3TrayInfo I3TrayInfoService::GetConfig()':
/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/RHEL_7_x86_64/metaprojects/combo/V00-00-00-RC1/icetray/CMakeFiles/workspace_config.h:29:22: error: 'SVN_REVISION' was not declared in this scope
#define SVN_REVISION SVN_REVISION-NOTFOUND
^
/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/metaprojects/combo/V00-00-00-RC1/icetray/private/icetray/I3TrayInfoService.cxx:63:29: note: in expansion of macro 'SVN_REVISION'
the_config.svn_revision = SVN_REVISION;
^
/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/RHEL_7_x86_64/metaprojects/combo/V00-00-00-RC1/icetray/CMakeFiles/workspace_config.h:29:35: error: 'NOTFOUND' was not declared in this scope
#define SVN_REVISION SVN_REVISION-NOTFOUND
^
/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/metaprojects/combo/V00-00-00-RC1/icetray/private/icetray/I3TrayInfoService.cxx:63:29: note: in expansion of macro 'SVN_REVISION'
the_config.svn_revision = SVN_REVISION;
^
```
This is blocking the cvmfs build for combo.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2284">https://code.icecube.wisc.edu/projects/icecube/ticket/2284</a>, reported by david.schultz and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-05-16T22:09:18",
"_ts": "1558044558386076",
"description": "If cmake can't detect the svn revision, the build fails:\n\n{{{\nIn file included from <command-line>:0:0:\n/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/metaprojects/combo/V00-00-00-RC1/icetray/private/icetray/I3TrayInfoService.cxx: In member function 'I3TrayInfo I3TrayInfoService::GetConfig()':\n/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/RHEL_7_x86_64/metaprojects/combo/V00-00-00-RC1/icetray/CMakeFiles/workspace_config.h:29:22: error: 'SVN_REVISION' was not declared in this scope\n#define SVN_REVISION SVN_REVISION-NOTFOUND\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/metaprojects/combo/V00-00-00-RC1/icetray/private/icetray/I3TrayInfoService.cxx:63:29: note: in expansion of macro 'SVN_REVISION'\n the_config.svn_revision = SVN_REVISION;\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/RHEL_7_x86_64/metaprojects/combo/V00-00-00-RC1/icetray/CMakeFiles/workspace_config.h:29:35: error: 'NOTFOUND' was not declared in this scope\n#define SVN_REVISION SVN_REVISION-NOTFOUND\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3.1.1/metaprojects/combo/V00-00-00-RC1/icetray/private/icetray/I3TrayInfoService.cxx:63:29: note: in expansion of macro 'SVN_REVISION'\n the_config.svn_revision = SVN_REVISION;\n ^\n}}}\n\nThis is blocking the cvmfs build for combo.",
"reporter": "david.schultz",
"cc": "olivas",
"resolution": "fixed",
"time": "2019-05-16T20:06:50",
"component": "combo core",
"summary": "[icetray] svn revision info required",
"priority": "critical",
"keywords": "",
"milestone": "Vernal Equinox 2019",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| non_process | svn revision info required trac if cmake can t detect the svn revision the build fails text in file included from cvmfs icecube opensciencegrid org metaprojects combo icetray private icetray cxx in member function getconfig cvmfs icecube opensciencegrid org rhel metaprojects combo icetray cmakefiles workspace config h error svn revision was not declared in this scope define svn revision svn revision notfound cvmfs icecube opensciencegrid org metaprojects combo icetray private icetray cxx note in expansion of macro svn revision the config svn revision svn revision cvmfs icecube opensciencegrid org rhel metaprojects combo icetray cmakefiles workspace config h error notfound was not declared in this scope define svn revision svn revision notfound cvmfs icecube opensciencegrid org metaprojects combo icetray private icetray cxx note in expansion of macro svn revision the config svn revision svn revision this is blocking the cvmfs build for combo migrated from json status closed changetime ts description if cmake can t detect the svn revision the build fails n n nin file included from n cvmfs icecube opensciencegrid org metaprojects combo icetray private icetray cxx in member function getconfig n cvmfs icecube opensciencegrid org rhel metaprojects combo icetray cmakefiles workspace config h error svn revision was not declared in this scope n define svn revision svn revision notfound n n cvmfs icecube opensciencegrid org metaprojects combo icetray private icetray cxx note in expansion of macro svn revision n the config svn revision svn revision n n cvmfs icecube opensciencegrid org rhel metaprojects combo icetray cmakefiles workspace config h error notfound was not declared in this scope n define svn revision svn revision notfound n n cvmfs icecube opensciencegrid org metaprojects combo icetray private icetray cxx note in expansion of macro svn revision n the config svn revision svn revision n n n nthis is blocking the cvmfs build for combo reporter david 
schultz cc olivas resolution fixed time component combo core summary svn revision info required priority critical keywords milestone vernal equinox owner nega type defect | 0 |
1,865 | 4,691,710,706 | IssuesEvent | 2016-10-11 11:37:40 | tomchristie/django-rest-framework | https://api.github.com/repos/tomchristie/django-rest-framework | closed | Track ongoing deprecation | Process | While we have a deprecation policy, we don't track the ongoing deprecations.
We need to have something similar to https://docs.djangoproject.com/en/1.8/internals/deprecation/ to help us conform to the policy. | 1.0 | Track ongoing deprecation - While we have a deprecation policy, we don't track the ongoing deprecations.
We need to have something similar to https://docs.djangoproject.com/en/1.8/internals/deprecation/ to help us conform to the policy. | process | track ongoing deprecation while we have a deprecation policy we don t track the ongoing deprecations we need to have something similar to to help us conform to the policy | 1 |
349,241 | 24,939,450,494 | IssuesEvent | 2022-10-31 17:39:03 | CryptoBlades/cryptoblades | https://api.github.com/repos/CryptoBlades/cryptoblades | closed | [Doc] - Set addFee to 0 on bazaar | documentation | Made the following call on BSC to help alleviate transaction fees for users:
NFTMarket.setAddValue(0)
Previously, addFee() was returning (an int128 ABDK64x64 fraction representing the old-oracle USD value):
'368934881474191032'
passing it into usdToSkill:
'318930388020575'
Meaning it was about 0.0003189, practically nothing. It made no sense to charge this amount.
The old oracle (used in this calculation) is stuck at $62/skill, which is why the fee comes out so small.
| 1.0 | [Doc] - Set addFee to 0 on bazaar - Made the following call on BSC to help alleviate transaction fees for users:
NFTMarket.setAddValue(0)
Previously, addFee() was returning (an int128 ABDK64x64 fraction representing the old-oracle USD value):
'368934881474191032'
passing it into usdToSkill:
'318930388020575'
Meaning it was about 0.0003189, practically nothing. It made no sense to charge this amount.
The old oracle (used in this calculation) is stuck at $62/skill, which is why the fee comes out so small.
| non_process | set addfee to on bazaar made the following call on bsc to help alleviate transaction fees for users nftmarket setaddvalue previously addfee was returning an fraction old oracled usd value passing it into usdtoskill meaning it was about practically nothing it made no sense to charge this amount old oracle that is used in this calculation is stuck at skill that s why it comes out so small | 0 |
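As a sanity check on the fee numbers in the addFee issue above, here is a small Java sketch. It assumes the standard ABDK64x64 convention (the stored int128 equals the real value multiplied by 2^64) and an 18-decimal SKILL token; the class and method names are illustrative, not taken from the CryptoBlades contracts.

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class FeeCheck {
    // ABDK64x64 fixed point: the stored int128 equals the real value * 2^64
    static final BigDecimal TWO_POW_64 = new BigDecimal("18446744073709551616");

    // Convert a raw ABDK64x64 value (given as a decimal string) to a plain decimal
    public static BigDecimal fromAbdk(String raw) {
        return new BigDecimal(raw).divide(TWO_POW_64, MathContext.DECIMAL64);
    }

    public static void main(String[] args) {
        // addFee() return value quoted in the issue
        BigDecimal usdFee = fromAbdk("368934881474191032");
        // usdToSkill() result, interpreted as an 18-decimal token amount
        BigDecimal skillFee = new BigDecimal("318930388020575").movePointLeft(18);
        // implied oracle price in USD per SKILL
        BigDecimal implied = usdFee.divide(skillFee, 4, RoundingMode.HALF_UP);

        System.out.println("fee in USD:        " + usdFee);   // ~0.02
        System.out.println("fee in SKILL:      " + skillFee); // ~0.000318930
        System.out.println("implied USD/SKILL: " + implied);  // ~62.7
    }
}
```

So the fee was worth roughly $0.02, and the implied price of about $62.7/SKILL is consistent with the stuck old-oracle value mentioned in the issue.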
367,179 | 10,850,532,330 | IssuesEvent | 2019-11-13 09:02:00 | googleapis/java-monitoring | https://api.github.com/repos/googleapis/java-monitoring | closed | Synthesis failed for java-monitoring | autosynth failure priority: p1 type: bug | Hello! Autosynth couldn't regenerate java-monitoring. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to branch 'autosynth'
Running synthtool
['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--']
synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py.
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
latest: Pulling from googleapis/artman
Digest: sha256:545c758c76c3f779037aa259023ec3d1ef2d57d2c8cd00a222cb187d63ceac5e
Status: Image is up to date for googleapis/artman:latest
synthtool > Cloning googleapis.
synthtool > Running generator for google/monitoring/artman_monitoring.yaml.
synthtool > Failed executing docker run --name artman-docker --rm -i -e HOST_USER_ID=1000 -e HOST_GROUP_ID=1000 -e RUNNING_IN_ARTMAN_DOCKER=True -v /home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis -v /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles -w /home/kbuilder/.cache/synthtool/googleapis googleapis/artman:latest /bin/bash -c artman --local --config google/monitoring/artman_monitoring.yaml generate java_gapic:
artman> Final args:
artman> api_name: monitoring
artman> api_version: v3
artman> artifact_type: GAPIC
artman> aspect: ALL
artman> gapic_code_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/java/gapic-google-cloud-monitoring-v3
artman> gapic_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/monitoring/v3/monitoring_gapic.yaml
artman> generator_args: null
artman> import_proto_path:
artman> - /home/kbuilder/.cache/synthtool/googleapis
artman> language: java
artman> organization_name: google-cloud
artman> output_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles
artman> proto_deps:
artman> - name: google-common-protos
artman> proto_package: ''
artman> release_level: ga
artman> root_dir: /home/kbuilder/.cache/synthtool/googleapis
artman> samples: ''
artman> service_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/monitoring/monitoring.yaml
artman> src_proto_path:
artman> - /home/kbuilder/.cache/synthtool/googleapis/google/monitoring/v3
artman> toolkit_path: /toolkit
artman>
artman> Creating GapicClientPipeline.
artman.output >
WARNING: toplevel: (lint) control-presence: Service monitoring.googleapis.com does not have control environment configured.
WARNING: /home/kbuilder/.cache/synthtool/googleapis/google/monitoring/monitoring.yaml:148: auth rule has selector(s) 'google.monitoring.v3.AgentTranslationService.CreateCollectdTimeSeries' that do not match and are not shadowed by other rules.
ERROR: toplevel: interface not reachable: google.monitoring.v3.ServiceMonitoringService.
WARNING: toplevel: (lint) control-presence: Service monitoring.googleapis.com does not have control environment configured.
WARNING: /home/kbuilder/.cache/synthtool/googleapis/google/monitoring/monitoring.yaml:148: auth rule has selector(s) 'google.monitoring.v3.AgentTranslationService.CreateCollectdTimeSeries' that do not match and are not shadowed by other rules.
ERROR: toplevel: interface not reachable: google.monitoring.v3.ServiceMonitoringService.
artman> Traceback (most recent call last):
File "/artman/artman/cli/main.py", line 72, in main
engine.run()
File "/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py", line 247, in run
for _state in self.run_iter(timeout=timeout):
File "/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py", line 340, in run_iter
failure.Failure.reraise_if_any(er_failures)
File "/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py", line 339, in reraise_if_any
failures[0].reraise()
File "/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py", line 346, in reraise
six.reraise(*self._exc_info)
File "/usr/local/lib/python3.5/dist-packages/six.py", line 696, in reraise
raise value
File "/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
result = task.execute(**arguments)
File "/artman/artman/tasks/gapic_tasks.py", line 148, in execute
task_utils.gapic_gen_task(toolkit_path, [gapic_artifact] + args))
File "/artman/artman/tasks/task_base.py", line 64, in exec_command
raise e
File "/artman/artman/tasks/task_base.py", line 56, in exec_command
output = subprocess.check_output(args, stderr=subprocess.STDOUT)
File "/usr/lib/python3.5/subprocess.py", line 626, in check_output
**kwargs).stdout
File "/usr/lib/python3.5/subprocess.py", line 708, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['java', '-cp', '/toolkit/build/libs/gapic-generator-latest-fatjar.jar', 'com.google.api.codegen.GeneratorMain', 'LEGACY_GAPIC_AND_PACKAGE', '--descriptor_set=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/google-cloud-monitoring-v3.desc', '--package_yaml2=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/java_google-cloud-monitoring-v3_package2.yaml', '--output=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/java/gapic-google-cloud-monitoring-v3', '--language=java', '--service_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/monitoring/monitoring.yaml', '--gapic_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/monitoring/v3/monitoring_gapic.yaml']' returned non-zero exit status 1
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 87, in <module>
main()
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 79, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "/tmpfs/src/git/autosynth/working_repo/synth.py", line 32, in <module>
artman_output_name='')
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py", line 64, in java_library
return self._generate_code(service, version, "java", **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py", line 138, in _generate_code
generator_args=generator_args,
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/artman.py", line 141, in run
shell.run(cmd, cwd=root_dir)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['docker', 'run', '--name', 'artman-docker', '--rm', '-i', '-e', 'HOST_USER_ID=1000', '-e', 'HOST_GROUP_ID=1000', '-e', 'RUNNING_IN_ARTMAN_DOCKER=True', '-v', '/home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis', '-v', '/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles', '-w', PosixPath('/home/kbuilder/.cache/synthtool/googleapis'), 'googleapis/artman:latest', '/bin/bash', '-c', 'artman --local --config google/monitoring/artman_monitoring.yaml generate java_gapic']' returned non-zero exit status 32.
synthtool > Cleaned up 0 temporary directories.
synthtool > Wrote metadata to synth.metadata.
Synthesis failed
```
Google internal developers can see the full log [here](https://sponge/953f472e-3476-4164-95ec-c933c3653fdb).
| 1.0 | Synthesis failed for java-monitoring - Hello! Autosynth couldn't regenerate java-monitoring. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to branch 'autosynth'
Running synthtool
['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--']
synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py.
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
latest: Pulling from googleapis/artman
Digest: sha256:545c758c76c3f779037aa259023ec3d1ef2d57d2c8cd00a222cb187d63ceac5e
Status: Image is up to date for googleapis/artman:latest
synthtool > Cloning googleapis.
synthtool > Running generator for google/monitoring/artman_monitoring.yaml.
synthtool > Failed executing docker run --name artman-docker --rm -i -e HOST_USER_ID=1000 -e HOST_GROUP_ID=1000 -e RUNNING_IN_ARTMAN_DOCKER=True -v /home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis -v /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles -w /home/kbuilder/.cache/synthtool/googleapis googleapis/artman:latest /bin/bash -c artman --local --config google/monitoring/artman_monitoring.yaml generate java_gapic:
artman> Final args:
artman> api_name: monitoring
artman> api_version: v3
artman> artifact_type: GAPIC
artman> aspect: ALL
artman> gapic_code_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/java/gapic-google-cloud-monitoring-v3
artman> gapic_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/monitoring/v3/monitoring_gapic.yaml
artman> generator_args: null
artman> import_proto_path:
artman> - /home/kbuilder/.cache/synthtool/googleapis
artman> language: java
artman> organization_name: google-cloud
artman> output_dir: /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles
artman> proto_deps:
artman> - name: google-common-protos
artman> proto_package: ''
artman> release_level: ga
artman> root_dir: /home/kbuilder/.cache/synthtool/googleapis
artman> samples: ''
artman> service_yaml: /home/kbuilder/.cache/synthtool/googleapis/google/monitoring/monitoring.yaml
artman> src_proto_path:
artman> - /home/kbuilder/.cache/synthtool/googleapis/google/monitoring/v3
artman> toolkit_path: /toolkit
artman>
artman> Creating GapicClientPipeline.
artman.output >
WARNING: toplevel: (lint) control-presence: Service monitoring.googleapis.com does not have control environment configured.
WARNING: /home/kbuilder/.cache/synthtool/googleapis/google/monitoring/monitoring.yaml:148: auth rule has selector(s) 'google.monitoring.v3.AgentTranslationService.CreateCollectdTimeSeries' that do not match and are not shadowed by other rules.
ERROR: toplevel: interface not reachable: google.monitoring.v3.ServiceMonitoringService.
WARNING: toplevel: (lint) control-presence: Service monitoring.googleapis.com does not have control environment configured.
WARNING: /home/kbuilder/.cache/synthtool/googleapis/google/monitoring/monitoring.yaml:148: auth rule has selector(s) 'google.monitoring.v3.AgentTranslationService.CreateCollectdTimeSeries' that do not match and are not shadowed by other rules.
ERROR: toplevel: interface not reachable: google.monitoring.v3.ServiceMonitoringService.
artman> Traceback (most recent call last):
File "/artman/artman/cli/main.py", line 72, in main
engine.run()
File "/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py", line 247, in run
for _state in self.run_iter(timeout=timeout):
File "/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/engine.py", line 340, in run_iter
failure.Failure.reraise_if_any(er_failures)
File "/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py", line 339, in reraise_if_any
failures[0].reraise()
File "/usr/local/lib/python3.5/dist-packages/taskflow/types/failure.py", line 346, in reraise
six.reraise(*self._exc_info)
File "/usr/local/lib/python3.5/dist-packages/six.py", line 696, in reraise
raise value
File "/usr/local/lib/python3.5/dist-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
result = task.execute(**arguments)
File "/artman/artman/tasks/gapic_tasks.py", line 148, in execute
task_utils.gapic_gen_task(toolkit_path, [gapic_artifact] + args))
File "/artman/artman/tasks/task_base.py", line 64, in exec_command
raise e
File "/artman/artman/tasks/task_base.py", line 56, in exec_command
output = subprocess.check_output(args, stderr=subprocess.STDOUT)
File "/usr/lib/python3.5/subprocess.py", line 626, in check_output
**kwargs).stdout
File "/usr/lib/python3.5/subprocess.py", line 708, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['java', '-cp', '/toolkit/build/libs/gapic-generator-latest-fatjar.jar', 'com.google.api.codegen.GeneratorMain', 'LEGACY_GAPIC_AND_PACKAGE', '--descriptor_set=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/google-cloud-monitoring-v3.desc', '--package_yaml2=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/java_google-cloud-monitoring-v3_package2.yaml', '--output=/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/java/gapic-google-cloud-monitoring-v3', '--language=java', '--service_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/monitoring/monitoring.yaml', '--gapic_yaml=/home/kbuilder/.cache/synthtool/googleapis/google/monitoring/v3/monitoring_gapic.yaml']' returned non-zero exit status 1
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 87, in <module>
main()
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 79, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "/tmpfs/src/git/autosynth/working_repo/synth.py", line 32, in <module>
artman_output_name='')
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py", line 64, in java_library
return self._generate_code(service, version, "java", **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/gapic_generator.py", line 138, in _generate_code
generator_args=generator_args,
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/artman.py", line 141, in run
shell.run(cmd, cwd=root_dir)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['docker', 'run', '--name', 'artman-docker', '--rm', '-i', '-e', 'HOST_USER_ID=1000', '-e', 'HOST_GROUP_ID=1000', '-e', 'RUNNING_IN_ARTMAN_DOCKER=True', '-v', '/home/kbuilder/.cache/synthtool/googleapis:/home/kbuilder/.cache/synthtool/googleapis', '-v', '/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles:/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles', '-w', PosixPath('/home/kbuilder/.cache/synthtool/googleapis'), 'googleapis/artman:latest', '/bin/bash', '-c', 'artman --local --config google/monitoring/artman_monitoring.yaml generate java_gapic']' returned non-zero exit status 32.
synthtool > Cleaned up 0 temporary directories.
synthtool > Wrote metadata to synth.metadata.
Synthesis failed
```
Google internal developers can see the full log [here](https://sponge/953f472e-3476-4164-95ec-c933c3653fdb).
| non_process | synthesis failed for java monitoring hello autosynth couldn t regenerate java monitoring broken heart here s the output from running synth py cloning into working repo switched to branch autosynth running synthtool synthtool executing tmpfs src git autosynth working repo synth py synthtool ensuring dependencies synthtool pulling artman image latest pulling from googleapis artman digest status image is up to date for googleapis artman latest synthtool cloning googleapis synthtool running generator for google monitoring artman monitoring yaml synthtool failed executing docker run name artman docker rm i e host user id e host group id e running in artman docker true v home kbuilder cache synthtool googleapis home kbuilder cache synthtool googleapis v home kbuilder cache synthtool googleapis artman genfiles home kbuilder cache synthtool googleapis artman genfiles w home kbuilder cache synthtool googleapis googleapis artman latest bin bash c artman local config google monitoring artman monitoring yaml generate java gapic artman final args artman api name monitoring artman api version artman artifact type gapic artman aspect all artman gapic code dir home kbuilder cache synthtool googleapis artman genfiles java gapic google cloud monitoring artman gapic yaml home kbuilder cache synthtool googleapis google monitoring monitoring gapic yaml artman generator args null artman import proto path artman home kbuilder cache synthtool googleapis artman language java artman organization name google cloud artman output dir home kbuilder cache synthtool googleapis artman genfiles artman proto deps artman name google common protos artman proto package artman release level ga artman root dir home kbuilder cache synthtool googleapis artman samples artman service yaml home kbuilder cache synthtool googleapis google monitoring monitoring yaml artman src proto path artman home kbuilder cache synthtool googleapis google monitoring artman toolkit path toolkit artman artman 
creating gapicclientpipeline artman output warning toplevel lint control presence service monitoring googleapis com does not have control environment configured warning home kbuilder cache synthtool googleapis google monitoring monitoring yaml auth rule has selector s google monitoring agenttranslationservice createcollectdtimeseries that do not match and are not shadowed by other rules error toplevel interface not reachable google monitoring servicemonitoringservice warning toplevel lint control presence service monitoring googleapis com does not have control environment configured warning home kbuilder cache synthtool googleapis google monitoring monitoring yaml auth rule has selector s google monitoring agenttranslationservice createcollectdtimeseries that do not match and are not shadowed by other rules error toplevel interface not reachable google monitoring servicemonitoringservice artman traceback most recent call last file artman artman cli main py line in main engine run file usr local lib dist packages taskflow engines action engine engine py line in run for state in self run iter timeout timeout file usr local lib dist packages taskflow engines action engine engine py line in run iter failure failure reraise if any er failures file usr local lib dist packages taskflow types failure py line in reraise if any failures reraise file usr local lib dist packages taskflow types failure py line in reraise six reraise self exc info file usr local lib dist packages six py line in reraise raise value file usr local lib dist packages taskflow engines action engine executor py line in execute task result task execute arguments file artman artman tasks gapic tasks py line in execute task utils gapic gen task toolkit path args file artman artman tasks task base py line in exec command raise e file artman artman tasks task base py line in exec command output subprocess check output args stderr subprocess stdout file usr lib subprocess py line in check output kwargs 
stdout file usr lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command returned non zero exit status traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth env lib site packages synthtool main py line in main file tmpfs src git autosynth env lib site packages click core py line in call return self main args kwargs file tmpfs src git autosynth env lib site packages click core py line in main rv self invoke ctx file tmpfs src git autosynth env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src git autosynth env lib site packages click core py line in invoke return callback args kwargs file tmpfs src git autosynth env lib site packages synthtool main py line in main spec loader exec module synth module type ignore file line in exec module file line in call with frames removed file tmpfs src git autosynth working repo synth py line in artman output name file tmpfs src git autosynth env lib site packages synthtool gcp gapic generator py line in java library return self generate code service version java kwargs file tmpfs src git autosynth env lib site packages synthtool gcp gapic generator py line in generate code generator args generator args file tmpfs src git autosynth env lib site packages synthtool gcp artman py line in run shell run cmd cwd root dir file tmpfs src git autosynth env lib site packages synthtool shell py line in run raise exc file tmpfs src git autosynth env lib site packages synthtool shell py line in run encoding utf file home kbuilder pyenv versions lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command returned non zero exit status synthtool cleaned up temporary directories synthtool wrote metadata to synth metadata synthesis failed 
google internal developers can see the full log | 0 |
15,459 | 19,673,904,819 | IssuesEvent | 2022-01-11 10:18:00 | eunseo2/JAVA | https://api.github.com/repos/eunseo2/JAVA | opened | Mapping | process | > Task :Mapping.main()
user1
user2
user3
JAVA
PYTHON
JAVASCRIPT
JAVA
PYTHON
JAVASCRIPT
map vs flatMap
**flatMap** returns its result as a stream, so you can chain a `forEach` directly onto the result of `flatMap` to print every element; `map`, by contrast, returns each result as a single (nested) element, so you have to loop over the result of `map` with `forEach` and then chain another `forEach` inside that loop. | 1.0 | Mapping - > Task :Mapping.main()
user1
user2
user3
JAVA
PYTHON
JAVASCRIPT
JAVA
PYTHON
JAVASCRIPT
map vs flatMap
**flatMap**은 결과를 스트림으로 반환하기 때문에 flatMap의 결과를 가지고 바로 forEach 메서드를 체이닝하여 모든 요소를 출력할 수 있는 반면, map의 경우에는 단일 요소로 리턴 되기 때문에 map의 결과를 가지고 forEach메서드로 루프를 진행한 후 그 내부에서 다시 한 번 forEach 메서드를 체이닝하여 사용해야 한다. | process | mapping task mapping main java python javascript java python javascript map vs flatmap flatmap 은 결과를 스트림으로 반환하기 때문에 flatmap의 결과를 가지고 바로 foreach 메서드를 체이닝하여 모든 요소를 출력할 수 있는 반면 map의 경우에는 단일 요소로 리턴 되기 때문에 map의 결과를 가지고 foreach메서드로 루프를 진행한 후 그 내부에서 다시 한 번 foreach 메서드를 체이닝하여 사용해야 한다 | 1 |
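The map-versus-flatMap note in the record above can be illustrated with a short sketch. The issue discusses Java streams; this is a Python analogue of the same distinction, where the doubly-nested comprehension plays the role of `flatMap` (flatten while mapping) and the plain comprehension plays the role of `map` (one, possibly nested, element per input):

```python
# Python analogue of the Java Stream map vs flatMap distinction above.
# "map" yields one element per input (here: one list per user), so
# printing every language needs a nested loop. "flatMap" flattens while
# mapping, so a single loop over the result is enough.

users = {
    "user1": ["JAVA", "PYTHON", "JAVASCRIPT"],
    "user2": ["JAVA", "PYTHON", "JAVASCRIPT"],
    "user3": ["JAVA", "PYTHON", "JAVASCRIPT"],
}

# map-style: one (nested) element per user
mapped = [langs for langs in users.values()]

# flatMap-style: one flat stream of languages
flat_mapped = [lang for langs in users.values() for lang in langs]

for langs in mapped:       # map result: loop, then loop again inside
    for lang in langs:
        print(lang)

for lang in flat_mapped:   # flatMap result: a single loop suffices
    print(lang)
```

The same shape holds in Java: `Stream<List<String>>` from `map` versus `Stream<String>` from `flatMap`.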
14,469 | 17,577,460,540 | IssuesEvent | 2021-08-15 22:04:10 | 2i2c-org/pilot-hubs | https://api.github.com/repos/2i2c-org/pilot-hubs | opened | Status Page for our clusters | type: enhancement :label: team-process :label: hub administrator prio: low | # Summary
There are many cases where our clusters might be down for one reason or another (e.g. upgrades, outages, etc). In those cases, it's helpful if there is a source of truth for "is 2i2c's infrastructure down, or is it just me?". We should have a place to point users to so that they can quickly answer this question.
# Important information
The most common service I've seen for this is [`statuspage.io`](https://www.atlassian.com/software/statuspage), which [even has a non-profit discount](https://support.atlassian.com/statuspage/docs/apply-for-a-community-open-source-or-academic-license/).
# Tasks to complete
- [ ] Decide what kind of service we'd like to use for a status page
- [ ] ...figure out steps to implement this
- [ ] Document the new status page in our user and team documentation | 1.0 | Status Page for our clusters - # Summary
There are many cases where our clusters might be down for one reason or another (e.g. upgrades, outages, etc). In those cases, it's helpful if there is a source of truth for "is 2i2c's infrastructure down, or is it just me?". We should have a place to point users to so that they can quickly answer this question.
# Important information
The most common service I've seen for this is [`statuspage.io`](https://www.atlassian.com/software/statuspage), which [even has a non-profit discount](https://support.atlassian.com/statuspage/docs/apply-for-a-community-open-source-or-academic-license/).
# Tasks to complete
- [ ] Decide what kind of service we'd like to use for a status page
- [ ] ...figure out steps to implement this
- [ ] Document the new status page in our user and team documentation | process | status page for our clusters summary there are many cases where our clusters might be down for one reason or another e g upgrades outages etc in those cases it s helpful if there is a source of truth for is s infrastructure down or is it just me we should have a place to point users to so that they can quickly answer this question important information the most common service i ve seen for this is which tasks to complete decide what kind of service we d like to use for a status page figure out steps to implement this document the new status page in our user and team documentation | 1 |
217,458 | 24,334,912,312 | IssuesEvent | 2022-10-01 01:09:02 | H-459/test4Gal | https://api.github.com/repos/H-459/test4Gal | closed | CVE-2020-36182 (High) detected in jackson-databind-2.9.9.jar - autoclosed | security vulnerability | ## CVE-2020-36182 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /BaragonCore/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/H-459/test4Gal/commit/659aa3eb63f125f4e5cbe927376da658f670c874">659aa3eb63f125f4e5cbe927376da658f670c874</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36182>CVE-2020-36182</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-01-07</p>
<p>Fix Resolution: 2.9.10.8</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| True | CVE-2020-36182 (High) detected in jackson-databind-2.9.9.jar - autoclosed - ## CVE-2020-36182 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /BaragonCore/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/H-459/test4Gal/commit/659aa3eb63f125f4e5cbe927376da658f670c874">659aa3eb63f125f4e5cbe927376da658f670c874</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36182>CVE-2020-36182</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-01-07</p>
<p>Fix Resolution: 2.9.10.8</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| non_process | cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file baragoncore pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar repository com fasterxml jackson core jackson databind jackson databind jar repository com fasterxml jackson core jackson databind jackson databind jar repository com fasterxml jackson core jackson databind jackson databind jar repository com fasterxml jackson core jackson databind jackson databind jar repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp cpdsadapter driveradaptercpds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution check this box to open an automated fix pr | 0 |
67,539 | 12,974,010,327 | IssuesEvent | 2020-07-21 14:50:17 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | SpecimenResults: "othercatalognumbers" display | Enhancement Function-CodeTables Function-Relationship Priority-High | "Othercatalognumbers" does not currently consider sort_order (from the code table) or base_url.
Given our current sort order - note "preparator number"...
<img width="935" alt="screen shot 2017-07-31 at 11 51 06 am" src="https://user-images.githubusercontent.com/5720791/28793055-99334058-75e6-11e7-990d-e32ab03c0268.png">
... the concatenation is by alpha-sort:
<img width="261" alt="screen shot 2017-07-31 at 11 52 07 am" src="https://user-images.githubusercontent.com/5720791/28793090-b4154088-75e6-11e7-83de-6af2742cd8e3.png">
Note preparator number hiding down at the bottom.
In SQL:
```
select replace(ConcatOtherId(flat.collection_object_id),'; ',chr(10)) AS ids from flat where guid='MSB:Mamm:271075';
IDS
------------------------------------------------------------------------------------------------------------------------
GenBank=KX754470.1
GenBank=KX754497.1
GenBank=KX754530.1
GenBank=KX754561.1
GenBank=KX754590.1
GenBank=KX754620.1
GenBank=KX754651.1
GenBank=KX754681.1
GenBank=KX754713.1
GenBank=KX754741.1
GenBank=KX754772.1
GenBank=KX754806.1
GenBank=KX754834.1
GenBank=KX754864.1
GenBank=KX754912.1
GenBank=KX754965.1
GenBank=KX755011.1
GenBank=KX755065.1
GenBank=KX755117.1
GenBank=KX755168.1
GenBank=KX755220.1
NK=264027
institutional catalog number=NMMNH 4348
preparator number=DJH 4822
```
I could just change the sort order - note preparator number now ordered properly:
```
select replace(ConcatOtherId2(flat.collection_object_id),'; ',chr(10)) AS ids from flat where guid='MSB:Mamm:271075';
IDS
------------------------------------------------------------------------------------------------------------------------
GenBank=KX754470.1
GenBank=KX754497.1
GenBank=KX754530.1
GenBank=KX754561.1
GenBank=KX754590.1
GenBank=KX754620.1
GenBank=KX754651.1
GenBank=KX754681.1
GenBank=KX754713.1
GenBank=KX754741.1
GenBank=KX754772.1
GenBank=KX754806.1
GenBank=KX754834.1
GenBank=KX754864.1
GenBank=KX754912.1
GenBank=KX754965.1
GenBank=KX755011.1
GenBank=KX755065.1
GenBank=KX755117.1
GenBank=KX755168.1
GenBank=KX755220.1
NK=264027
preparator number=DJH 4822
NMMNHS: New Mexico Museum of Natural History and Science=4348
institutional catalog number=NMMNH 4348
```
but perhaps we also want to include the link (made from base_url) for those IDs with it
```
select replace(ConcatOtherId3(flat.collection_object_id),'; ',chr(10)) AS ids from flat where guid='MSB:Mamm:271075';
IDS
------------------------------------------------------------------------------------------------------------------------
GenBank=KX754470.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754470.1)
GenBank=KX754497.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754497.1)
GenBank=KX754530.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754530.1)
GenBank=KX754561.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754561.1)
GenBank=KX754590.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754590.1)
GenBank=KX754620.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754620.1)
GenBank=KX754651.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754651.1)
GenBank=KX754681.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754681.1)
GenBank=KX754713.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754713.1)
GenBank=KX754741.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754741.1)
GenBank=KX754772.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754772.1)
GenBank=KX754806.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754806.1)
GenBank=KX754834.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754834.1)
GenBank=KX754864.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754864.1)
GenBank=KX754912.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754912.1)
GenBank=KX754965.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754965.1)
GenBank=KX755011.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX755011.1)
GenBank=KX755065.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX755065.1)
GenBank=KX755117.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX755117.1)
GenBank=KX755168.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX755168.1)
GenBank=KX755220.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX755220.1)
NK=264027
preparator number=DJH 4822
NMMNHS: New Mexico Museum of Natural History and Science=4348
institutional catalog number=NMMNH 4348
```
That will occasionally be long, but it seems wrong to exclude important data from the place where most otherIDs are encountered. The format could be whatever - just
```
GenBank=http://www.ncbi.nlm.nih.gov/nuccore/KX755220.1
```
or the URLS wrapped in HTML (nice in your browser, pretty funky in Excel), or WHATEVER.
Thoughts?
| 1.0 | SpecimenResults: "othercatalognumbers" display - "Othercatalognumbers" does not currently consider sort_order (from the code table) or base_url.
Given our current sort order - note "preparator number"...
<img width="935" alt="screen shot 2017-07-31 at 11 51 06 am" src="https://user-images.githubusercontent.com/5720791/28793055-99334058-75e6-11e7-990d-e32ab03c0268.png">
... the concatenation is by alpha-sort:
<img width="261" alt="screen shot 2017-07-31 at 11 52 07 am" src="https://user-images.githubusercontent.com/5720791/28793090-b4154088-75e6-11e7-83de-6af2742cd8e3.png">
Note preparator number hiding down at the bottom.
In SQL:
```
select replace(ConcatOtherId(flat.collection_object_id),'; ',chr(10)) AS ids from flat where guid='MSB:Mamm:271075';
IDS
------------------------------------------------------------------------------------------------------------------------
GenBank=KX754470.1
GenBank=KX754497.1
GenBank=KX754530.1
GenBank=KX754561.1
GenBank=KX754590.1
GenBank=KX754620.1
GenBank=KX754651.1
GenBank=KX754681.1
GenBank=KX754713.1
GenBank=KX754741.1
GenBank=KX754772.1
GenBank=KX754806.1
GenBank=KX754834.1
GenBank=KX754864.1
GenBank=KX754912.1
GenBank=KX754965.1
GenBank=KX755011.1
GenBank=KX755065.1
GenBank=KX755117.1
GenBank=KX755168.1
GenBank=KX755220.1
NK=264027
institutional catalog number=NMMNH 4348
preparator number=DJH 4822
```
I could just change the sort order - note preparator number now ordered properly:
```
select replace(ConcatOtherId2(flat.collection_object_id),'; ',chr(10)) AS ids from flat where guid='MSB:Mamm:271075';
IDS
------------------------------------------------------------------------------------------------------------------------
GenBank=KX754470.1
GenBank=KX754497.1
GenBank=KX754530.1
GenBank=KX754561.1
GenBank=KX754590.1
GenBank=KX754620.1
GenBank=KX754651.1
GenBank=KX754681.1
GenBank=KX754713.1
GenBank=KX754741.1
GenBank=KX754772.1
GenBank=KX754806.1
GenBank=KX754834.1
GenBank=KX754864.1
GenBank=KX754912.1
GenBank=KX754965.1
GenBank=KX755011.1
GenBank=KX755065.1
GenBank=KX755117.1
GenBank=KX755168.1
GenBank=KX755220.1
NK=264027
preparator number=DJH 4822
NMMNHS: New Mexico Museum of Natural History and Science=4348
institutional catalog number=NMMNH 4348
```
but perhaps we also want to include the link (made from base_url) for those IDs with it
```
select replace(ConcatOtherId3(flat.collection_object_id),'; ',chr(10)) AS ids from flat where guid='MSB:Mamm:271075';
IDS
------------------------------------------------------------------------------------------------------------------------
GenBank=KX754470.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754470.1)
GenBank=KX754497.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754497.1)
GenBank=KX754530.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754530.1)
GenBank=KX754561.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754561.1)
GenBank=KX754590.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754590.1)
GenBank=KX754620.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754620.1)
GenBank=KX754651.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754651.1)
GenBank=KX754681.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754681.1)
GenBank=KX754713.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754713.1)
GenBank=KX754741.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754741.1)
GenBank=KX754772.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754772.1)
GenBank=KX754806.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754806.1)
GenBank=KX754834.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754834.1)
GenBank=KX754864.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754864.1)
GenBank=KX754912.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754912.1)
GenBank=KX754965.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX754965.1)
GenBank=KX755011.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX755011.1)
GenBank=KX755065.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX755065.1)
GenBank=KX755117.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX755117.1)
GenBank=KX755168.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX755168.1)
GenBank=KX755220.1 (http://www.ncbi.nlm.nih.gov/nuccore/KX755220.1)
NK=264027
preparator number=DJH 4822
NMMNHS: New Mexico Museum of Natural History and Science=4348
institutional catalog number=NMMNH 4348
```
That will occasionally be long, but it seems wrong to exclude important data from the place where most otherIDs are encountered. The format could be whatever - just
```
GenBank=http://www.ncbi.nlm.nih.gov/nuccore/KX755220.1
```
or the URLS wrapped in HTML (nice in your browser, pretty funky in Excel), or WHATEVER.
Thoughts?
| non_process | specimenresults othercatalognumbers display othercatalognumbers does not currently consider sort order from the code table or base url given our current sort order note preparator number img width alt screen shot at am src the concatenation is by alpha sort img width alt screen shot at am src note preparator number hiding down at the bottom in sql select replace concatotherid flat collection object id chr as ids from flat where guid msb mamm ids genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank nk institutional catalog number nmmnh preparator number djh i could just change the sort order note preparator number now ordered properly select replace flat collection object id chr as ids from flat where guid msb mamm ids genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank nk preparator number djh nmmnhs new mexico museum of natural history and science institutional catalog number nmmnh but perhaps we also want to include the link made from base url for those ids with it select replace flat collection object id chr as ids from flat where guid msb mamm ids genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank genbank nk preparator number djh nmmnhs new mexico museum of natural history and science institutional catalog number nmmnh that will occasionally be long but it seems wrong to exclude important data from the place where most otherids are encountered the format could be whatever just genbank or the urls wrapped in html nice in your browser pretty funky in excel or whatever thoughts | 0 |
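The sort-order-aware concatenation discussed in the record above is easy to sketch. This is a hypothetical illustration, not Arctos's actual schema or `ConcatOtherId` implementation: the `sort_order` and `base_url` lookups stand in for the code-table columns the issue mentions, and the sample values are illustrative.

```python
# Hypothetical sketch of a sort_order-aware ConcatOtherId: order ID types
# by a code-table sort_order (alpha-sort within a type), and append the
# link built from base_url when the type has one. Values are illustrative.

sort_order = {"GenBank": 1, "NK": 2, "preparator number": 3}
base_url = {"GenBank": "http://www.ncbi.nlm.nih.gov/nuccore/"}

other_ids = [
    ("preparator number", "DJH 4822"),
    ("NK", "264027"),
    ("GenBank", "KX755220.1"),
    ("GenBank", "KX754470.1"),
]

def concat_other_ids(ids):
    # sort by (code-table order, value) so types group in configured order
    ordered = sorted(ids, key=lambda p: (sort_order.get(p[0], 99), p[1]))
    parts = []
    for id_type, value in ordered:
        url = base_url.get(id_type)
        suffix = f" ({url}{value})" if url else ""
        parts.append(f"{id_type}={value}{suffix}")
    return "; ".join(parts)

print(concat_other_ids(other_ids))
```

With this ordering, "preparator number" lands where the code table puts it instead of alpha-sorting to the bottom, and typed IDs with a `base_url` carry their link.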
660,726 | 21,996,837,112 | IssuesEvent | 2022-05-26 07:24:34 | harvester/harvester | https://api.github.com/repos/harvester/harvester | opened | [FEATURE] Bump golang to v1.17 and k8s version to v1.23.x | enhancement area/backend priority/1 | **Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like**
Bump golang to v1.17 and k8s version to v1.23.x
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| 1.0 | [FEATURE] Bump golang to v1.17 and k8s version to v1.23.x - **Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like**
Bump golang to v1.17 and k8s version to v1.23.x
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| non_process | bump golang to and version to x is your feature request related to a problem please describe describe the solution you d like bump golang to and version to x describe alternatives you ve considered additional context | 0 |
35,290 | 7,928,255,361 | IssuesEvent | 2018-07-06 10:56:29 | tarantool/graphql | https://api.github.com/repos/tarantool/graphql | closed | Move timeout_ms option to compile function | code health customer enhancement prio2 refactoring | `timeout_ms` is currently an option of `accessor.new()`
The point is that passing this parameter to `compile` is much better, because both simple and huge queries might be created for one accessor, and we want to support different timeouts for this queries. | 1.0 | Move timeout_ms option to compile function - `timeout_ms` is currently an option of `accessor.new()`
The point is that passing this parameter to `compile` is much better, because both simple and huge queries might be created for one accessor, and we want to support different timeouts for this queries. | non_process | move timeout ms option to compile function timeout ms is currently an option of accessor new the point is that passing this parameter to compile is much better because both simple and huge queries might be created for one accessor and we want to support different timeouts for this queries | 0 |
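The accessor-versus-compile trade-off described in the record above can be sketched in a few lines. This is a hypothetical Python sketch, not the tarantool/graphql API: the point is only that a `timeout_ms` accepted by the compile step lets simple and huge queries built on one accessor carry different timeouts, with the accessor-level value as a fallback default.

```python
# Hypothetical sketch (not the actual tarantool/graphql API): moving
# timeout_ms from accessor construction to compile lets each compiled
# query carry its own timeout, falling back to an accessor-wide default.

class Accessor:
    def __init__(self, default_timeout_ms=1000):
        self.default_timeout_ms = default_timeout_ms

def compile_query(accessor, query, timeout_ms=None):
    # per-query timeout overrides the accessor-wide default
    effective = timeout_ms if timeout_ms is not None else accessor.default_timeout_ms
    return {"query": query, "timeout_ms": effective}

acc = Accessor(default_timeout_ms=1000)
simple = compile_query(acc, "{ user { id } }", timeout_ms=100)
huge = compile_query(acc, "{ user { orders { items { name } } } }", timeout_ms=60000)
fallback = compile_query(acc, "{ ping }")
```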
703,253 | 24,150,481,664 | IssuesEvent | 2022-09-21 23:48:12 | googleapis/nodejs-ai-platform | https://api.github.com/repos/googleapis/nodejs-ai-platform | closed | AI platform create dataset tabular gcs: should create a new gcs tabular dataset in the parent resource failed | type: bug priority: p1 flakybot: issue api: vertex-ai | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: e1c5cd6b5d03afb03911ba9aa685457aa359a602
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/69dc3b23-298d-461c-9279-cd7551a90fc9), [Sponge](http://sponge2/69dc3b23-298d-461c-9279-cd7551a90fc9)
status: failed
<details><summary>Test output</summary><br><pre>Command failed: node ./create-dataset-tabular-gcs.js temp_create_dataset_tables_gcs_test_dbb45d8a-6ad7-4409-ad4c-9812adda24b6 gs://cloud-ml-tables-data/bank-marketing.csv ucaip-sample-tests us-central1
16 UNAUTHENTICATED: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
Error: Command failed: node ./create-dataset-tabular-gcs.js temp_create_dataset_tables_gcs_test_dbb45d8a-6ad7-4409-ad4c-9812adda24b6 gs://cloud-ml-tables-data/bank-marketing.csv ucaip-sample-tests us-central1
16 UNAUTHENTICATED: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
at checkExecSyncError (child_process.js:635:11)
at Object.execSync (child_process.js:671:15)
at execSync (test/create-dataset-tabular-gcs.test.js:25:28)
at Context.<anonymous> (test/create-dataset-tabular-gcs.test.js:37:20)
at processImmediate (internal/timers.js:461:21)</pre></details> | 1.0 | AI platform create dataset tabular gcs: should create a new gcs tabular dataset in the parent resource failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: e1c5cd6b5d03afb03911ba9aa685457aa359a602
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/69dc3b23-298d-461c-9279-cd7551a90fc9), [Sponge](http://sponge2/69dc3b23-298d-461c-9279-cd7551a90fc9)
status: failed
<details><summary>Test output</summary><br><pre>Command failed: node ./create-dataset-tabular-gcs.js temp_create_dataset_tables_gcs_test_dbb45d8a-6ad7-4409-ad4c-9812adda24b6 gs://cloud-ml-tables-data/bank-marketing.csv ucaip-sample-tests us-central1
16 UNAUTHENTICATED: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
Error: Command failed: node ./create-dataset-tabular-gcs.js temp_create_dataset_tables_gcs_test_dbb45d8a-6ad7-4409-ad4c-9812adda24b6 gs://cloud-ml-tables-data/bank-marketing.csv ucaip-sample-tests us-central1
16 UNAUTHENTICATED: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
at checkExecSyncError (child_process.js:635:11)
at Object.execSync (child_process.js:671:15)
at execSync (test/create-dataset-tabular-gcs.test.js:25:28)
at Context.<anonymous> (test/create-dataset-tabular-gcs.test.js:37:20)
at processImmediate (internal/timers.js:461:21)</pre></details> | non_process | ai platform create dataset tabular gcs should create a new gcs tabular dataset in the parent resource failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output command failed node create dataset tabular gcs js temp create dataset tables gcs test gs cloud ml tables data bank marketing csv ucaip sample tests us unauthenticated request had invalid authentication credentials expected oauth access token login cookie or other valid authentication credential see error command failed node create dataset tabular gcs js temp create dataset tables gcs test gs cloud ml tables data bank marketing csv ucaip sample tests us unauthenticated request had invalid authentication credentials expected oauth access token login cookie or other valid authentication credential see at checkexecsyncerror child process js at object execsync child process js at execsync test create dataset tabular gcs test js at context test create dataset tabular gcs test js at processimmediate internal timers js | 0 |
3,960 | 6,893,728,805 | IssuesEvent | 2017-11-23 06:24:42 | peterwebster/henson | https://api.github.com/repos/peterwebster/henson | closed | Script to deal with badly formed XML at end of Stage 2 | process refinement | If at Stage 1 there are instances of the date comments <!021274> being rendered in bold, the result at the end of Stage 2 is a pair of files, one of which has an unclosed <p tag at the end, before /body>, and a second which has an extraneous /p> tag before the opening <body tag.
Consider a script to run at the end of Stage 2 to find and remove both. (NB. Variable whitespace in various places to account for.) | 1.0 | Script to deal with badly formed XML at end of Stage 2 - If at Stage 1 there are instances of the date comments <!021274> being rendered in bold, the result at the end of Stage 2 is a pair of files, one of which has an unclosed <p tag at the end, before /body>, and a second which has an extraneous /p> tag before the opening <body tag.
Consider a script to run at the end of Stage 2 to find and remove both. (NB. Variable whitespace in various places to account for.) | process | script to deal with badly formed xml at end of stage if at stage there are instances of the date comments being rendered in bold the result at the end of stage is a pair of files one of which has an unclosed and a second which has an extraneous p tag before the opening body tag consider a script to run at the end of stage to find and remove both nb variable whitespace in various places to account for | 1 |
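The clean-up proposed in the record above is small enough to sketch. Assumptions (the function names and regexes are illustrative, not the project's actual script): the stray tokens are exactly a bare `<p` just before `</body>` in one file and an extraneous `/p>` just before the opening `<body` in the other, with variable whitespace between them.

```python
import re

# Sketch of the proposed Stage 2 clean-up: drop an unclosed "<p" left just
# before "</body>" in one file, and an extraneous "/p>" just before the
# opening "<body" in the other. \s* absorbs the variable whitespace the
# issue warns about. Names and regexes are illustrative.

def strip_trailing_open_p(text):
    # bare "<p" (no closing ">") plus whitespace, only when "</body>" follows
    return re.sub(r"<p\s*(?=</body>)", "", text)

def strip_leading_close_p(text):
    # stray "/p>" plus whitespace, only when "<body" follows
    return re.sub(r"/p>\s*(?=<body)", "", text)

print(strip_trailing_open_p("last line<p \n</body>"))
print(strip_leading_close_p("/p> \n<body><p>first"))
```

The lookaheads keep the substitutions anchored to the `</body>`/`<body` boundary, so properly paired `<p>…</p>` elements elsewhere in the file are untouched.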
8,673 | 11,806,989,638 | IssuesEvent | 2020-03-19 10:34:20 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | convex hull by class | Feature Request Feedback Processing | Author Name: **Alain FERRATON** (@FERRATON)
Original Redmine Issue: [21749](https://issues.qgis.org/issues/21749)
Redmine category:processing/qgis
---
it would be nice to be able to create
convex hull by class (with a field)
as for concave hull (k-nearest neighbor)
| 1.0 | convex hull by class - Author Name: **Alain FERRATON** (@FERRATON)
Original Redmine Issue: [21749](https://issues.qgis.org/issues/21749)
Redmine category:processing/qgis
---
it would be nice to be able to create
convex hull by class (with a field)
as for concave hull (k-nearest neighbor)
| process | convex hull by class author name alain ferraton ferraton original redmine issue redmine category processing qgis it would be nice to be able to create convex hull by class with a field as for concave hull k nearest neighbor | 1 |
7,565 | 10,682,242,738 | IssuesEvent | 2019-10-22 04:27:55 | qgis/QGIS-Documentation | https://api.github.com/repos/qgis/QGIS-Documentation | closed | [FEATURE][processing] New algorithm "Create style database from project" | 3.10 Automatic new feature Processing Alg | Original commit: https://github.com/qgis/QGIS/commit/08a985ac8ffbc829abb57499687274f8649ae355 by nyalldawson
Extracts all symbols, color ramps, text formats and label settings from
the current project and stores them in a new style XML database | 1.0 | [FEATURE][processing] New algorithm "Create style database from project" - Original commit: https://github.com/qgis/QGIS/commit/08a985ac8ffbc829abb57499687274f8649ae355 by nyalldawson
Extracts all symbols, color ramps, text formats and label settings from
the current project and stores them in a new style XML database | process | new algorithm create style database from project original commit by nyalldawson extracts all symbols color ramps text formats and label settings from the current project and stores them in a new style xml database | 1 |
51,463 | 7,702,596,663 | IssuesEvent | 2018-05-21 03:32:56 | hackoregon/civic-devops | https://api.github.com/repos/hackoregon/civic-devops | opened | Document the environment variables and values used in all running Travis repos | Priority: medium documentation-needed | Generate a google sheet of all the values of all the environment variables defined in the wide variety of Travis repos we have in use.
Export this sheet as a CSV file, and upload it to the hacko-devops S3 bucket for safekeeping. | 1.0 | Document the environment variables and values used in all running Travis repos - Generate a google sheet of all the values of all the environment variables defined in the wide variety of Travis repos we have in use.
Export this sheet as a CSV file, and upload it to the hacko-devops S3 bucket for safekeeping. | non_process | document the environment variables and values used in all running travis repos generate a google sheet of all the values of all the environment variables defined in the wide variety of travis repos we have in use export this sheet as a csv file and upload it to the hacko devops bucket for safekeeping | 0 |
20,803 | 27,562,152,893 | IssuesEvent | 2023-03-07 23:08:46 | PyCQA/pylint | https://api.github.com/repos/PyCQA/pylint | opened | Check args for `Process` and `Thread` | Enhancement ✨ multiprocessing | ### Current problem
`multiprocessing.Process` and `threading.Thread` are each initialized with a `target` function and a list / tuple of `args`, with `function` being called on `args`. Currently Pylint does not check whether the `args` are appropriate for the `target`.
```python
import threading
import multiprocessing
def add(a: int, b: int) -> None:
print(a + b)
threading.Thread(
target = add,
args = (4, 5),
).run() # prints 9
multiprocessing.Process(
target = add,
args = (6, 7),
).run() # prints 13
threading.Thread(
target = add,
args = (4,),
).run() # error at runtime, but no warning
multiprocessing.Process(
target = add,
args = (6,),
).run() # error at runtime, but no warning
```
### Desired solution
Pylint should check `args` against `target` and make sure they are appropriate (right number, etc).
I imagine it should be possible to re-use existing argument-checking infrastructure to do this.
### Additional context
Mypy doesn't check this either. | 1.0 | Check args for `Process` and `Thread` - ### Current problem
`multiprocessing.Process` and `threading.Thread` are each initialized with a `target` function and a list / tuple of `args`, with `function` being called on `args`. Currently Pylint does not check whether the `args` are appropriate for the `target`.
```python
import threading
import multiprocessing
def add(a: int, b: int) -> None:
print(a + b)
threading.Thread(
target = add,
args = (4, 5),
).run() # prints 9
multiprocessing.Process(
target = add,
args = (6, 7),
).run() # prints 13
threading.Thread(
target = add,
args = (4,),
).run() # error at runtime, but no warning
multiprocessing.Process(
target = add,
args = (6,),
).run() # error at runtime, but no warning
```
### Desired solution
Pylint should check `args` against `target` and make sure they are appropriate (right number, etc).
I imagine it should be possible to re-use existing argument-checking infrastructure to do this.
### Additional context
Mypy doesn't check this either. | process | check args for process and thread current problem multiprocessing process and threading thread are each initialized with a target function and a list tuple of args with function being called on args currently pylint does not check whether the args are appropriate for the target python import threading import multiprocessing def add a int b int none print a b threading thread target add args run prints multiprocessing process target add args run prints threading thread target add args run error at runtime but no warning multiprocessing process target add args run error at runtime but no warning desired solution pylint should check args against target and make sure they are appropriate right number etc i imagine it should be possible to re use existing argument checking infrastructure to do this additional context mypy doesn t check this either | 1 |
9,984 | 13,031,976,180 | IssuesEvent | 2020-07-28 02:53:03 | googleapis/repo-automation-bots | https://api.github.com/repos/googleapis/repo-automation-bots | opened | add test for bot's PubSub flow | type: process | We should add a regression test for jobs that originate in PubSub format.
@azizsonawalla added tracking ticket for adding a test to #758
see: #758 | 1.0 | add test for bot's PubSub flow - We should add a regression test for jobs that originate in PubSub format.
@azizsonawalla added tracking ticket for adding a test to #758
see: #758 | process | add test for bot s pubsub flow we should add a regression test for jobs that originate in pubsub format azizsonawalla added tracking ticket for adding a test to see | 1 |
73,948 | 7,370,608,207 | IssuesEvent | 2018-03-13 09:05:07 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | CrystalFieldPythonInterface system test crashes on Windows 7 | Quality: System Tests | The [master system tests](http://builds.mantidproject.org/view/Master%20Pipeline/job/master_systemtests-win7/568/testReport/junit/SystemTests/CrystalFieldPythonInterface/CrystalFieldPythonInterface/) are showing that the new `CrystalField.PythonInterface` test has started failing but only on Windows 7 and not 10.
| 1.0 | CrystalFieldPythonInterface system test crashes on Windows 7 - The [master system tests](http://builds.mantidproject.org/view/Master%20Pipeline/job/master_systemtests-win7/568/testReport/junit/SystemTests/CrystalFieldPythonInterface/CrystalFieldPythonInterface/) are showing that the new `CrystalField.PythonInterface` test has started failing but only on Windows 7 and not 10.
| non_process | crystalfieldpythoninterface system test crashes on windows the are showing that the new crystalfield pythoninterface test has started failing but only on windows and not | 0 |
15,653 | 19,846,820,444 | IssuesEvent | 2022-01-21 07:41:03 | ooi-data/CE07SHSM-MFD35-04-ADCPTC000-recovered_inst-adcp_velocity_earth | https://api.github.com/repos/ooi-data/CE07SHSM-MFD35-04-ADCPTC000-recovered_inst-adcp_velocity_earth | opened | 🛑 Processing failed: ValueError | process | ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T07:41:03.271887.
## Details
Flow name: `CE07SHSM-MFD35-04-ADCPTC000-recovered_inst-adcp_velocity_earth`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
| 1.0 | 🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T07:41:03.271887.
## Details
Flow name: `CE07SHSM-MFD35-04-ADCPTC000-recovered_inst-adcp_velocity_earth`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
| process | 🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered inst adcp velocity earth task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages 
dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got | 1 |
21,663 | 30,110,089,882 | IssuesEvent | 2023-06-30 06:57:45 | DevExpress/testcafe | https://api.github.com/repos/DevExpress/testcafe | closed | Console keeps showing an error Uncaught DOMException: Blocked a frame with origin from accessing a cross-origin frame. | TYPE: bug SYSTEM: hammerhead FREQUENCY: level 1 SYSTEM: iframe processing STATE: possibly fixed in native automation | <!--
If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below.
Make sure that you tried using the latest TestCafe version (https://github.com/DevExpress/testcafe/releases), where this behavior might have been already addressed.
Before submitting an issue, please check CONTRIBUTING.md and existing issues in this repository (https://github.com/DevExpress/testcafe/issues) in case a similar issue exists or was already addressed. This may save your time (and ours).
-->
### What is your Test Scenario?
The user is trying to log in
### What is the current behavior?
When a user tries to login from an iframe popup which has a redirect after successfully logging,user is stuck on the login popup and the user is not logged in
### What is the Expected behavior?
User is allowed to be logged in
### What is your web application and your TestCafe test code?
```js
fixture `MyFixture`
.page('https://www.hoefenhaag.nl/');
test('Test1', async t => {
await t
.maximizeWindow()
.click('button.module-cookies-close')
.click('a.button-full')
.click('div.js-navigation-login-heart-bg')
.switchToIframe('#login-iframe')
.typeText('input[name*="account_email"]', 'test08@mailinator.com')
.typeText('input[name*="account_password"]', 'Qwerty123$')
.click('#button-submit')
.switchToMainWindow();
});
```
### Steps to Reproduce:
Steps described above,
Error : `Uncaught DOMException: Blocked a frame with origin "xxxxxxx" from accessing a cross-origin frame.`
### Your Environment details:
* testcafe version: 1.9.1 also 1.9.4
* node.js version: 12.18.2
| 1.0 | Console keeps showing an error Uncaught DOMException: Blocked a frame with origin from accessing a cross-origin frame. - <!--
If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below.
Make sure that you tried using the latest TestCafe version (https://github.com/DevExpress/testcafe/releases), where this behavior might have been already addressed.
Before submitting an issue, please check CONTRIBUTING.md and existing issues in this repository (https://github.com/DevExpress/testcafe/issues) in case a similar issue exists or was already addressed. This may save your time (and ours).
-->
### What is your Test Scenario?
The user is trying to log in
### What is the current behavior?
When a user tries to login from an iframe popup which has a redirect after successfully logging,user is stuck on the login popup and the user is not logged in
### What is the Expected behavior?
User is allowed to be logged in
### What is your web application and your TestCafe test code?
```js
fixture `MyFixture`
.page('https://www.hoefenhaag.nl/');
test('Test1', async t => {
await t
.maximizeWindow()
.click('button.module-cookies-close')
.click('a.button-full')
.click('div.js-navigation-login-heart-bg')
.switchToIframe('#login-iframe')
.typeText('input[name*="account_email"]', 'test08@mailinator.com')
.typeText('input[name*="account_password"]', 'Qwerty123$')
.click('#button-submit')
.switchToMainWindow();
});
```
### Steps to Reproduce:
Steps described above,
Error : `Uncaught DOMException: Blocked a frame with origin "xxxxxxx" from accessing a cross-origin frame.`
### Your Environment details:
* testcafe version: 1.9.1 also 1.9.4
* node.js version: 12.18.2
| process | console keeps showing an error uncaught domexception blocked a frame with origin from accessing a cross origin frame if you have all reproduction steps with a complete sample app please share as many details as possible in the sections below make sure that you tried using the latest testcafe version where this behavior might have been already addressed before submitting an issue please check contributing md and existing issues in this repository in case a similar issue exists or was already addressed this may save your time and ours what is your test scenario the user is trying to log in what is the current behavior when a user tries to login from an iframe popup which has a redirect after successfully logging user is stuck on the login popup and the user is not logged in what is the expected behavior user is allowed to be logged in what is your web application and your testcafe test code js fixture myfixture page test async t await t maximizewindow click button module cookies close click a button full click div js navigation login heart bg switchtoiframe login iframe typetext input mailinator com typetext input click button submit switchtomainwindow steps to reproduce steps described above error uncaught domexception blocked a frame with origin xxxxxxx from accessing a cross origin frame your environment details testcafe version also node js version | 1 |
108,151 | 9,276,565,082 | IssuesEvent | 2019-03-20 03:29:57 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | opened | some YT publishers not appearing under BR panel when pasting URL | QA/Test-Plan-Specified QA/Yes bug feature/rewards | <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
As per https://github.com/brave/browser-android-tabs/issues/1317#issuecomment-472937673, the Rewards panel won't work when directly pasting specific YT publisher URLs into the ombibox.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. launch `0.61.51 Chromium: 73.0.3683.75` & enable rewards
2. paste https://www.youtube.com/watch?v=UvJl202PlNI into the URL
3. click on the BAT icon to reveal the Rewards panel
## Actual result:
<!--Please add screenshots if needed-->
<img width="1438" alt="Screen Shot 2019-03-14 at 12 01 17 PM" src="https://user-images.githubusercontent.com/2602313/54656886-cbbd1c00-4a9d-11e9-8b18-6a479505a789.png">
## Expected result:
<img width="1283" alt="Screen Shot 2019-03-19 at 11 22 34 PM" src="https://user-images.githubusercontent.com/2602313/54656919-e8f1ea80-4a9d-11e9-8512-4ad1575ba643.png">
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100% reproducible when using the above STR.
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
Brave | 0.61.51 Chromium: 73.0.3683.75 (Official Build) (64-bit)
-- | --
Revision | 909ee014fcea6828f9a610e6716145bc0b3ebf4a-refs/branch-heads/3683@{#803}
OS | Mac OS X
### Reproducible on current release: Yes, reproducible under `0.61.51 Chromium: 73.0.3683.75`
- Does it reproduce on brave-browser dev/beta builds? Yes, reproducible on all channels.
### Website problems only:
- Does the issue resolve itself when disabling Brave Shields? N/A
- Is the issue reproducible on the latest version of Chrome? N/A
### Additional Information
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
| 1.0 | some YT publishers not appearing under BR panel when pasting URL - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
As per https://github.com/brave/browser-android-tabs/issues/1317#issuecomment-472937673, the Rewards panel won't work when directly pasting specific YT publisher URLs into the ombibox.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. launch `0.61.51 Chromium: 73.0.3683.75` & enable rewards
2. paste https://www.youtube.com/watch?v=UvJl202PlNI into the URL
3. click on the BAT icon to reveal the Rewards panel
## Actual result:
<!--Please add screenshots if needed-->
<img width="1438" alt="Screen Shot 2019-03-14 at 12 01 17 PM" src="https://user-images.githubusercontent.com/2602313/54656886-cbbd1c00-4a9d-11e9-8b18-6a479505a789.png">
## Expected result:
<img width="1283" alt="Screen Shot 2019-03-19 at 11 22 34 PM" src="https://user-images.githubusercontent.com/2602313/54656919-e8f1ea80-4a9d-11e9-8512-4ad1575ba643.png">
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100% reproducible when using the above STR.
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
Brave | 0.61.51 Chromium: 73.0.3683.75 (Official Build) (64-bit)
-- | --
Revision | 909ee014fcea6828f9a610e6716145bc0b3ebf4a-refs/branch-heads/3683@{#803}
OS | Mac OS X
### Reproducible on current release: Yes, reproducible under `0.61.51 Chromium: 73.0.3683.75`
- Does it reproduce on brave-browser dev/beta builds? Yes, reproducible on all channels.
### Website problems only:
- Does the issue resolve itself when disabling Brave Shields? N/A
- Is the issue reproducible on the latest version of Chrome? N/A
### Additional Information
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
| non_process | some yt publishers not appearing under br panel when pasting url have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description as per the rewards panel won t work when directly pasting specific yt publisher urls into the ombibox steps to reproduce launch chromium enable rewards paste into the url click on the bat icon to reveal the rewards panel actual result img width alt screen shot at pm src expected result img width alt screen shot at pm src reproduces how often reproducible when using the above str brave version brave version info brave chromium official build bit revision refs branch heads os mac os x reproducible on current release yes reproducible under chromium does it reproduce on brave browser dev beta builds yes reproducible on all channels website problems only does the issue resolve itself when disabling brave shields n a is the issue reproducible on the latest version of chrome n a additional information | 0 |
99,361 | 8,698,460,018 | IssuesEvent | 2018-12-04 23:32:07 | trailofbits/deepstate | https://api.github.com/repos/trailofbits/deepstate | opened | Add a quiet mode | enhancement good first issue test replay usability | For regression, DeepState should be able to suppress all INFO logging statements, and maybe everything except FATALs. | 1.0 | Add a quiet mode - For regression, DeepState should be able to suppress all INFO logging statements, and maybe everything except FATALs. | non_process | add a quiet mode for regression deepstate should be able to suppress all info logging statements and maybe everything except fatals | 0 |
22,595 | 31,818,494,643 | IssuesEvent | 2023-09-13 22:55:45 | googleapis/nodejs-apigee-registry | https://api.github.com/repos/googleapis/nodejs-apigee-registry | closed | Your .repo-metadata.json file has a problem 🤒 | type: process repo-metadata: lint | You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'library_type' in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | 1.0 | Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'library_type' in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | process | your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 must have required property library type in repo metadata json release level must be equal to one of the allowed values in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions | 1 |
17,802 | 12,343,677,925 | IssuesEvent | 2020-05-15 04:51:39 | gluster/glusterfs | https://api.github.com/repos/gluster/glusterfs | closed | [RFE] geo-replication to s3 / glacier backend (or other REST API endpoint) | CB: geo-replication FA: Usability & Supportability Prio: High Type:Enhancement UseCase: ScaleOut NAS wontfix | Instead of syncing to remote gluster or any directory (through posix fops), can Gluster's geo-rep pick only frozen files (ie, files which are older than NN hours/days) and sync to s3 only during certain scheduled time in day.
Further enhancement can be for providing source directory, or regex of file pattern to sync. | True | [RFE] geo-replication to s3 / glacier backend (or other REST API endpoint) - Instead of syncing to remote gluster or any directory (through posix fops), can Gluster's geo-rep pick only frozen files (ie, files which are older than NN hours/days) and sync to s3 only during certain scheduled time in day.
Further enhancement can be for providing source directory, or regex of file pattern to sync. | non_process | geo replication to glacier backend or other rest api endpoint instead of syncing to remote gluster or any directory through posix fops can gluster s geo rep pick only frozen files ie files which are older than nn hours days and sync to only during certain scheduled time in day further enhancement can be for providing source directory or regex of file pattern to sync | 0 |
765,200 | 26,837,074,138 | IssuesEvent | 2023-02-02 20:27:09 | ME-ICA/tedana | https://api.github.com/repos/ME-ICA/tedana | opened | Divergence between older MEICA component selection and tedana | priority: medium effort: medium impact: medium breaking change | ### Summary
As part of the review for #924, we noticed a divergence between the older MEICA decision tree and the one tedana has been using for years. There are several edge-case criteria which MEICA applies to the same subject of remaining components while tedana only applies to components that weren't changed by earlier edge case criteria.
### Additional Detail
In the MEICA code, all the steps we now label I009-I012 were done on the same subset of components.
https://github.com/ME-ICA/me-ica/blob/53191a7e8838788acf837fdf7cb3026efadf49ac/meica.libs/select_model.py#L344-L360
`ncl` (now `unclf`) is assigned before this section and isn't changed until the end of the section. In tedana Main, after each classification step, we see:
[tedana/tedana/selection/tedica.py](https://github.com/ME-ICA/tedana/blob/f00cb25152142611b8e289ab59c7b5b8ab6eaf08/tedana/selection/tedica.py#L375)
Line 375 in [f00cb25](https://github.com/ME-ICA/tedana/commit/f00cb25152142611b8e289ab59c7b5b8ab6eaf08)
unclf = np.setdiff1d(unclf, midk)
In practice, what this means is that the components rejected with I009 or I010 are NOT candidates to be ignored in I011 and I012 while they can change from rejected (actually midk) to ignored in the original MEICA. This vaguely makes sense since the goal of these final steps is to retain a few things that might have otherwise been rejected. That said, I011 and I012 operate on all remaining unclassified components so they can and do change components from accepted to ignored. All four of these steps are classifying based on different thresholds for variance explained & d_table_score_scrub so it would take a bit more thinking to see how they all interact.
### Next Steps
- This will be a breaking change, but it is not necessarily a bug since both the MEICA and tedana decision trees are plausible, but different. The MEICA decision tree will end up accepting more components, so it's a bit more conservative.
- Since this has been how tedana has functioned for years, several of us informally decided NOT to fix this divergence until the massive refactor in #756 is merged. Then the current tedana `main` will give the same results as tedana `main` has been giving for years and the newer modularized decision tree will match those results. (Additionally it will be nice to change this once rather than making and tests separate fixes both for `main` and #756)
- After #756 is merged, we can create a new PR to have the decision tree line up with the older MEICA steps. | 1.0 | Divergence between older MEICA component selection and tedana - ### Summary
As part of the review for #924, we noticed a divergence between the older MEICA decision tree and the one tedana has been using for years. There are several edge-case criteria which MEICA applies to the same subject of remaining components while tedana only applies to components that weren't changed by earlier edge case criteria.
### Additional Detail
In the MEICA code, all the steps we now label I009-I012 were done on the same subset of components.
https://github.com/ME-ICA/me-ica/blob/53191a7e8838788acf837fdf7cb3026efadf49ac/meica.libs/select_model.py#L344-L360
`ncl` (now `unclf`) is assigned before this section and isn't changed until the end of the section. In tedana Main, after each classification step, we see:
[tedana/tedana/selection/tedica.py](https://github.com/ME-ICA/tedana/blob/f00cb25152142611b8e289ab59c7b5b8ab6eaf08/tedana/selection/tedica.py#L375)
Line 375 in [f00cb25](https://github.com/ME-ICA/tedana/commit/f00cb25152142611b8e289ab59c7b5b8ab6eaf08)
unclf = np.setdiff1d(unclf, midk)
In practice, what this means is that the components rejected with I009 or I010 are NOT candidates to be ignored in I011 and I012 while they can change from rejected (actually midk) to ignored in the original MEICA. This vaguely makes sense since the goal of these final steps is to retain a few things that might have otherwise been rejected. That said, I011 and I012 operate on all remaining unclassified components so they can and do change components from accepted to ignored. All four of these steps are classifying based on different thresholds for variance explained & d_table_score_scrub so it would take a bit more thinking to see how they all interact.
### Next Steps
- This will be a breaking change, but it is not necessarily a bug since both the MEICA and tedana decision trees are plausible, but different. The MEICA decision tree will end up accepting more components, so it's a bit more conservative.
- Since this has been how tedana has functioned for years, several of us informally decided NOT to fix this divergence until the massive refactor in #756 is merged. Then the current tedana `main` will give the same results as tedana `main` has been giving for years and the newer modularized decision tree will match those results. (Additionally it will be nice to change this once rather than making and tests separate fixes both for `main` and #756)
- After #756 is merged, we can create a new PR to have the decision tree line up with the older MEICA steps. | non_process | divergence between older meica component selection and tedana summary as part of the review for we noticed a divergence between the older meica decision tree and the one tedana has been using for years there are several edge case criteria which meica applies to the same subject of remaining components while tedana only applies to components that weren t changed by earlier edge case criteria additional detail in the meica code all the steps we now label were done on the same subset of components ncl now unclf is assigned before this section and isn t changed until the end of the section in tedana main after each classification step we see line in unclf np unclf midk in practice what this means is that the components rejected with or are not candidates to be ignored in and while they can change from rejected actually midk to ignored in the original meica this vaguely makes sense since the goal of these final steps is to retain a few things that might have otherwise been rejected that said and operate on all remaining unclassified components so they can and do change components from accepted to ignored all four of these steps are classifying based on different thresholds for variance explained d table score scrub so it would take a bit more thinking to see how they all interact next steps this will be a breaking change but it is not necessarily a bug since both the meica and tedana decision trees are plausible but different the meica decision tree will end up accepting more components so it s a bit more conservative since this has been how tedana has functioned for years several of us informally decided not to fix this divergence until the massive refactor in is merged then the current tedana main will give the same results as tedana main has been giving for years and the newer modularized decision tree will match those results additionally it 
will be nice to change this once rather than making and tests separate fixes both for main and after is merged we can create a new pr to have the decision tree line up with the older meica steps | 0 |
46,635 | 7,272,822,759 | IssuesEvent | 2018-02-21 01:06:11 | uswds/uswds-site | https://api.github.com/repos/uswds/uswds-site | closed | Docs are confusing about USWDS's dependency on jQuery | [Priority] Minor [Skill] DevOps [Skill] Documentation [Skill] Front end [Type] Enhancement | `developers.md` currently states the following:
> ### Using npm
>
> Note: Using npm to install the Standards will include jQuery version `2.2.0`. Please make sure that you’re not including any other version of jQuery on your page.
Based on the USWDS's [`package.json`](https://github.com/18F/web-design-standards/blob/develop/package.json), it seems jQuery is only mentioned in `devDependencies`. So at the very least, installing USWDS with `npm install --production` seems like it would not install jQuery.
I am unclear on whether this documentation is actually out of date or not, though. I found #71 and https://github.com/18F/web-design-standards/issues/1351, which are confusing.
It makes sense to me that our *tests* use jQuery while USWDS itself does not; however, it seems inconvenient to then require all developers who use a node-based toolchain to bundle their assets to then be required to use jQuery 2.2.0 in their project. It's also unclear as to whether they'll only have to use this version of jQuery during development vs. in production.
In short, as someone new to USWDS, I find this jQuery business *extremely* confusing. If this documentation is not out of date, I think we should expand it to include a bit of rationale on *why* jQuery is included when using npm but *not* when including the pre-built `uswds.js`.
| 1.0 | Docs are confusing about USWDS's dependency on jQuery - `developers.md` currently states the following:
> ### Using npm
>
> Note: Using npm to install the Standards will include jQuery version `2.2.0`. Please make sure that you’re not including any other version of jQuery on your page.
Based on the USWDS's [`package.json`](https://github.com/18F/web-design-standards/blob/develop/package.json), it seems jQuery is only mentioned in `devDependencies`. So at the very least, installing USWDS with `npm install --production` seems like it would not install jQuery.
I am unclear on whether this documentation is actually out of date or not, though. I found #71 and https://github.com/18F/web-design-standards/issues/1351, which are confusing.
It makes sense to me that our *tests* use jQuery while USWDS itself does not; however, it seems inconvenient to then require all developers who use a node-based toolchain to bundle their assets to then be required to use jQuery 2.2.0 in their project. It's also unclear as to whether they'll only have to use this version of jQuery during development vs. in production.
In short, as someone new to USWDS, I find this jQuery business *extremely* confusing. If this documentation is not out of date, I think we should expand it to include a bit of rationale on *why* jQuery is included when using npm but *not* when including the pre-built `uswds.js`.
| non_process | docs are confusing about uswds s dependency on jquery developers md currently states the following using npm note using npm to install the standards will include jquery version please make sure that you’re not including any other version of jquery on your page based on the uswds s it seems jquery is only mentioned in devdependencies so at the very least installing uswds with npm install production seems like it would not install jquery i am unclear on whether this documentation is actually out of date or not though i found and which are confusing it makes sense to me that our tests use jquery while uswds itself does not however it seems inconvenient to then require all developers who use a node based toolchain to bundle their assets to then be required to use jquery in their project it s also unclear as to whether they ll only have to use this version of jquery during development vs in production in short as someone new to uswds i find this jquery business extremely confusing if this documentation is not out of date i think we should expand it to include a bit of rationale on why jquery is included when using npm but not when including the pre built uswds js | 0 |
810,004 | 30,221,269,243 | IssuesEvent | 2023-07-05 19:36:45 | aws/s2n-quic | https://api.github.com/repos/aws/s2n-quic | closed | Make message buffer sizes configurable | priority/medium size/small | ### Problem:
In #1792, we chose 8Mb as the hard-coded default allocation size for the ring buffers.
### Solution:
This value should ideally be configurable through the IO provider builder. | 1.0 | Make message buffer sizes configurable - ### Problem:
In #1792, we chose 8Mb as the hard-coded default allocation size for the ring buffers.
### Solution:
This value should ideally be configurable through the IO provider builder. | non_process | make message buffer sizes configurable problem in we chose as the hard coded default allocation size for the ring buffers solution this value should ideally be configurable through the io provider builder | 0 |
311,843 | 23,406,790,341 | IssuesEvent | 2022-08-12 13:35:44 | KinsonDigital/CASL | https://api.github.com/repos/KinsonDigital/CASL | closed | 🚧Add new issue templates to project | workflow high priority preview 🔗has dependencies 📝documentation/product | ### I have done the items below . . .
- [X] I have updated the title by replacing the '**_<title_**>' section.
### Description
Add the the following issue templates to the project.
NOTE: These are from the [Velaptor](https://github.com/KinsonDigital/Velaptor) project
**Templates To Add:**
- `qa-testing-template.yml`
- `release-todo-issue-template.yml`
- `research-issue-template.yml`
Also add a `config.yml` file with the following content:
```yml
blank_issues_enabled: false
```
### Acceptance Criteria
**This issue is finished when:**
- [ ] `qa-testing-template.yml`
- [ ] `release-todo-issue-template.yml`
- [ ] `research-issue-template.yml`
- [ ] Update preview feature PR template to match Velaptor
- [ ] `config.yml` file added
### ToDo Items
- [ ] Draft pull request created and linked to this issue
- [X] Priority label added to issue (**_low priority_**, **_medium priority_**, or **_high priority_**)
- [X] Issue linked to the proper project
- [X] Issue linked to proper milestone
- [ ] Unit tests have been written and/or adjusted for code additions or changes
- [ ] All unit tests pass
### Issue Dependencies
- KinsonDigital/Velaptor#296
- KinsonDigital/Velaptor#295
- KinsonDigital/Velaptor#298
### Related Work
_No response_ | 1.0 | 🚧Add new issue templates to project - ### I have done the items below . . .
- [X] I have updated the title by replacing the '**_<title_**>' section.
### Description
Add the the following issue templates to the project.
NOTE: These are from the [Velaptor](https://github.com/KinsonDigital/Velaptor) project
**Templates To Add:**
- `qa-testing-template.yml`
- `release-todo-issue-template.yml`
- `research-issue-template.yml`
Also add a `config.yml` file with the following content:
```yml
blank_issues_enabled: false
```
### Acceptance Criteria
**This issue is finished when:**
- [ ] `qa-testing-template.yml`
- [ ] `release-todo-issue-template.yml`
- [ ] `research-issue-template.yml`
- [ ] Update preview feature PR template to match Velaptor
- [ ] `config.yml` file added
### ToDo Items
- [ ] Draft pull request created and linked to this issue
- [X] Priority label added to issue (**_low priority_**, **_medium priority_**, or **_high priority_**)
- [X] Issue linked to the proper project
- [X] Issue linked to proper milestone
- [ ] Unit tests have been written and/or adjusted for code additions or changes
- [ ] All unit tests pass
### Issue Dependencies
- KinsonDigital/Velaptor#296
- KinsonDigital/Velaptor#295
- KinsonDigital/Velaptor#298
### Related Work
_No response_ | non_process | 🚧add new issue templates to project i have done the items below i have updated the title by replacing the section description add the the following issue templates to the project note these are from the project templates to add qa testing template yml release todo issue template yml research issue template yml also add a config yml file with the following content yml blank issues enabled false acceptance criteria this issue is finished when qa testing template yml release todo issue template yml research issue template yml update preview feature pr template to match velaptor config yml file added todo items draft pull request created and linked to this issue priority label added to issue low priority medium priority or high priority issue linked to the proper project issue linked to proper milestone unit tests have been written and or adjusted for code additions or changes all unit tests pass issue dependencies kinsondigital velaptor kinsondigital velaptor kinsondigital velaptor related work no response | 0 |
393,123 | 11,610,533,515 | IssuesEvent | 2020-02-26 03:22:44 | inspireui/support | https://api.github.com/repos/inspireui/support | closed | [Fluxstore - Woo] - Stock quantity problem "This is important" | Feature Request ⚡️ FluxStore priority !!! question ❓ | My issue:
### This is important
The application offers the opportunity to place orders regardless of the number of stocks. The problem for this store. Because we cannot sell more than the number of stocks.
The app needs to check the stock count and not be able to order more than this number.
I made some arrangements for this.
I pulled the information from the store page via API.
_The information I have taken;_
**1- Stock quantity "stok"
2- Maximum number of orders "maxQuantitiy"**
(Note: The maximum number of orders is the number we apply to the products with the woocommerce plugin "[WooCommerce Max Quantity.](https://woocommerce.com/products/minmax-quantities/)" A user can only buy "x" from one product at a time. If we do not enter any numbers, he can buy 60.)
Just what needs to be done;
**"Using this information;**
**1) The user can only purchase a product for the amount of stock.**
**2) User can get maximum "maximum_allowed_quantity" from a product."**

### File;
**source\lib\models**
[product dart - MODELS.zip](https://github.com/inspireui/support/files/4156695/product.dart.-.MODELS.zip)
| 1.0 | [Fluxstore - Woo] - Stock quantity problem "This is important" - My issue:
### This is important
The application offers the opportunity to place orders regardless of the number of stocks. The problem for this store. Because we cannot sell more than the number of stocks.
The app needs to check the stock count and not be able to order more than this number.
I made some arrangements for this.
I pulled the information from the store page via API.
_The information I have taken;_
**1- Stock quantity "stok"
2- Maximum number of orders "maxQuantitiy"**
(Note: The maximum number of orders is the number we apply to the products with the woocommerce plugin "[WooCommerce Max Quantity.](https://woocommerce.com/products/minmax-quantities/)" A user can only buy "x" from one product at a time. If we do not enter any numbers, he can buy 60.)
Just what needs to be done;
**"Using this information;**
**1) The user can only purchase a product for the amount of stock.**
**2) User can get maximum "maximum_allowed_quantity" from a product."**

### File;
**source\lib\models**
[product dart - MODELS.zip](https://github.com/inspireui/support/files/4156695/product.dart.-.MODELS.zip)
| non_process | stock quantity problem this is important my issue this is important the application offers the opportunity to place orders regardless of the number of stocks the problem for this store because we cannot sell more than the number of stocks the app needs to check the stock count and not be able to order more than this number i made some arrangements for this i pulled the information from the store page via api the information i have taken stock quantity stok maximum number of orders maxquantitiy note the maximum number of orders is the number we apply to the products with the woocommerce plugin a user can only buy x from one product at a time if we do not enter any numbers he can buy just what needs to be done using this information the user can only purchase a product for the amount of stock user can get maximum maximum allowed quantity from a product file source lib models | 0 |
321,383 | 27,524,306,038 | IssuesEvent | 2023-03-06 16:57:56 | dubinsky/scalajs-gradle | https://api.github.com/repos/dubinsky/scalajs-gradle | closed | Test Output | testing | - [x] add newlines to and suppress weird test output (visible in Idea and Gradle report's "output");
- [x] bring the current log level from the task into `TestClassProcessor`
- [x] hook ScalaJSLogger into TestResultProcessor when running tests - log output should be captured already...
- [x] framework summaries need a test id (AttachParent does not attach to the output :(); rootSuiteId is a string, and output message gets stuck... | 1.0 | Test Output - - [x] add newlines to and suppress weird test output (visible in Idea and Gradle report's "output");
- [x] bring the current log level from the task into `TestClassProcessor`
- [x] hook ScalaJSLogger into TestResultProcessor when running tests - log output should be captured already...
- [x] framework summaries need a test id (AttachParent does not attach to the output :(); rootSuiteId is a string, and output message gets stuck... | non_process | test output add newlines to and suppress weird test output visible in idea and gradle report s output bring the current log level from the task into testclassprocessor hook scalajslogger into testresultprocessor when running tests log output should be captured already framework summaries need a test id attachparent does not attach to the output rootsuiteid is a string and output message gets stuck | 0 |
1,970 | 4,793,239,089 | IssuesEvent | 2016-10-31 17:37:20 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | opened | Bring APIs requiring SecureString to netstandard2.0 | area-System.Diagnostics.Process netstandard2.0 | The following APIs need to be brought back once SecureString is available:
```csharp
public class Process : Component
{
public static Process Start(string fileName, string userName, SecureString password, string domain);
public static Process Start(string fileName, string arguments, string userName, SecureString password, string domain);
}
public sealed class ProcessStartInfo
{
public SecureString Password { get; set; }
}
``` | 1.0 | Bring APIs requiring SecureString to netstandard2.0 - The following APIs need to be brought back once SecureString is available:
```csharp
public class Process : Component
{
public static Process Start(string fileName, string userName, SecureString password, string domain);
public static Process Start(string fileName, string arguments, string userName, SecureString password, string domain);
}
public sealed class ProcessStartInfo
{
public SecureString Password { get; set; }
}
``` | process | bring apis requiring securestring to the following apis need to be brought back once securestring is available csharp public class process component public static process start string filename string username securestring password string domain public static process start string filename string arguments string username securestring password string domain public sealed class processstartinfo public securestring password get set | 1 |