Dataset columns:

| column | dtype | min / classes | max |
|---|---|---|---|
| Unnamed: 0 | int64 | 0 | 832k |
| id | float64 | 2.49B | 32.1B |
| type | stringclasses | 1 value | |
| created_at | stringlengths | 19 | 19 |
| repo | stringlengths | 7 | 112 |
| repo_url | stringlengths | 36 | 141 |
| action | stringclasses | 3 values | |
| title | stringlengths | 2 | 665 |
| labels | stringlengths | 4 | 554 |
| body | stringlengths | 3 | 235k |
| index | stringclasses | 6 values | |
| text_combine | stringlengths | 96 | 235k |
| label | stringclasses | 2 values | |
| text | stringlengths | 96 | 196k |
| binary_label | int64 | 0 | 1 |
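Given the columns above, the relationship between `label` and `binary_label` can be sketched as follows. This is a minimal illustration, not the dataset's actual loading code: the sample rows and the derivation of `binary_label` from `label` are assumptions consistent with the records shown below.

```python
import pandas as pd

# Two tiny rows mirroring a few of the schema's columns (values taken from
# the sample records in this dump).
rows = [
    {"id": 12152123546, "type": "IssuesEvent", "action": "opened",
     "repo": "BCDevOps/developer-experience", "label": "infrastructure"},
    {"id": 24823839071, "type": "IssuesEvent", "action": "closed",
     "repo": "gravitational/gravity", "label": "non_infrastructure"},
]
df = pd.DataFrame(rows)

# binary_label appears to encode label: "infrastructure" -> 1, otherwise 0.
df["binary_label"] = (df["label"] == "infrastructure").astype(int)
print(df[["repo", "label", "binary_label"]])
```

With this mapping, the `binary_label` column is fully redundant with `label` and exists only as a numeric target for training.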
Unnamed: 0: 16,847
id: 12,152,123,546
type: IssuesEvent
created_at: 2020-04-24 21:27:34
repo: BCDevOps/developer-experience
repo_url: https://api.github.com/repos/BCDevOps/developer-experience
action: opened
title: Create issues in GitHub for change control process
labels: Infrastructure action-required
body: https://trello.com/c/4szHFMr9/42-create-issues-in-github-for-change-control-process I'm thinking it would be handy to create issues or PR's in GitHub for things like OCP upgrades. This way we can add some approvers, we can add any code or steps that are going to happen for review and visibility.
index: 1.0
text_combine: Create issues in GitHub for change control process - https://trello.com/c/4szHFMr9/42-create-issues-in-github-for-change-control-process I'm thinking it would be handy to create issues or PR's in GitHub for things like OCP upgrades. This way we can add some approvers, we can add any code or steps that are going to happen for review and visibility.
label: infrastructure
text: create issues in github for change control process i m thinking it would be handy to create issues or pr s in github for things like ocp upgrades this way we can add some approvers we can add any code or steps that are going to happen for review and visibility
binary_label: 1
Unnamed: 0: 721,321
id: 24,823,839,071
type: IssuesEvent
created_at: 2022-10-25 18:46:52
repo: gravitational/gravity
repo_url: https://api.github.com/repos/gravitational/gravity
action: closed
title: Upgrade agents resiliency improvements
labels: kind/enhancement port/5.5 port/6.1 priority/1 port/7.0
body: There are a few issues with the upgrade agents. ## 1. Agents do not restart after crash/reboot Right now our upgrade (or any other operation) agents are installed as one-shot systemd units and will not restart in case of failure, node reboot, etc. which leads to a bad upgrade experience because users in general don't know how to redeploy the agents. We should make them into regular services so they're more resilient and automatically restart upon failures, reboots and do not require manual intervention during upgrades. Success: - [x] Upgrade (and others) agents automatically restart after reboots/crashes and upgrade operation can continue without needing to redeploy the agents manually. A couple of things to watch out for: * What happens if the "lead" upgrade agent (the one that runs automatic upgrade) restarts automatically? Is it going to attempt to resume the automatic upgrade operation? * Agents should be made to be shut down and disabled properly after each operation to make sure they're not restarting automatically when they're not needed. ## 2. Make sure `gravity upgrade --resume` (or `plan resume`) check that all agents are running When resuming an operation (e.g. after a node reboot), the resume command should check if all required agents are running and alert the user to redeploy any of them if some aren't. Success: - [x] The resume command bails out with a clear error message if any of the upgrade agents are not running. ## 3. It is possible to deploy agents of incorrect version IIRC the current behavior is that deployed upgrade agents are of the same version as the "gravity" binary that performs the deployment. So if a user executes "gravity agent deploy" with gravity 5.0.0, the same gravity 5.0.0 will be used to run agents. This may be causing incompatibility issues because during upgrade the agents should be running the new gravity version, but users sometimes manage to deploy the old ones. Success: - [ ] Make sure upgrade operation always deploys agents of correct version (the version we're upgrading to). We may need to make sure here that "gravity upgrade" can only be launched using the "new" binary (we should have this check in place, but double-check). - [ ] Make sure "gravity agent deploy" launched separately deploys agents of correct version if there's an ongoing upgrade operation, regardless of which "gravity" binary was used. - [ ] Make sure versions of deployed agents are printed in the console, just for better visibility. ## 4. Agents deployment CLI improvements These are more of a nice-to-haves, but it would be helpful to update our "gravity agent" CLI with following: - [x] Have an ability to deploy an agent <strike>on a particular node and</strike> of a particular version. For example, `gravity agent deploy node-2 --version=7.0.1`. - [x] Have a command to check the deployed agents and their statuses. For example: ``` $ gravity agent status Node | IP | Status | Version ----------------------------------------- node-1 | 192.168.1.1 | Deployed | 7.0.1 node-2 | 192.168.1.2 | Offline | ```
index: 1.0
text_combine: Upgrade agents resiliency improvements - There are a few issues with the upgrade agents. ## 1. Agents do not restart after crash/reboot Right now our upgrade (or any other operation) agents are installed as one-shot systemd units and will not restart in case of failure, node reboot, etc. which leads to a bad upgrade experience because users in general don't know how to redeploy the agents. We should make them into regular services so they're more resilient and automatically restart upon failures, reboots and do not require manual intervention during upgrades. Success: - [x] Upgrade (and others) agents automatically restart after reboots/crashes and upgrade operation can continue without needing to redeploy the agents manually. A couple of things to watch out for: * What happens if the "lead" upgrade agent (the one that runs automatic upgrade) restarts automatically? Is it going to attempt to resume the automatic upgrade operation? * Agents should be made to be shut down and disabled properly after each operation to make sure they're not restarting automatically when they're not needed. ## 2. Make sure `gravity upgrade --resume` (or `plan resume`) check that all agents are running When resuming an operation (e.g. after a node reboot), the resume command should check if all required agents are running and alert the user to redeploy any of them if some aren't. Success: - [x] The resume command bails out with a clear error message if any of the upgrade agents are not running. ## 3. It is possible to deploy agents of incorrect version IIRC the current behavior is that deployed upgrade agents are of the same version as the "gravity" binary that performs the deployment. So if a user executes "gravity agent deploy" with gravity 5.0.0, the same gravity 5.0.0 will be used to run agents. This may be causing incompatibility issues because during upgrade the agents should be running the new gravity version, but users sometimes manage to deploy the old ones. Success: - [ ] Make sure upgrade operation always deploys agents of correct version (the version we're upgrading to). We may need to make sure here that "gravity upgrade" can only be launched using the "new" binary (we should have this check in place, but double-check). - [ ] Make sure "gravity agent deploy" launched separately deploys agents of correct version if there's an ongoing upgrade operation, regardless of which "gravity" binary was used. - [ ] Make sure versions of deployed agents are printed in the console, just for better visibility. ## 4. Agents deployment CLI improvements These are more of a nice-to-haves, but it would be helpful to update our "gravity agent" CLI with following: - [x] Have an ability to deploy an agent <strike>on a particular node and</strike> of a particular version. For example, `gravity agent deploy node-2 --version=7.0.1`. - [x] Have a command to check the deployed agents and their statuses. For example: ``` $ gravity agent status Node | IP | Status | Version ----------------------------------------- node-1 | 192.168.1.1 | Deployed | 7.0.1 node-2 | 192.168.1.2 | Offline | ```
label: non_infrastructure
text: upgrade agents resiliency improvements there are a few issues with the upgrade agents agents do not restart after crash reboot right now our upgrade or any other operation agents are installed as one shot systemd units and will not restart in case of failure node reboot etc which leads to a bad upgrade experience because users in general don t know how to redeploy the agents we should make them into regular services so they re more resilient and automatically restart upon failures reboots and do not require manual intervention during upgrades success upgrade and others agents automatically restart after reboots crashes and upgrade operation can continue without needing to redeploy the agents manually a couple of things to watch out for what happens if the lead upgrade agent the one that runs automatic upgrade restarts automatically is it going to attempt to resume the automatic upgrade operation agents should be made to be shut down and disabled properly after each operation to make sure they re not restarting automatically when they re not needed make sure gravity upgrade resume or plan resume check that all agents are running when resuming an operation e g after a node reboot the resume command should check if all required agents are running and alert the user to redeploy any of them if some aren t success the resume command bails out with a clear error message if any of the upgrade agents are not running it is possible to deploy agents of incorrect version iirc the current behavior is that deployed upgrade agents are of the same version as the gravity binary that performs the deployment so if a user executes gravity agent deploy with gravity the same gravity will be used to run agents this may be causing incompatibility issues because during upgrade the agents should be running the new gravity version but users sometimes manage to deploy the old ones success make sure upgrade operation always deploys agents of correct version the version we re upgrading to we may need to make sure here that gravity upgrade can only be launched using the new binary we should have this check in place but double check make sure gravity agent deploy launched separately deploys agents of correct version if there s an ongoing upgrade operation regardless of which gravity binary was used make sure versions of deployed agents are printed in the console just for better visibility agents deployment cli improvements these are more of a nice to haves but it would be helpful to update our gravity agent cli with following have an ability to deploy an agent on a particular node and of a particular version for example gravity agent deploy node version have a command to check the deployed agents and their statuses for example gravity agent status node ip status version node deployed node offline
binary_label: 0
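The first point in the record above contrasts one-shot systemd units (which exit and are never restarted) with supervised services. A minimal sketch of the two unit styles follows; the unit name and binary path are invented for illustration, and this is not the project's actual unit file:

```ini
# upgrade-agent.service — one-shot variant (illustrative): runs once and is
# NOT restarted after a crash or node reboot.
[Unit]
Description=Upgrade agent (one-shot)

[Service]
Type=oneshot
ExecStart=/usr/local/bin/upgrade-agent run

# upgrade-agent.service — supervised variant (illustrative): systemd keeps
# the process running and restarts it on failure, which matches the
# resiliency behavior the issue asks for.
[Unit]
Description=Upgrade agent (supervised)

[Service]
Type=simple
ExecStart=/usr/local/bin/upgrade-agent run
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

The two variants are shown together for comparison; in practice each would live in its own unit file.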
Unnamed: 0: 4,909
id: 5,330,014,032
type: IssuesEvent
created_at: 2017-02-15 16:08:34
repo: dotnet/corefx
repo_url: https://api.github.com/repos/dotnet/corefx
action: closed
title: [arm32/Linux] Remove workaround to prevent arm package restore
labels: area-Infrastructure arm32 bug os-linux
body: In https://github.com/dotnet/corefx/pull/15518, we restore packages for x64 (instead of arm) since Arm host packages are not yet available (CoreFX has a cyclic dependency on upstack repo for this). Once Core-Setup publishes packages for Host for Linux Arm32, https://github.com/dotnet/corefx/pull/15518 should be reverted. CC @weshaggard @ericstj @hqueue @jyoungyun @hseok-oh
index: 1.0
text_combine: [arm32/Linux] Remove workaround to prevent arm package restore - In https://github.com/dotnet/corefx/pull/15518, we restore packages for x64 (instead of arm) since Arm host packages are not yet available (CoreFX has a cyclic dependency on upstack repo for this). Once Core-Setup publishes packages for Host for Linux Arm32, https://github.com/dotnet/corefx/pull/15518 should be reverted. CC @weshaggard @ericstj @hqueue @jyoungyun @hseok-oh
label: infrastructure
text: remove workaround to prevent arm package restore in we restore packages for instead of arm since arm host packages are not yet available corefx has a cyclic dependency on upstack repo for this once core setup publishes packages for host for linux should be reverted cc weshaggard ericstj hqueue jyoungyun hseok oh
binary_label: 1
Unnamed: 0: 4,365
id: 5,008,766,236
type: IssuesEvent
created_at: 2016-12-12 20:26:35
repo: eslint/eslint
repo_url: https://api.github.com/repos/eslint/eslint
action: closed
title: Proposal: semver-minor and semver-patch labels
labels: evaluating infrastructure
body: At the moment, we use the `enhancement` and `feature` labels for any semver-minor change. However, according to our [semver policy](https://github.com/eslint/eslint/blob/master/README.md#semantic-versioning-policy), not all semver-minor changes are enhancements; for example, fixing a false negative in a rule is a semver-minor change. We currently put the `enhancement` label on false-negative bugfixes to clarify that they're semver-minor, but I think this is mildly confusing; a false-negative bugfix isn't any more of an "enhancement" than a false-positive bugfix is. IMO, we're overloading the "enhancement" label to mean "semver-minor". As a solution, I think we should add `semver-minor` and `semver-patch` labels. This would make it easier to see what can be merged for a patch release, and we would be able to use the `enhancement` label only for actual enhancements, such as new rule options. --- I think our use `Fix:` and `Update:` in commit messages has the same problem; false-negative fixes use the `Update:` label to indicate that they're semver-minor. For example, some commit messages look like this: ``` Update: fix false negative of foo ``` But I think messages like this are easier to understand: ``` Fix: false negative of foo ``` The second commit message makes it easier for anyone skimming the changelog for new features (since they can just look at all commits with `Update:` and `New:` prefixes). At the moment, we use the `Update:` prefixe for bugfixes as well, which is a bit misleading to the reader. (However, I know that these prefixes are used by the release tool to determine what kind of release to create, so I don't have a good solution for this problem aside from using separate prefixes for different types of bugfixes.)
index: 1.0
text_combine: Proposal: semver-minor and semver-patch labels - At the moment, we use the `enhancement` and `feature` labels for any semver-minor change. However, according to our [semver policy](https://github.com/eslint/eslint/blob/master/README.md#semantic-versioning-policy), not all semver-minor changes are enhancements; for example, fixing a false negative in a rule is a semver-minor change. We currently put the `enhancement` label on false-negative bugfixes to clarify that they're semver-minor, but I think this is mildly confusing; a false-negative bugfix isn't any more of an "enhancement" than a false-positive bugfix is. IMO, we're overloading the "enhancement" label to mean "semver-minor". As a solution, I think we should add `semver-minor` and `semver-patch` labels. This would make it easier to see what can be merged for a patch release, and we would be able to use the `enhancement` label only for actual enhancements, such as new rule options. --- I think our use `Fix:` and `Update:` in commit messages has the same problem; false-negative fixes use the `Update:` label to indicate that they're semver-minor. For example, some commit messages look like this: ``` Update: fix false negative of foo ``` But I think messages like this are easier to understand: ``` Fix: false negative of foo ``` The second commit message makes it easier for anyone skimming the changelog for new features (since they can just look at all commits with `Update:` and `New:` prefixes). At the moment, we use the `Update:` prefixe for bugfixes as well, which is a bit misleading to the reader. (However, I know that these prefixes are used by the release tool to determine what kind of release to create, so I don't have a good solution for this problem aside from using separate prefixes for different types of bugfixes.)
label: infrastructure
text: proposal semver minor and semver patch labels at the moment we use the enhancement and feature labels for any semver minor change however according to our not all semver minor changes are enhancements for example fixing a false negative in a rule is a semver minor change we currently put the enhancement label on false negative bugfixes to clarify that they re semver minor but i think this is mildly confusing a false negative bugfix isn t any more of an enhancement than a false positive bugfix is imo we re overloading the enhancement label to mean semver minor as a solution i think we should add semver minor and semver patch labels this would make it easier to see what can be merged for a patch release and we would be able to use the enhancement label only for actual enhancements such as new rule options i think our use fix and update in commit messages has the same problem false negative fixes use the update label to indicate that they re semver minor for example some commit messages look like this update fix false negative of foo but i think messages like this are easier to understand fix false negative of foo the second commit message makes it easier for anyone skimming the changelog for new features since they can just look at all commits with update and new prefixes at the moment we use the update prefixe for bugfixes as well which is a bit misleading to the reader however i know that these prefixes are used by the release tool to determine what kind of release to create so i don t have a good solution for this problem aside from using separate prefixes for different types of bugfixes
binary_label: 1
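The issue above describes commit-message prefixes driving the release type. The mapping can be sketched as a tiny classifier; the function name is invented, and the exact prefix set (including `Breaking:`) is an assumption based on the conventions the issue describes, not a verified copy of ESLint's release tooling.

```python
# Illustrative sketch: map a conventional commit-message prefix to the
# semver release type it implies. Prefixes and function name are assumptions.
def release_type(commit_message: str) -> str:
    prefix, sep, _ = commit_message.partition(":")
    if not sep:
        return "none"           # no prefix at all
    if prefix in ("Update", "New"):
        return "minor"
    if prefix == "Fix":
        return "patch"
    if prefix == "Breaking":
        return "major"
    return "none"

print(release_type("Update: fix false negative of foo"))  # minor
print(release_type("Fix: false negative of foo"))          # patch
```

Under this mapping, the issue's complaint is that `Update:` is doing double duty: it marks both genuine enhancements and semver-minor bugfixes.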
Unnamed: 0: 219,173
id: 7,333,741,803
type: IssuesEvent
created_at: 2018-03-05 20:23:41
repo: leeensminger/DelDOT-NPDES-Field-Tool
repo_url: https://api.github.com/repos/leeensminger/DelDOT-NPDES-Field-Tool
action: opened
title: Duplicate Temporary Dummy Records
labels: bug - high priority
body: Around April 2017 there were email discussions about duplicate records being created in the database, particularly with temporary dummy nodes. The issue was not given a top priority since the temporary dummy points aren't used as frequently as other structure types, such as inlets but the issue still seems to persist. When there are duplicate records, field staff have difficulties deleting the point. ![image](https://user-images.githubusercontent.com/16919898/36997785-c1f736fa-2088-11e8-83a9-9369d6a2e95e.png)
index: 1.0
text_combine: Duplicate Temporary Dummy Records - Around April 2017 there were email discussions about duplicate records being created in the database, particularly with temporary dummy nodes. The issue was not given a top priority since the temporary dummy points aren't used as frequently as other structure types, such as inlets but the issue still seems to persist. When there are duplicate records, field staff have difficulties deleting the point. ![image](https://user-images.githubusercontent.com/16919898/36997785-c1f736fa-2088-11e8-83a9-9369d6a2e95e.png)
label: non_infrastructure
text: duplicate temporary dummy records around april there were email discussions about duplicate records being created in the database particularly with temporary dummy nodes the issue was not given a top priority since the temporary dummy points aren t used as frequently as other structure types such as inlets but the issue still seems to persist when there are duplicate records field staff have difficulties deleting the point
binary_label: 0
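Duplicate database records like those described above are typically detected with a GROUP BY / HAVING query. A self-contained sketch follows; the table name, columns, and sample rows are entirely hypothetical, since the issue does not describe the actual schema.

```python
import sqlite3

# Hypothetical schema: node records with a type, some accidentally duplicated.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE structures (node_id TEXT, structure_type TEXT)")
conn.executemany(
    "INSERT INTO structures VALUES (?, ?)",
    [("N1", "temporary dummy"), ("N1", "temporary dummy"), ("N2", "inlet")],
)

# node_ids appearing more than once are duplicate records.
dupes = conn.execute(
    "SELECT node_id, COUNT(*) FROM structures "
    "GROUP BY node_id HAVING COUNT(*) > 1"
).fetchall()
print(dupes)  # [('N1', 2)]
```

The same query shape also makes cleanup straightforward: once duplicates are identified, all but one row per `node_id` can be deleted.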
Unnamed: 0: 89,221
id: 11,205,327,856
type: IssuesEvent
created_at: 2020-01-05 13:32:16
repo: ZeffonWu/algo
repo_url: https://api.github.com/repos/ZeffonWu/algo
action: opened
title: 219. Contains Duplicate II | Zeffon's blog | Algorithm Blog
labels: Gitalk https://algo.zeffon.design/posts/e2647b52.html
body: https://algo.zeffon.design/posts/e2647b52.html The problem: given an integer array and an integer k, determine whether there exist two distinct indices i and j in the array such that nums[i] = nums[j] and the absolute difference of i and j is at most k. Examples. Example 1: Input: nums = [1,2,3,1], k = 3 Output: true. Example 2: Input: nums = [1,0,1,1], k = 1 Output: true. Example 3: Input: nums =
index: 1.0
text_combine: 219. Contains Duplicate II | Zeffon's blog | Algorithm Blog - https://algo.zeffon.design/posts/e2647b52.html The problem: given an integer array and an integer k, determine whether there exist two distinct indices i and j in the array such that nums[i] = nums[j] and the absolute difference of i and j is at most k. Examples. Example 1: Input: nums = [1,2,3,1], k = 3 Output: true. Example 2: Input: nums = [1,0,1,1], k = 1 Output: true. Example 3: Input: nums =
label: non_infrastructure
text: contains duplicate ii zeffon s blog algorithm blog the problem given an integer array and an integer k determine whether there exist two distinct indices i and j such that nums nums and the absolute difference of i and j is at most k example nums k true example nums k true example nums
binary_label: 0
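The record above quotes LeetCode 219 ("Contains Duplicate II"). A standard sliding-window solution can be sketched as follows; this is a generic textbook approach, not code taken from the blog post in the record.

```python
# Sliding-window solution: keep a set of the values seen in the last k
# indices; a repeat inside the window means two equal values at most k apart.
def contains_nearby_duplicate(nums, k):
    window = set()
    for i, v in enumerate(nums):
        if v in window:
            return True
        window.add(v)
        if len(window) > k:            # shrink window back to size k
            window.discard(nums[i - k])
    return False

print(contains_nearby_duplicate([1, 2, 3, 1], 3))  # True
print(contains_nearby_duplicate([1, 0, 1, 1], 1))  # True
```

This runs in O(n) time and O(min(n, k)) space, since the set never holds more than k values.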
Unnamed: 0: 24,970
id: 17,971,448,940
type: IssuesEvent
created_at: 2021-09-14 02:53:44
repo: math-blocks/math-blocks
repo_url: https://api.github.com/repos/math-blocks/math-blocks
action: closed
title: Build and publish packages to npm
labels: infrastructure
body: As part of this we'll want to set up `lerna` to handle bumping of versions. I think we'll want to go with independent versioning instead of having the versions proceed in lockstep.
index: 1.0
text_combine: Build and publish packages to npm - As part of this we'll want to set up `lerna` to handle bumping of versions. I think we'll want to go with independent versioning instead of having the versions proceed in lockstep.
label: infrastructure
text: build and publish packages to npm as part of this we ll want to set up lerna to handle bumping of versions i think we ll want to go with independent versioning instead of having the versions proceed in lockstep
binary_label: 1
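The independent-versioning mode mentioned in the record above is configured in Lerna by setting the top-level `version` field to the string `"independent"` in `lerna.json`. A minimal sketch (the `packages` glob is an assumption about the repo layout):

```json
{
  "version": "independent",
  "packages": ["packages/*"]
}
```

With this setting, `lerna version` prompts for a version bump per changed package instead of bumping every package in lockstep.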
Unnamed: 0: 219,909
id: 24,539,550,444
type: IssuesEvent
created_at: 2022-10-12 01:32:49
repo: ilan-WS/m3
repo_url: https://api.github.com/repos/ilan-WS/m3
action: closed
title: CVE-2021-33502 (High) detected in normalize-url-1.9.1.tgz - autoclosed
labels: security vulnerability
body: ## CVE-2021-33502 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>normalize-url-1.9.1.tgz</b></p></summary> <p>Normalize a URL</p> <p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz</a></p> <p>Path to dependency file: /src/ctl/ui/package.json</p> <p>Path to vulnerable library: /src/ctl/ui/node_modules/normalize-url</p> <p> Dependency Hierarchy: - react-scripts-1.0.10.tgz (Root Library) - css-loader-0.28.4.tgz - cssnano-3.10.0.tgz - postcss-normalize-url-3.0.8.tgz - :x: **normalize-url-1.9.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ilan-WS/m3/commit/a62d2ead44380e2c1668bbbf026d5385b98d56ec">a62d2ead44380e2c1668bbbf026d5385b98d56ec</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs. <p>Publish Date: 2021-05-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p> <p>Release Date: 2021-05-24</p> <p>Fix Resolution (normalize-url): 4.5.1</p> <p>Direct dependency fix Resolution (react-scripts): 5.0.0</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
index: True
text_combine: CVE-2021-33502 (High) detected in normalize-url-1.9.1.tgz - autoclosed - ## CVE-2021-33502 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>normalize-url-1.9.1.tgz</b></p></summary> <p>Normalize a URL</p> <p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz</a></p> <p>Path to dependency file: /src/ctl/ui/package.json</p> <p>Path to vulnerable library: /src/ctl/ui/node_modules/normalize-url</p> <p> Dependency Hierarchy: - react-scripts-1.0.10.tgz (Root Library) - css-loader-0.28.4.tgz - cssnano-3.10.0.tgz - postcss-normalize-url-3.0.8.tgz - :x: **normalize-url-1.9.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ilan-WS/m3/commit/a62d2ead44380e2c1668bbbf026d5385b98d56ec">a62d2ead44380e2c1668bbbf026d5385b98d56ec</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs. <p>Publish Date: 2021-05-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p> <p>Release Date: 2021-05-24</p> <p>Fix Resolution (normalize-url): 4.5.1</p> <p>Direct dependency fix Resolution (react-scripts): 5.0.0</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
label: non_infrastructure
text: cve high detected in normalize url tgz autoclosed cve high severity vulnerability vulnerable library normalize url tgz normalize a url library home page a href path to dependency file src ctl ui package json path to vulnerable library src ctl ui node modules normalize url dependency hierarchy react scripts tgz root library css loader tgz cssnano tgz postcss normalize url tgz x normalize url tgz vulnerable library found in head commit a href found in base branch master vulnerability details the normalize url package before x before and x before for node js has a redos regular expression denial of service issue because it has exponential performance for data urls publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution normalize url direct dependency fix resolution react scripts check this box to open an automated fix pr
binary_label: 0
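The advisory quoted above names three affected ranges for normalize-url: before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1. That range logic can be sketched as a small helper; the function is illustrative, not a real audit tool, and assumes plain `major.minor.patch` version strings.

```python
# Sketch: is a normalize-url version inside the affected ranges stated in
# the advisory? (Illustrative helper, not part of any real tooling.)
def is_vulnerable(version: str) -> bool:
    v = tuple(int(p) for p in version.split("."))
    major = v[0]
    if major <= 4:
        return v < (4, 5, 1)
    if major == 5:
        return v < (5, 3, 1)
    if major == 6:
        return v < (6, 0, 1)
    return False  # 7.x and later are outside the stated ranges

print(is_vulnerable("1.9.1"))  # True  — the version flagged in this record
print(is_vulnerable("4.5.1"))  # False — the fixed release
```

In practice one would run `npm audit` or upgrade the root dependency (react-scripts 5.0.0 per the advisory) rather than hand-checking versions.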
Unnamed: 0: 63,250
id: 6,835,479,614
type: IssuesEvent
created_at: 2017-11-10 01:25:01
repo: theprimegamer/breakthrough-conflict-online
repo_url: https://api.github.com/repos/theprimegamer/breakthrough-conflict-online
action: closed
title: Test CacheHelper
labels: test
body: This spec will contain tests that include: - Setting and getting a cache entry with a value of a string (String) - Setting and getting a cache entry with a value of a int (Int32) - Setting and getting a cache entry with a value of a Dictionary - Setting a cache entry twice and getting the latest value instead of the first value. - Getting a cache entry that does not exist and expecting a null *Note:* Testing equality of objects may be more difficult than just using ==
index: 1.0
text_combine: Test CacheHelper - This spec will contain tests that include: - Setting and getting a cache entry with a value of a string (String) - Setting and getting a cache entry with a value of a int (Int32) - Setting and getting a cache entry with a value of a Dictionary - Setting a cache entry twice and getting the latest value instead of the first value. - Getting a cache entry that does not exist and expecting a null *Note:* Testing equality of objects may be more difficult than just using ==
label: non_infrastructure
text: test cachehelper this spec will contain tests that include setting and getting a cache entry with a value of a string string setting and getting a cache entry with a value of a int setting and getting a cache entry with a value of a dictionary setting a cache entry twice and getting the latest value instead of the first value getting a cache entry that does not exist and expecting a null note testing equality of objects may be more difficult than just using
binary_label: 0
Unnamed: 0: 128,054
id: 18,024,967,898
type: IssuesEvent
created_at: 2021-09-17 02:25:59
repo: mcaj-git/nextcloud-dev
repo_url: https://api.github.com/repos/mcaj-git/nextcloud-dev
action: opened
title: CVE-2021-3777 (Medium) detected in tmpl-1.0.4.tgz
labels: security vulnerability
body: ## CVE-2021-3777 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tmpl-1.0.4.tgz</b></p></summary> <p>JavaScript micro templates.</p> <p>Library home page: <a href="https://registry.npmjs.org/tmpl/-/tmpl-1.0.4.tgz">https://registry.npmjs.org/tmpl/-/tmpl-1.0.4.tgz</a></p> <p> Dependency Hierarchy: - jest-26.6.3.tgz (Root Library) - core-26.6.3.tgz - jest-haste-map-26.6.2.tgz - walker-1.0.7.tgz - makeerror-1.0.11.tgz - :x: **tmpl-1.0.4.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> nodejs-tmpl is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3777>CVE-2021-3777</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/daaku/nodejs-tmpl/releases/tag/v1.0.5">https://github.com/daaku/nodejs-tmpl/releases/tag/v1.0.5</a></p> <p>Release Date: 2021-09-15</p> <p>Fix Resolution: tmpl - 1.0.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine: CVE-2021-3777 (Medium) detected in tmpl-1.0.4.tgz - ## CVE-2021-3777 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tmpl-1.0.4.tgz</b></p></summary> <p>JavaScript micro templates.</p> <p>Library home page: <a href="https://registry.npmjs.org/tmpl/-/tmpl-1.0.4.tgz">https://registry.npmjs.org/tmpl/-/tmpl-1.0.4.tgz</a></p> <p> Dependency Hierarchy: - jest-26.6.3.tgz (Root Library) - core-26.6.3.tgz - jest-haste-map-26.6.2.tgz - walker-1.0.7.tgz - makeerror-1.0.11.tgz - :x: **tmpl-1.0.4.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> nodejs-tmpl is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3777>CVE-2021-3777</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/daaku/nodejs-tmpl/releases/tag/v1.0.5">https://github.com/daaku/nodejs-tmpl/releases/tag/v1.0.5</a></p> <p>Release Date: 2021-09-15</p> <p>Fix Resolution: tmpl - 1.0.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_infrastructure
text: cve medium detected in tmpl tgz cve medium severity vulnerability vulnerable library tmpl tgz javascript micro templates library home page a href dependency hierarchy jest tgz root library core tgz jest haste map tgz walker tgz makeerror tgz x tmpl tgz vulnerable library found in base branch master vulnerability details nodejs tmpl is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tmpl step up your open source security game with whitesource
binary_label: 0
771,277
27,077,801,901
IssuesEvent
2023-02-14 11:53:25
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
esp32 invalid flash dependencies
bug priority: low area: Flash platform: ESP32
**Describe the bug** The following was seen in a MCUmgr test which enables flash for the `icev_wireless` board: ``` FAILED: zephyr/zephyr_pre0.elf zephyr/zephyr_pre0.map : && ccache /opt/toolchains/zephyr-sdk-0.15.2/riscv64-zephyr-elf/bin/riscv64-zephyr-elf-gcc -gdwarf-4 zephyr/CMakeFiles/zephyr_pre0.dir/misc/empty_file.c.obj -o zephyr/zephyr_pre0.elf -fuse-ld=bfd -Wl,-T zephyr/linker_zephyr_pre0.cmd -Wl,-Map=/__w/zephyr/zephyr/twister-out/icev_wireless/tests/subsys/mgmt/mcumgr/fs_mgmt_hash_supported/mgmt.mcumgr.fs.mgmt.hash.supported.crc32/zephyr/zephyr_pre0.map -Wl,--whole-archive app/libapp.a zephyr/libzephyr.a zephyr/arch/common/libarch__common.a zephyr/arch/arch/riscv/core/libarch__riscv__core.a zephyr/lib/libc/minimal/liblib__libc__minimal.a zephyr/subsys/fs/libsubsys__fs.a zephyr/subsys/mgmt/mcumgr/mgmt/libsubsys__mgmt__mcumgr__mgmt.a zephyr/subsys/mgmt/mcumgr/smp/libsubsys__mgmt__mcumgr__smp.a zephyr/subsys/mgmt/mcumgr/util/libsubsys__mgmt__mcumgr__util.a zephyr/subsys/mgmt/mcumgr/grp/fs_mgmt/libsubsys__mgmt__mcumgr__grp__fs_mgmt.a zephyr/subsys/mgmt/mcumgr/transport/libsubsys__mgmt__mcumgr__transport.a zephyr/subsys/net/libsubsys__net.a zephyr/subsys/testsuite/ztest/libsubsys__testsuite__ztest.a zephyr/drivers/interrupt_controller/libdrivers__interrupt_controller.a zephyr/drivers/clock_control/libdrivers__clock_control.a zephyr/drivers/console/libdrivers__console.a zephyr/drivers/gpio/libdrivers__gpio.a zephyr/drivers/flash/libdrivers__flash.a zephyr/drivers/serial/libdrivers__serial.a zephyr/drivers/timer/libdrivers__timer.a zephyr/drivers/pinctrl/libdrivers__pinctrl.a modules/zcbor/libmodules__zcbor.a -Wl,--no-whole-archive zephyr/kernel/libkernel.a zephyr/CMakeFiles/offsets.dir/./arch/riscv/core/offsets/offsets.c.obj -L"/opt/toolchains/zephyr-sdk-0.15.2/riscv64-zephyr-elf/bin/../lib/gcc/riscv64-zephyr-elf/12.1.0" -L/__w/zephyr/zephyr/twister-out/icev_wireless/tests/subsys/mgmt/mcumgr/fs_mgmt_hash_supported/mgmt.mcumgr.fs.mgmt.hash.supported.crc32/zephyr 
-lgcc zephyr/arch/common/libisr_tables.a -lgcc -no-pie -mabi=ilp32 -march=rv32ima_zicsr_zifencei -Wl,--gc-sections -Wl,--build-id=none -Wl,--sort-common=descending -Wl,--sort-section=alignment -Wl,-u,_OffsetAbsSyms -Wl,-u,_ConfigAbsSyms -nostdlib -static -Wl,-X -Wl,-N -Wl,--orphan-handling=warn -Wl,--fatal-warnings -T/__w/zephyr/modules/hal/espressif/zephyr/esp32c3/src/linker/esp32c3.rom.alias.ld -T/__w/zephyr/modules/hal/espressif/zephyr/esp32c3/../../components/esp_rom/esp32c3/ld/esp32c3.rom.ld -T/__w/zephyr/modules/hal/espressif/zephyr/esp32c3/../../components/esp_rom/esp32c3/ld/esp32c3.rom.eco3.ld -T/__w/zephyr/modules/hal/espressif/zephyr/esp32c3/../../components/esp_rom/esp32c3/ld/esp32c3.rom.api.ld -T/__w/zephyr/modules/hal/espressif/zephyr/esp32c3/../../components/esp_rom/esp32c3/ld/esp32c3.rom.libgcc.ld -T/__w/zephyr/modules/hal/espressif/zephyr/esp32c3/../../components/soc/esp32c3/ld/esp32c3.peripherals.ld && cd /__w/zephyr/zephyr/twister-out/icev_wireless/tests/subsys/mgmt/mcumgr/fs_mgmt_hash_supported/mgmt.mcumgr.fs.mgmt.hash.supported.crc32/zephyr && /usr/local/bin/cmake -E echo /opt/toolchains/zephyr-sdk-0.15.2/riscv64-zephyr-elf/bin/../lib/gcc/riscv64-zephyr-elf/12.1.0/../../../../riscv64-zephyr-elf/bin/ld.bfd: zephyr/libzephyr.a(flash_mmap.c.obj): in function `spi_flash_mmap_pages': /__w/zephyr/modules/hal/espressif/components/spi_flash/flash_mmap.c:185: undefined reference to `k_malloc' /opt/toolchains/zephyr-sdk-0.15.2/riscv64-zephyr-elf/bin/../lib/gcc/riscv64-zephyr-elf/12.1.0/../../../../riscv64-zephyr-elf/bin/ld.bfd: zephyr/libzephyr.a(flash_mmap.c.obj): in function `spi_flash_mmap': /__w/zephyr/modules/hal/espressif/components/spi_flash/flash_mmap.c:151: undefined reference to `k_malloc' collect2: error: ld returned 1 exit status ninja: build stopped: subcommand failed. 
``` The Kconfig dependencies for this are therefore invalid **Expected behavior** Dependencies to be properly set **Impact** Showstopper, holding up and blocking completely unrelated PRs
1.0
esp32 invalid flash dependencies - **Describe the bug** The following was seen in a MCUmgr test which enables flash for the `icev_wireless` board: ``` FAILED: zephyr/zephyr_pre0.elf zephyr/zephyr_pre0.map : && ccache /opt/toolchains/zephyr-sdk-0.15.2/riscv64-zephyr-elf/bin/riscv64-zephyr-elf-gcc -gdwarf-4 zephyr/CMakeFiles/zephyr_pre0.dir/misc/empty_file.c.obj -o zephyr/zephyr_pre0.elf -fuse-ld=bfd -Wl,-T zephyr/linker_zephyr_pre0.cmd -Wl,-Map=/__w/zephyr/zephyr/twister-out/icev_wireless/tests/subsys/mgmt/mcumgr/fs_mgmt_hash_supported/mgmt.mcumgr.fs.mgmt.hash.supported.crc32/zephyr/zephyr_pre0.map -Wl,--whole-archive app/libapp.a zephyr/libzephyr.a zephyr/arch/common/libarch__common.a zephyr/arch/arch/riscv/core/libarch__riscv__core.a zephyr/lib/libc/minimal/liblib__libc__minimal.a zephyr/subsys/fs/libsubsys__fs.a zephyr/subsys/mgmt/mcumgr/mgmt/libsubsys__mgmt__mcumgr__mgmt.a zephyr/subsys/mgmt/mcumgr/smp/libsubsys__mgmt__mcumgr__smp.a zephyr/subsys/mgmt/mcumgr/util/libsubsys__mgmt__mcumgr__util.a zephyr/subsys/mgmt/mcumgr/grp/fs_mgmt/libsubsys__mgmt__mcumgr__grp__fs_mgmt.a zephyr/subsys/mgmt/mcumgr/transport/libsubsys__mgmt__mcumgr__transport.a zephyr/subsys/net/libsubsys__net.a zephyr/subsys/testsuite/ztest/libsubsys__testsuite__ztest.a zephyr/drivers/interrupt_controller/libdrivers__interrupt_controller.a zephyr/drivers/clock_control/libdrivers__clock_control.a zephyr/drivers/console/libdrivers__console.a zephyr/drivers/gpio/libdrivers__gpio.a zephyr/drivers/flash/libdrivers__flash.a zephyr/drivers/serial/libdrivers__serial.a zephyr/drivers/timer/libdrivers__timer.a zephyr/drivers/pinctrl/libdrivers__pinctrl.a modules/zcbor/libmodules__zcbor.a -Wl,--no-whole-archive zephyr/kernel/libkernel.a zephyr/CMakeFiles/offsets.dir/./arch/riscv/core/offsets/offsets.c.obj -L"/opt/toolchains/zephyr-sdk-0.15.2/riscv64-zephyr-elf/bin/../lib/gcc/riscv64-zephyr-elf/12.1.0" 
-L/__w/zephyr/zephyr/twister-out/icev_wireless/tests/subsys/mgmt/mcumgr/fs_mgmt_hash_supported/mgmt.mcumgr.fs.mgmt.hash.supported.crc32/zephyr -lgcc zephyr/arch/common/libisr_tables.a -lgcc -no-pie -mabi=ilp32 -march=rv32ima_zicsr_zifencei -Wl,--gc-sections -Wl,--build-id=none -Wl,--sort-common=descending -Wl,--sort-section=alignment -Wl,-u,_OffsetAbsSyms -Wl,-u,_ConfigAbsSyms -nostdlib -static -Wl,-X -Wl,-N -Wl,--orphan-handling=warn -Wl,--fatal-warnings -T/__w/zephyr/modules/hal/espressif/zephyr/esp32c3/src/linker/esp32c3.rom.alias.ld -T/__w/zephyr/modules/hal/espressif/zephyr/esp32c3/../../components/esp_rom/esp32c3/ld/esp32c3.rom.ld -T/__w/zephyr/modules/hal/espressif/zephyr/esp32c3/../../components/esp_rom/esp32c3/ld/esp32c3.rom.eco3.ld -T/__w/zephyr/modules/hal/espressif/zephyr/esp32c3/../../components/esp_rom/esp32c3/ld/esp32c3.rom.api.ld -T/__w/zephyr/modules/hal/espressif/zephyr/esp32c3/../../components/esp_rom/esp32c3/ld/esp32c3.rom.libgcc.ld -T/__w/zephyr/modules/hal/espressif/zephyr/esp32c3/../../components/soc/esp32c3/ld/esp32c3.peripherals.ld && cd /__w/zephyr/zephyr/twister-out/icev_wireless/tests/subsys/mgmt/mcumgr/fs_mgmt_hash_supported/mgmt.mcumgr.fs.mgmt.hash.supported.crc32/zephyr && /usr/local/bin/cmake -E echo /opt/toolchains/zephyr-sdk-0.15.2/riscv64-zephyr-elf/bin/../lib/gcc/riscv64-zephyr-elf/12.1.0/../../../../riscv64-zephyr-elf/bin/ld.bfd: zephyr/libzephyr.a(flash_mmap.c.obj): in function `spi_flash_mmap_pages': /__w/zephyr/modules/hal/espressif/components/spi_flash/flash_mmap.c:185: undefined reference to `k_malloc' /opt/toolchains/zephyr-sdk-0.15.2/riscv64-zephyr-elf/bin/../lib/gcc/riscv64-zephyr-elf/12.1.0/../../../../riscv64-zephyr-elf/bin/ld.bfd: zephyr/libzephyr.a(flash_mmap.c.obj): in function `spi_flash_mmap': /__w/zephyr/modules/hal/espressif/components/spi_flash/flash_mmap.c:151: undefined reference to `k_malloc' collect2: error: ld returned 1 exit status ninja: build stopped: subcommand failed. 
``` The Kconfig dependencies for this are therefore invalid **Expected behavior** Dependencies to be properly set **Impact** Showstopper, holding up and blocking completely unrelated PRs
non_infrastructure
invalid flash dependencies describe the bug the following was seen in a mcumgr test which enables flash for the icev wireless board failed zephyr zephyr elf zephyr zephyr map ccache opt toolchains zephyr sdk zephyr elf bin zephyr elf gcc gdwarf zephyr cmakefiles zephyr dir misc empty file c obj o zephyr zephyr elf fuse ld bfd wl t zephyr linker zephyr cmd wl map w zephyr zephyr twister out icev wireless tests subsys mgmt mcumgr fs mgmt hash supported mgmt mcumgr fs mgmt hash supported zephyr zephyr map wl whole archive app libapp a zephyr libzephyr a zephyr arch common libarch common a zephyr arch arch riscv core libarch riscv core a zephyr lib libc minimal liblib libc minimal a zephyr subsys fs libsubsys fs a zephyr subsys mgmt mcumgr mgmt libsubsys mgmt mcumgr mgmt a zephyr subsys mgmt mcumgr smp libsubsys mgmt mcumgr smp a zephyr subsys mgmt mcumgr util libsubsys mgmt mcumgr util a zephyr subsys mgmt mcumgr grp fs mgmt libsubsys mgmt mcumgr grp fs mgmt a zephyr subsys mgmt mcumgr transport libsubsys mgmt mcumgr transport a zephyr subsys net libsubsys net a zephyr subsys testsuite ztest libsubsys testsuite ztest a zephyr drivers interrupt controller libdrivers interrupt controller a zephyr drivers clock control libdrivers clock control a zephyr drivers console libdrivers console a zephyr drivers gpio libdrivers gpio a zephyr drivers flash libdrivers flash a zephyr drivers serial libdrivers serial a zephyr drivers timer libdrivers timer a zephyr drivers pinctrl libdrivers pinctrl a modules zcbor libmodules zcbor a wl no whole archive zephyr kernel libkernel a zephyr cmakefiles offsets dir arch riscv core offsets offsets c obj l opt toolchains zephyr sdk zephyr elf bin lib gcc zephyr elf l w zephyr zephyr twister out icev wireless tests subsys mgmt mcumgr fs mgmt hash supported mgmt mcumgr fs mgmt hash supported zephyr lgcc zephyr arch common libisr tables a lgcc no pie mabi march zicsr zifencei wl gc sections wl build id none wl sort common descending wl sort 
section alignment wl u offsetabssyms wl u configabssyms nostdlib static wl x wl n wl orphan handling warn wl fatal warnings t w zephyr modules hal espressif zephyr src linker rom alias ld t w zephyr modules hal espressif zephyr components esp rom ld rom ld t w zephyr modules hal espressif zephyr components esp rom ld rom ld t w zephyr modules hal espressif zephyr components esp rom ld rom api ld t w zephyr modules hal espressif zephyr components esp rom ld rom libgcc ld t w zephyr modules hal espressif zephyr components soc ld peripherals ld cd w zephyr zephyr twister out icev wireless tests subsys mgmt mcumgr fs mgmt hash supported mgmt mcumgr fs mgmt hash supported zephyr usr local bin cmake e echo opt toolchains zephyr sdk zephyr elf bin lib gcc zephyr elf zephyr elf bin ld bfd zephyr libzephyr a flash mmap c obj in function spi flash mmap pages w zephyr modules hal espressif components spi flash flash mmap c undefined reference to k malloc opt toolchains zephyr sdk zephyr elf bin lib gcc zephyr elf zephyr elf bin ld bfd zephyr libzephyr a flash mmap c obj in function spi flash mmap w zephyr modules hal espressif components spi flash flash mmap c undefined reference to k malloc error ld returned exit status ninja build stopped subcommand failed the kconfig dependencies for this are therefore invalid expected behavior dependencies to be properly set impact showstopper holding up and blocking completely unrelated prs
0
33,224
27,320,183,343
IssuesEvent
2023-02-24 19:05:05
aneoconsulting/ArmoniK
https://api.github.com/repos/aneoconsulting/ArmoniK
closed
Update EKS from 1.24 to 1.25
Enhancement Infrastructure
Initial release of Kubernetes version 1.25 for Amazon EKS since February 21, 2023 [see here](https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html)
1.0
Update EKS from 1.24 to 1.25 - Initial release of Kubernetes version 1.25 for Amazon EKS since February 21, 2023 [see here](https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html)
infrastructure
update eks from to initial release of kubernetes version for amazon eks since february
1
24,084
5,027,114,750
IssuesEvent
2016-12-15 14:42:43
IBM-Swift/Kitura
https://api.github.com/repos/IBM-Swift/Kitura
closed
KituraSession: Additional documentation request for `secret` parameter
documentation
I presume this string should contain random characters. It would be useful to know length bounds on this parameter if any. E.g., would 256 characters be sufficient? Or can any length be used and additional length gives additional security?
1.0
KituraSession: Additional documentation request for `secret` parameter - I presume this string should contain random characters. It would be useful to know length bounds on this parameter if any. E.g., would 256 characters be sufficient? Or can any length be used and additional length gives additional security?
non_infrastructure
kiturasession additional documentation request for secret parameter i presume this string should contain random characters it would be useful to know length bounds on this parameter if any e g would characters be sufficient or can any length be used and additional length gives additional security
0
162
2,545,653,998
IssuesEvent
2015-01-29 18:31:17
rust-lang/rust
https://api.github.com/repos/rust-lang/rust
closed
vim indent file broken with multiline conditions
A-infrastructure
```rust fn f() { if x && y { } } ```
1.0
vim indent file broken with multiline conditions - ```rust fn f() { if x && y { } } ```
infrastructure
vim indent file broken with multiline conditions rust fn f if x y
1
3,380
2,668,896,405
IssuesEvent
2015-03-23 12:21:31
hydroshare/hydroshare
https://api.github.com/repos/hydroshare/hydroshare
closed
Error when referenced time series is created without valid parameters
bug ready for testing
Error when referenced time series is created without valid parameters. For example, if the user clicks "Create Resource" in the RefTs resource creation page without a valid URL, and/or Site or Variable (when using SOAP) it throws an ugly error
1.0
Error when referenced time series is created without valid parameters - Error when referenced time series is created without valid parameters. For example, if the user clicks "Create Resource" in the RefTs resource creation page without a valid URL, and/or Site or Variable (when using SOAP) it throws an ugly error
non_infrastructure
error when referenced time series is created without valid parameters error when referenced time series is created without valid parameters for example if the user clicks create resource in the refts resource creation page without a valid url and or site or variable when using soap it throws an ugly error
0
132,257
28,128,141,000
IssuesEvent
2023-03-31 19:43:26
creativecommons/cc-resource-archive
https://api.github.com/repos/creativecommons/cc-resource-archive
closed
[Feature] Adding Footer
🟩 priority: low ⛔️ status: discarded 🚦 status: awaiting triage ✨ goal: improvement 💻 aspect: code
## Problem There is No footer Section ## Description Adding footer for showing contact information about Creative Commons. ## Implementation ![image](https://user-images.githubusercontent.com/65482186/226956610-c19e440b-63f0-4155-a087-581a6c6c00c7.png) <!-- Replace the [ ] with [x] to check the box. --> - [x] I would be interested in implementing this feature.
1.0
[Feature] Adding Footer - ## Problem There is No footer Section ## Description Adding footer for showing contact information about Creative Commons. ## Implementation ![image](https://user-images.githubusercontent.com/65482186/226956610-c19e440b-63f0-4155-a087-581a6c6c00c7.png) <!-- Replace the [ ] with [x] to check the box. --> - [x] I would be interested in implementing this feature.
non_infrastructure
adding footer problem there is no footer section description adding footer for showing contact information about creative commons implementation i would be interested in implementing this feature
0
85,160
16,610,375,438
IssuesEvent
2021-06-02 10:43:17
Regalis11/Barotrauma
https://api.github.com/repos/Regalis11/Barotrauma
closed
Pirate sub spawn in abyss at the start of the round
Bug Code
- [x] I have searched the issue tracker to check if the issue has already been reported. **Description** Select pirate mission, start the round and enemy humpback will spawn in the abyss and start sinking **Steps To Reproduce** as above **Version** 0.1400.2.0 **Additional information** [Save.zip](https://github.com/Regalis11/Barotrauma/files/6576099/Save.zip)
1.0
Pirate sub spawn in abyss at the start of the round - - [x] I have searched the issue tracker to check if the issue has already been reported. **Description** Select pirate mission, start the round and enemy humpback will spawn in the abyss and start sinking **Steps To Reproduce** as above **Version** 0.1400.2.0 **Additional information** [Save.zip](https://github.com/Regalis11/Barotrauma/files/6576099/Save.zip)
non_infrastructure
pirate sub spawn in abyss at the start of the round i have searched the issue tracker to check if the issue has already been reported description select pirate mission start the round and enemy humpback will spawn in the abyss and start sinking steps to reproduce as above version additional information
0
18,806
13,111,095,129
IssuesEvent
2020-08-04 22:06:05
intel/dffml
https://api.github.com/repos/intel/dffml
reopened
ci: TensorFlow 2.3.0 doesn't work with numpy 1.19.1
bug kind/infrastructure
``` 2020-07-27T19:25:00.0718202Z pkg_resources.ContextualVersionConflict: (numpy 1.19.1 (/usr/share/miniconda/lib/python3.7/site-packages), Requirement.parse('numpy<1.19.0,>=1.16.0'), {'tensorflow'}) ```
1.0
ci: TensorFlow 2.3.0 doesn't work with numpy 1.19.1 - ``` 2020-07-27T19:25:00.0718202Z pkg_resources.ContextualVersionConflict: (numpy 1.19.1 (/usr/share/miniconda/lib/python3.7/site-packages), Requirement.parse('numpy<1.19.0,>=1.16.0'), {'tensorflow'}) ```
infrastructure
ci tensorflow doesn t work with numpy pkg resources contextualversionconflict numpy usr share miniconda lib site packages requirement parse numpy tensorflow
1
152,100
12,086,284,072
IssuesEvent
2020-04-18 09:09:02
keep-network/keep-ecdsa
https://api.github.com/repos/keep-network/keep-ecdsa
closed
Abort signing when signing timeout has passed
⛓chain 🐛 bug 📟 client 🕵️ system tests
A client doesn't check if signing process has timed out which leaves it in a retry loop. Currently when client submits the signature it receives the message: > 12:22:23.900 ERROR keep-ecdsa: failed to submit signature for keep [0xe7FB2CB75209F8FE246641125De06E6A44bAb6b6]: [got error [VM Exception while processing transaction: revert Signing timeout elapsed] while resolving original error [failed to estimate gas needed: VM Exception while processing transaction: revert Signing timeout elapsed]]; will retry after 1 minute node.go:359 The client should use keep's `hasSigningTimedOut` function to determine if the signing process should be abandoned in retry loops or when starting the signing process.
1.0
Abort signing when signing timeout has passed - A client doesn't check if signing process has timed out which leaves it in a retry loop. Currently when client submits the signature it receives the message: > 12:22:23.900 ERROR keep-ecdsa: failed to submit signature for keep [0xe7FB2CB75209F8FE246641125De06E6A44bAb6b6]: [got error [VM Exception while processing transaction: revert Signing timeout elapsed] while resolving original error [failed to estimate gas needed: VM Exception while processing transaction: revert Signing timeout elapsed]]; will retry after 1 minute node.go:359 The client should use keep's `hasSigningTimedOut` function to determine if the signing process should be abandoned in retry loops or when starting the signing process.
non_infrastructure
abort signing when signing timeout has passed a client doesn t check if signing process has timed out which leaves it in a retry loop currently when client submits the signature it receives the message error keep ecdsa failed to submit signature for keep while resolving original error will retry after minute node go the client should use keep s hassigningtimedout function to determine if the signing process should be abandoned in retry loops or when starting the signing process
0
298,432
25,827,400,798
IssuesEvent
2022-12-12 13:55:06
AppFlowy-IO/AppFlowy
https://api.github.com/repos/AppFlowy-IO/AppFlowy
closed
[Test]: Improve Code Coverage for render files
help wanted good first issue for devs tests editor hacktoberfest
### Description Create more tests to cover files under the "render" folder An example to learn how to create a test: https://github.com/AppFlowy-IO/AppFlowy/discussions/1107 ### Impact Code Coverage metric helps in determining the performance and quality aspects of any software. Help us ensure AppFlowy's quality! ### Additional Context _No response_
1.0
[Test]: Improve Code Coverage for render files - ### Description Create more tests to cover files under the "render" folder An example to learn how to create a test: https://github.com/AppFlowy-IO/AppFlowy/discussions/1107 ### Impact Code Coverage metric helps in determining the performance and quality aspects of any software. Help us ensure AppFlowy's quality! ### Additional Context _No response_
non_infrastructure
improve code coverage for render files description create more tests to cover files under the render folder an example to learn how to create a test impact code coverage metric helps in determining the performance and quality aspects of any software help us ensure appflowy s quality additional context no response
0
24,266
17,063,420,840
IssuesEvent
2021-07-07 02:20:23
intellij-rust/intellij-rust
https://api.github.com/repos/intellij-rust/intellij-rust
closed
Fix pretty printers tests for non zero structs
subsystem::debugger subsystem::infrastructure
I've temporarily disabled pretty printers tests for non zero structs in #7430 because they started failing with the new 212 EAP. As I can see, everything is fine in production, only tests are affected It would be great to investigate why tests start failing and fix them
1.0
Fix pretty printers tests for non zero structs - I've temporarily disabled pretty printers tests for non zero structs in #7430 because they started failing with the new 212 EAP. As I can see, everything is fine in production, only tests are affected It would be great to investigate why tests start failing and fix them
infrastructure
fix pretty printers tests for non zero structs i ve temporarily disabled pretty printers tests for non zero structs in because they started failing with the new eap as i can see everything is fine in production only tests are affected it would be great to investigate why tests start failing and fix them
1
32,852
27,039,694,966
IssuesEvent
2023-02-13 03:28:05
oven-sh/bun
https://api.github.com/repos/oven-sh/bun
closed
Repo setup
infrastructure
- [x] Add a license - [x] Add contribution guidelines - [x] Set up linting (eg eslint config) - [x] Set up templates for issues (bug, feature request, etc)
1.0
Repo setup - - [x] Add a license - [x] Add contribution guidelines - [x] Set up linting (eg eslint config) - [x] Set up templates for issues (bug, feature request, etc)
infrastructure
repo setup add a license add contribution guidelines set up linting eg eslint config set up templates for issues bug feature request etc
1
157,462
12,374,117,890
IssuesEvent
2020-05-19 00:33:16
rancher/dashboard
https://api.github.com/repos/rancher/dashboard
closed
Workload Environment Variable bugs
[zube]: To Test area/workloads kind/bug
- [x] Source in Environment Variables needs to not have the namespace. It should only show the source from the namespace that is selected from the Namespace field at top - [x] Prefix or Alias field in Environment Variables loses focus after every character I enter ![image](https://user-images.githubusercontent.com/11514927/78930909-bd9bbe00-7a59-11ea-8faa-57d10da54e9e.png)
1.0
Workload Environment Variable bugs - - [x] Source in Environment Variables needs to not have the namespace. It should only show the source from the namespace that is selected from the Namespace field at top - [x] Prefix or Alias field in Environment Variables loses focus after every character I enter ![image](https://user-images.githubusercontent.com/11514927/78930909-bd9bbe00-7a59-11ea-8faa-57d10da54e9e.png)
non_infrastructure
workload environment variable bugs source in environment variables needs to not have the namespace it should only show the source from the namespace that is selected from the namespace field at top prefix or alias field in environment variables loses focus after every character i enter
0
31,596
25,916,579,342
IssuesEvent
2022-12-15 17:52:33
cal-itp/benefits
https://api.github.com/repos/cal-itp/benefits
closed
determine how we want to manage secrets
security deliverable infrastructure
We currently have secrets managed a number of ways, without a source of truth, which has led to confusion about what the values are intended to be in a given environment, missing secrets when deploying new features, and longer recovery time when we've needed to replace them. ## Acceptance Criteria - [x] We have a plan to manage secrets in a consistent and well-understood way ## Additional context <!-- Include information about scope, time frame, person who requested the task, links to resources --> ## What is the definition of done? - [ ] We have a documented process for how to update secrets - [x] We have consensus on said process
1.0
determine how we want to manage secrets - We currently have secrets managed a number of ways, without a source of truth, which has led to confusion about what the values are intended to be in a given environment, missing secrets when deploying new features, and longer recovery time when we've needed to replace them. ## Acceptance Criteria - [x] We have a plan to manage secrets in a consistent and well-understood way ## Additional context <!-- Include information about scope, time frame, person who requested the task, links to resources --> ## What is the definition of done? - [ ] We have a documented process for how to update secrets - [x] We have consensus on said process
infrastructure
determine how we want to manage secrets we currently have secrets managed a number of ways without a source of truth which has led to confusion about what the values are intended to be in a given environment missing secrets when deploying new features and longer recovery time when we ve needed to replace them acceptance criteria we have a plan to manage secrets in a consistent and well understood way additional context what is the definition of done we have a documented process for how to update secrets we have consensus on said process
1
653,746
21,625,733,164
IssuesEvent
2022-05-05 01:41:52
comp195/senior-project-spring-2022-blueprint-automation-tool
https://api.github.com/repos/comp195/senior-project-spring-2022-blueprint-automation-tool
closed
User Interface Implementation
status: in-progress priority: high type: feature
- [x] Create Home Screen - [x] Configuration Screen - [x] Implementation Testing
1.0
User Interface Implementation - - [x] Create Home Screen - [x] Configuration Screen - [x] Implementation Testing
non_infrastructure
user interface implementation create home screen configuration screen implementation testing
0
18,790
13,105,243,823
IssuesEvent
2020-08-04 11:50:02
GIScience/openrouteservice
https://api.github.com/repos/GIScience/openrouteservice
closed
Docker setup no longer works after Java 11 update
:bug: bug infrastructure
Since the code to move to java 11 has been added, the current docker setup no longer works with the output being a fatal error when creating the java virtual machine
1.0
Docker setup no longer works after Java 11 update - Since the code to move to java 11 has been added, the current docker setup no longer works with the output being a fatal error when creating the java virtual machine
infrastructure
docker setup no longer works after java update since the code to move to java has been added the current docker setup no longer works with the output being a fatal error when creating the java virtual machine
1
21,324
14,526,041,705
IssuesEvent
2020-12-14 13:44:40
shaughnessyar/driftR
https://api.github.com/repos/shaughnessyar/driftR
closed
Remove pipes from functions
type:infrastructure
These make errors more difficult to track. All `%>%` should be removed!
1.0
Remove pipes from functions - These make errors more difficult to track. All `%>%` should be removed!
infrastructure
remove pipes from functions these make errors more difficult to track all should be removed
1
278,715
30,702,390,576
IssuesEvent
2023-07-27 01:26:03
Nivaskumark/CVE-2020-0074-frameworks_base
https://api.github.com/repos/Nivaskumark/CVE-2020-0074-frameworks_base
reopened
CVE-2019-2232 (High) detected in baseandroid-11.0.0_r39
Mend: dependency security vulnerability
## CVE-2019-2232 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>baseandroid-11.0.0_r39</b></p></summary> <p> <p>Android framework classes and services</p> <p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/base>https://android.googlesource.com/platform/frameworks/base</a></p> <p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/CVE-2020-0074-frameworks_base/commit/f63c00c11df9fe4c62ee2ed7d5f72e3a7ebec027">f63c00c11df9fe4c62ee2ed7d5f72e3a7ebec027</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/core/java/android/text/TextLine.java</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> In handleRun of TextLine.java, there is a possible application crash due to improper input validation. This could lead to remote denial of service when processing Unicode with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-8.0 Android-8.1 Android-9 Android-10Android ID: A-140632678 <p>Publish Date: 2019-12-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-2232>CVE-2019-2232</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-2232">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-2232</a></p> <p>Release Date: 2019-12-06</p> <p>Fix Resolution: android-8.0.0_r41;android-8.1.0_r71;android-9.0.0_r51;android-10.0.0_r17</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-2232 (High) detected in baseandroid-11.0.0_r39 - ## CVE-2019-2232 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>baseandroid-11.0.0_r39</b></p></summary> <p> <p>Android framework classes and services</p> <p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/base>https://android.googlesource.com/platform/frameworks/base</a></p> <p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/CVE-2020-0074-frameworks_base/commit/f63c00c11df9fe4c62ee2ed7d5f72e3a7ebec027">f63c00c11df9fe4c62ee2ed7d5f72e3a7ebec027</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/core/java/android/text/TextLine.java</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> In handleRun of TextLine.java, there is a possible application crash due to improper input validation. This could lead to remote denial of service when processing Unicode with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-8.0 Android-8.1 Android-9 Android-10Android ID: A-140632678 <p>Publish Date: 2019-12-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-2232>CVE-2019-2232</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-2232">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-2232</a></p> <p>Release Date: 2019-12-06</p> <p>Fix Resolution: android-8.0.0_r41;android-8.1.0_r71;android-9.0.0_r51;android-10.0.0_r17</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_infrastructure
cve high detected in baseandroid cve high severity vulnerability vulnerable library baseandroid android framework classes and services library home page a href found in head commit a href found in base branch master vulnerable source files core java android text textline java vulnerability details in handlerun of textline java there is a possible application crash due to improper input validation this could lead to remote denial of service when processing unicode with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android android android android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution android android android android step up your open source security game with mend
0
9,937
8,257,287,708
IssuesEvent
2018-09-13 04:02:41
eslint/eslint
https://api.github.com/repos/eslint/eslint
closed
Requiring 2FA for the ESLint GitHub organization
accepted infrastructure
In light of recent events, I think this would be a good time to start requiring 2FA to be enabled for members of the `eslint` organization on GitHub. People in the organization (particularly TSC members) have a lot of access with their GitHub accounts, and it's important for us and for our users that we keep the accounts safe. Currently, 26 of the 34 members of the organization have 2FA enabled. Of the remaining 8, two are bot accounts (@eslintbot and @jquerybot) and two are TSC members. In order to do this, we would need to enable 2FA for all of the remaining accounts, otherwise those accounts would be removed from the organization when we started requiring 2FA. We would also have to ensure that the bot accounts are still able to function correctly with 2FA enabled, or we could remove them from the organization if their functionality has been superseded by [`eslint-github-bot`](https://github.com/eslint/eslint-github-bot). (I think the @eslintbot account is still used to push commits during releases.)
1.0
Requiring 2FA for the ESLint GitHub organization - In light of recent events, I think this would be a good time to start requiring 2FA to be enabled for members of the `eslint` organization on GitHub. People in the organization (particularly TSC members) have a lot of access with their GitHub accounts, and it's important for us and for our users that we keep the accounts safe. Currently, 26 of the 34 members of the organization have 2FA enabled. Of the remaining 8, two are bot accounts (@eslintbot and @jquerybot) and two are TSC members. In order to do this, we would need to enable 2FA for all of the remaining accounts, otherwise those accounts would be removed from the organization when we started requiring 2FA. We would also have to ensure that the bot accounts are still able to function correctly with 2FA enabled, or we could remove them from the organization if their functionality has been superseded by [`eslint-github-bot`](https://github.com/eslint/eslint-github-bot). (I think the @eslintbot account is still used to push commits during releases.)
infrastructure
requiring for the eslint github organization in light of recent events i think this would be a good time to start requiring to be enabled for members of the eslint organization on github people in the organization particularly tsc members have a lot of access with their github accounts and it s important for us and for our users that we keep the accounts safe currently of the members of the organization have enabled of the remaining two are bot accounts eslintbot and jquerybot and two are tsc members in order to do this we would need to enable for all of the remaining accounts otherwise those accounts would be removed from the organization when we started requiring we would also have to ensure that the bot accounts are still able to function correctly with enabled or we could remove them from the organization if their functionality has been superseded by i think the eslintbot account is still used to push commits during releases
1
12,980
10,053,894,118
IssuesEvent
2019-07-21 20:40:42
tempesta-tech/tempesta-test
https://api.github.com/repos/tempesta-tech/tempesta-test
opened
TLSfuzzer deployment
CI Infrastructure
Just tried to run a single script from https://github.com/tomato42/tlsfuzzer test suite ``` PYTHONPATH=. python ./scripts/test-large-hello.py -h 192.168.100.4 -p 443 ``` against current TempestaTLS. The test took plenty of time and didn't crash the system. I just played a little bit with the testing suite and didn't figure out how to interpret it's results. Need to deploy the testing suite on our CI (it's unrealistic to run it developer machines), probably in separate VM - I suppose the full suite can take several days for full run. All the scripts should be executed and verified for results, not only whether it crashes the system.
1.0
TLSfuzzer deployment - Just tried to run a single script from https://github.com/tomato42/tlsfuzzer test suite ``` PYTHONPATH=. python ./scripts/test-large-hello.py -h 192.168.100.4 -p 443 ``` against current TempestaTLS. The test took plenty of time and didn't crash the system. I just played a little bit with the testing suite and didn't figure out how to interpret it's results. Need to deploy the testing suite on our CI (it's unrealistic to run it developer machines), probably in separate VM - I suppose the full suite can take several days for full run. All the scripts should be executed and verified for results, not only whether it crashes the system.
infrastructure
tlsfuzzer deployment just tried to run a single script from test suite pythonpath python scripts test large hello py h p against current tempestatls the test took plenty of time and didn t crash the system i just played a little bit with the testing suite and didn t figure out how to interpret it s results need to deploy the testing suite on our ci it s unrealistic to run it developer machines probably in separate vm i suppose the full suite can take several days for full run all the scripts should be executed and verified for results not only whether it crashes the system
1
13,070
10,110,691,976
IssuesEvent
2019-07-30 10:53:21
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
Flaky infra failures during "find tests that began failing" / "find unapproved passing tests" steps
area-infrastructure
From time to time our bots fail with infra (purple) failure ``` Infra Failure recipe infra failure: Uncaught Exception: OSError(22, 'Invalid argument') ``` Step execution details: ``` /b/s/w/ir/cache/builder/sdk/tools/sdks/dart-sdk/bin/dart tools/bots/compare_results.dart --flakiness-data /b/s/w/ir/tmp/t/tmpzXsTI0 --human --verbose /b/s/w/ir/cache/builder/sdk/LATEST/results.json /b/s/w/ir/tmp/t/tmpsmdQ7v /b/s/w/ir/cache/builder/sdk/LATEST/approved_results.json --logs /b/s/w/ir/tmp/t/tmpaNJIkK --logs-only --changed --failing in dir /b/s/w/ir/cache/builder/sdk full environment: ANALYZER_STATE_LOCATION_OVERRIDE: /b/s/w/ir/k/recipe_cleanup/analysis-cache Apple_PubSub_Socket_Render: /private/tmp/com.apple.launchd.BDy6yxGgrK/Render BOTO_CONFIG: /b/s/w/ir/tmp/gsutil-task/.boto BUILDBUCKET_EXPERIMENTAL: FALSE CIPD_CACHE_DIR: /b/s/cipd_cache/cache CIPD_PROTOCOL: v2 DEVSHELL_CLIENT_PORT: 59468 DOCKER_CONFIG: /b/s/w/ir/tmp/docker-cfg-task DOCKER_TMPDIR: /b/s/w/ir/tmp/docker-tmp-task GIT_CONFIG_NOSYSTEM: 1 GIT_TERMINAL_PROMPT: 0 HOME: /Users/chrome-bot INFRA_GIT_WRAPPER_HOME: /b/s/w/ir/tmp/git-home-task LOGDOG_COORDINATOR_HOST: logs.chromium.org LOGDOG_STREAM_PREFIX: buildbucket/cr-buildbucket.appspot.com/8909738097010534176 LOGDOG_STREAM_PROJECT: dart LOGDOG_STREAM_SERVER_PATH: unix:/b/s/w/ir/tmp/ld.sock LOGNAME: chrome-bot LUCI_CONTEXT: /b/s/w/ir/tmp/luci_context.740694097 MAC_CHROMIUM_TMPDIR: /b/s/w/ir/tmp/t NO_GCE_CHECK: False PATH: /b/s/w/ir/cipd_bin_packages:/b/s/w/ir/cipd_bin_packages/bin:/b/s/cipd_cache/bin:/opt/local/bin:/opt/local/sbin:/usr/local/sbin:/usr/local/git/bin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin PWD: /b/s/w/ir/k PYTHONIOENCODING: UTF-8 PYTHONUNBUFFERED: 1 SHELL: /bin/bash SHLVL: 1 SSH_AUTH_SOCK: /private/tmp/com.apple.launchd.uckososCXs/Listeners SWARMING_BOT_ID: vm19-m9 SWARMING_HEADLESS: 1 SWARMING_SERVER: https://chromium-swarm.appspot.com SWARMING_TASK_ID: 45a4069301f5e911 TEMP: /b/s/w/ir/tmp/t TEMPDIR: /b/s/w/ir/tmp/t TMP: /b/s/w/ir/tmp/t TMPDIR: /b/s/w/ir/tmp/t USER: chrome-bot VERSIONER_PYTHON_PREFER_32_BIT: no VERSIONER_PYTHON_VERSION: 2.7 VPYTHON_VIRTUALENV_ROOT: /b/s/w/ir/cache/vpython XPC_FLAGS: 0x0 XPC_SERVICE_NAME: 0 _: /b/s/w/ir/cipd_bin_packages/vpython __CF_USER_TEXT_ENCODING: 0x1F4:0x0:0x0 Step had exception. ``` RECIPE CRASH: ``` The recipe has crashed at point 'Uncaught exception'! Traceback (most recent call last): File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/internal/engine.py", line 450, in run_steps raw_result = recipe_obj.run_steps(api, engine) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/internal/recipe_deps.py", line 706, in run_steps properties_def, api=api) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/internal/property_invoker.py", line 89, in invoke_with_properties arg_names, **additional_args) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/internal/property_invoker.py", line 52, in _invoke_with_properties return callable_obj(*props, **additional_args) File "/b/s/w/ir/kitchen-checkout/build/scripts/slave/recipes/dart/neo.py", line 79, in RunSteps _run_steps_impl(api) File "/b/s/w/ir/kitchen-checkout/build/scripts/slave/recipes/dart/neo.py", line 123, in _run_steps_impl api.dart.test(test_data=TEST_MATRIX) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/recipe_api.py", line 615, in _inner ret = func(*a, **kw) File "/b/s/w/ir/kitchen-checkout/build/scripts/slave/recipe_modules/dart/api.py", line 643, in test self._run_steps(config, isolate_hashes, builder, global_config) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/recipe_api.py", line 608, in _inner return func(*a, **kw) File "/b/s/w/ir/kitchen-checkout/build/scripts/slave/recipe_modules/dart/api.py", line 770, in _run_steps self._process_test_results(test_steps, global_config) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/recipe_api.py", line 608, in _inner return func(*a, **kw) File "/b/s/w/ir/kitchen-checkout/build/scripts/slave/recipe_modules/dart/api.py", line 849, in _process_test_results self._present_results(logs_str, results_str, flaky_json_str) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/recipe_api.py", line 608, in _inner return func(*a, **kw) File "/b/s/w/ir/kitchen-checkout/build/scripts/slave/recipe_modules/dart/api.py", line 525, in _present_results stdout=self.m.raw_io.output_text(add_output_log=True)).stdout File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/recipe_api.py", line 649, in _inner return func(*a, **kw) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_modules/step/api.py", line 353, in __call__ step_test_data=step_test_data, File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/recipe_api.py", line 235, in run_step return self._engine.run_step(step) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/internal/engine.py", line 685, in _run_step step_data.name_tokens, debug_log, rendered_step) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/internal/step_runner/subproc.py", line 176, in run **extra_kwargs) File "/b/s/w/ir/cache/vpython/0be37a/lib/python2.7/site-packages/gevent/subprocess.py", line 658, in __init__ reraise(*exc_info) File "/b/s/w/ir/cache/vpython/0be37a/lib/python2.7/site-packages/gevent/subprocess.py", line 627, in __init__ restore_signals, start_new_session) File "/b/s/w/ir/cache/vpython/0be37a/lib/python2.7/site-packages/gevent/subprocess.py", line 1488, in _execute_child data = errpipe_read.read() File "/b/s/w/ir/cache/vpython/0be37a/lib/python2.7/site-packages/gevent/_fileobjectposix.py", line 136, in readall data = self.__read(DEFAULT_BUFFER_SIZE) File "/b/s/w/ir/cache/vpython/0be37a/lib/python2.7/site-packages/gevent/_fileobjectposix.py", line 127, in __read return _read(self._fileno, n) OSError: [Errno 22] Invalid argument ``` [failure 1](https://ci.chromium.org/p/dart/builders/ci.sandbox/vm-kernel-mac-release-simdbc64/3226) [failure 2](https://ci.chromium.org/p/dart/builders/ci.sandbox/vm-kernel-mac-debug-x64/2333) [failure 3](https://ci.chromium.org/p/dart/builders/ci.sandbox/vm-kernel-mac-product-x64/3437) [failure 4](https://ci.chromium.org/p/dart/builders/ci.sandbox/vm-kernel-mac-debug-x64/2173) /cc @whesse @athomas
1.0
Flaky infra failures during "find tests that began failing" / "find unapproved passing tests" steps - From time to time our bots fail with infra (purple) failure ``` Infra Failure recipe infra failure: Uncaught Exception: OSError(22, 'Invalid argument') ``` Step execution details: ``` /b/s/w/ir/cache/builder/sdk/tools/sdks/dart-sdk/bin/dart tools/bots/compare_results.dart --flakiness-data /b/s/w/ir/tmp/t/tmpzXsTI0 --human --verbose /b/s/w/ir/cache/builder/sdk/LATEST/results.json /b/s/w/ir/tmp/t/tmpsmdQ7v /b/s/w/ir/cache/builder/sdk/LATEST/approved_results.json --logs /b/s/w/ir/tmp/t/tmpaNJIkK --logs-only --changed --failing in dir /b/s/w/ir/cache/builder/sdk full environment: ANALYZER_STATE_LOCATION_OVERRIDE: /b/s/w/ir/k/recipe_cleanup/analysis-cache Apple_PubSub_Socket_Render: /private/tmp/com.apple.launchd.BDy6yxGgrK/Render BOTO_CONFIG: /b/s/w/ir/tmp/gsutil-task/.boto BUILDBUCKET_EXPERIMENTAL: FALSE CIPD_CACHE_DIR: /b/s/cipd_cache/cache CIPD_PROTOCOL: v2 DEVSHELL_CLIENT_PORT: 59468 DOCKER_CONFIG: /b/s/w/ir/tmp/docker-cfg-task DOCKER_TMPDIR: /b/s/w/ir/tmp/docker-tmp-task GIT_CONFIG_NOSYSTEM: 1 GIT_TERMINAL_PROMPT: 0 HOME: /Users/chrome-bot INFRA_GIT_WRAPPER_HOME: /b/s/w/ir/tmp/git-home-task LOGDOG_COORDINATOR_HOST: logs.chromium.org LOGDOG_STREAM_PREFIX: buildbucket/cr-buildbucket.appspot.com/8909738097010534176 LOGDOG_STREAM_PROJECT: dart LOGDOG_STREAM_SERVER_PATH: unix:/b/s/w/ir/tmp/ld.sock LOGNAME: chrome-bot LUCI_CONTEXT: /b/s/w/ir/tmp/luci_context.740694097 MAC_CHROMIUM_TMPDIR: /b/s/w/ir/tmp/t NO_GCE_CHECK: False PATH: /b/s/w/ir/cipd_bin_packages:/b/s/w/ir/cipd_bin_packages/bin:/b/s/cipd_cache/bin:/opt/local/bin:/opt/local/sbin:/usr/local/sbin:/usr/local/git/bin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin PWD: /b/s/w/ir/k PYTHONIOENCODING: UTF-8 PYTHONUNBUFFERED: 1 SHELL: /bin/bash SHLVL: 1 SSH_AUTH_SOCK: /private/tmp/com.apple.launchd.uckososCXs/Listeners SWARMING_BOT_ID: vm19-m9 SWARMING_HEADLESS: 1 SWARMING_SERVER: https://chromium-swarm.appspot.com SWARMING_TASK_ID: 45a4069301f5e911 TEMP: /b/s/w/ir/tmp/t TEMPDIR: /b/s/w/ir/tmp/t TMP: /b/s/w/ir/tmp/t TMPDIR: /b/s/w/ir/tmp/t USER: chrome-bot VERSIONER_PYTHON_PREFER_32_BIT: no VERSIONER_PYTHON_VERSION: 2.7 VPYTHON_VIRTUALENV_ROOT: /b/s/w/ir/cache/vpython XPC_FLAGS: 0x0 XPC_SERVICE_NAME: 0 _: /b/s/w/ir/cipd_bin_packages/vpython __CF_USER_TEXT_ENCODING: 0x1F4:0x0:0x0 Step had exception. ``` RECIPE CRASH: ``` The recipe has crashed at point 'Uncaught exception'! Traceback (most recent call last): File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/internal/engine.py", line 450, in run_steps raw_result = recipe_obj.run_steps(api, engine) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/internal/recipe_deps.py", line 706, in run_steps properties_def, api=api) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/internal/property_invoker.py", line 89, in invoke_with_properties arg_names, **additional_args) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/internal/property_invoker.py", line 52, in _invoke_with_properties return callable_obj(*props, **additional_args) File "/b/s/w/ir/kitchen-checkout/build/scripts/slave/recipes/dart/neo.py", line 79, in RunSteps _run_steps_impl(api) File "/b/s/w/ir/kitchen-checkout/build/scripts/slave/recipes/dart/neo.py", line 123, in _run_steps_impl api.dart.test(test_data=TEST_MATRIX) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/recipe_api.py", line 615, in _inner ret = func(*a, **kw) File "/b/s/w/ir/kitchen-checkout/build/scripts/slave/recipe_modules/dart/api.py", line 643, in test self._run_steps(config, isolate_hashes, builder, global_config) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/recipe_api.py", line 608, in _inner return func(*a, **kw) File "/b/s/w/ir/kitchen-checkout/build/scripts/slave/recipe_modules/dart/api.py", line 770, in _run_steps self._process_test_results(test_steps, global_config) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/recipe_api.py", line 608, in _inner return func(*a, **kw) File "/b/s/w/ir/kitchen-checkout/build/scripts/slave/recipe_modules/dart/api.py", line 849, in _process_test_results self._present_results(logs_str, results_str, flaky_json_str) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/recipe_api.py", line 608, in _inner return func(*a, **kw) File "/b/s/w/ir/kitchen-checkout/build/scripts/slave/recipe_modules/dart/api.py", line 525, in _present_results stdout=self.m.raw_io.output_text(add_output_log=True)).stdout File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/recipe_api.py", line 649, in _inner return func(*a, **kw) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_modules/step/api.py", line 353, in __call__ step_test_data=step_test_data, File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/recipe_api.py", line 235, in run_step return self._engine.run_step(step) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/internal/engine.py", line 685, in _run_step step_data.name_tokens, debug_log, rendered_step) File "/b/s/w/ir/kitchen-checkout/recipe_engine/recipe_engine/internal/step_runner/subproc.py", line 176, in run **extra_kwargs) File "/b/s/w/ir/cache/vpython/0be37a/lib/python2.7/site-packages/gevent/subprocess.py", line 658, in __init__ reraise(*exc_info) File "/b/s/w/ir/cache/vpython/0be37a/lib/python2.7/site-packages/gevent/subprocess.py", line 627, in __init__ restore_signals, start_new_session) File "/b/s/w/ir/cache/vpython/0be37a/lib/python2.7/site-packages/gevent/subprocess.py", line 1488, in _execute_child data = errpipe_read.read() File "/b/s/w/ir/cache/vpython/0be37a/lib/python2.7/site-packages/gevent/_fileobjectposix.py", line 136, in readall data = self.__read(DEFAULT_BUFFER_SIZE) File "/b/s/w/ir/cache/vpython/0be37a/lib/python2.7/site-packages/gevent/_fileobjectposix.py", line 127, in __read return _read(self._fileno, n) OSError: [Errno 22] Invalid argument ``` [failure 1](https://ci.chromium.org/p/dart/builders/ci.sandbox/vm-kernel-mac-release-simdbc64/3226) [failure 2](https://ci.chromium.org/p/dart/builders/ci.sandbox/vm-kernel-mac-debug-x64/2333) [failure 3](https://ci.chromium.org/p/dart/builders/ci.sandbox/vm-kernel-mac-product-x64/3437) [failure 4](https://ci.chromium.org/p/dart/builders/ci.sandbox/vm-kernel-mac-debug-x64/2173) /cc @whesse @athomas
infrastructure
flaky infra failures during find tests that began failing find unapproved passing tests steps from time to time our bots fail with infra purple failure infra failure recipe infra failure uncaught exception oserror invalid argument step execution details b s w ir cache builder sdk tools sdks dart sdk bin dart tools bots compare results dart flakiness data b s w ir tmp t human verbose b s w ir cache builder sdk latest results json b s w ir tmp t b s w ir cache builder sdk latest approved results json logs b s w ir tmp t tmpanjikk logs only changed failing in dir b s w ir cache builder sdk full environment analyzer state location override b s w ir k recipe cleanup analysis cache apple pubsub socket render private tmp com apple launchd render boto config b s w ir tmp gsutil task boto buildbucket experimental false cipd cache dir b s cipd cache cache cipd protocol devshell client port docker config b s w ir tmp docker cfg task docker tmpdir b s w ir tmp docker tmp task git config nosystem git terminal prompt home users chrome bot infra git wrapper home b s w ir tmp git home task logdog coordinator host logs chromium org logdog stream prefix buildbucket cr buildbucket appspot com logdog stream project dart logdog stream server path unix b s w ir tmp ld sock logname chrome bot luci context b s w ir tmp luci context mac chromium tmpdir b s w ir tmp t no gce check false path b s w ir cipd bin packages b s w ir cipd bin packages bin b s cipd cache bin opt local bin opt local sbin usr local sbin usr local git bin usr local bin usr sbin usr bin sbin bin pwd b s w ir k pythonioencoding utf pythonunbuffered shell bin bash shlvl ssh auth sock private tmp com apple launchd uckososcxs listeners swarming bot id swarming headless swarming server swarming task id temp b s w ir tmp t tempdir b s w ir tmp t tmp b s w ir tmp t tmpdir b s w ir tmp t user chrome bot versioner python prefer bit no versioner python version vpython virtualenv root b s w ir cache vpython xpc flags xpc service name b s w ir cipd bin packages vpython cf user text encoding step had exception recipe crash the recipe has crashed at point uncaught exception traceback most recent call last file b s w ir kitchen checkout recipe engine recipe engine internal engine py line in run steps raw result recipe obj run steps api engine file b s w ir kitchen checkout recipe engine recipe engine internal recipe deps py line in run steps properties def api api file b s w ir kitchen checkout recipe engine recipe engine internal property invoker py line in invoke with properties arg names additional args file b s w ir kitchen checkout recipe engine recipe engine internal property invoker py line in invoke with properties return callable obj props additional args file b s w ir kitchen checkout build scripts slave recipes dart neo py line in runsteps run steps impl api file b s w ir kitchen checkout build scripts slave recipes dart neo py line in run steps impl api dart test test data test matrix file b s w ir kitchen checkout recipe engine recipe engine recipe api py line in inner ret func a kw file b s w ir kitchen checkout build scripts slave recipe modules dart api py line in test self run steps config isolate hashes builder global config file b s w ir kitchen checkout recipe engine recipe engine recipe api py line in inner return func a kw file b s w ir kitchen checkout build scripts slave recipe modules dart api py line in run steps self process test results test steps global config file b s w ir kitchen checkout recipe engine recipe engine recipe api py line in inner return func a kw file b s w ir kitchen checkout build scripts slave recipe modules dart api py line in process test results self present results logs str results str flaky json str file b s w ir kitchen checkout recipe engine recipe engine recipe api py line in inner return func a kw file b s w ir kitchen checkout build scripts slave recipe modules dart api py line in present results stdout self m raw io output text add output log true stdout file b s w ir kitchen checkout recipe engine recipe engine recipe api py line in inner return func a kw file b s w ir kitchen checkout recipe engine recipe modules step api py line in call step test data step test data file b s w ir kitchen checkout recipe engine recipe engine recipe api py line in run step return self engine run step step file b s w ir kitchen checkout recipe engine recipe engine internal engine py line in run step step data name tokens debug log rendered step file b s w ir kitchen checkout recipe engine recipe engine internal step runner subproc py line in run extra kwargs file b s w ir cache vpython lib site packages gevent subprocess py line in init reraise exc info file b s w ir cache vpython lib site packages gevent subprocess py line in init restore signals start new session file b s w ir cache vpython lib site packages gevent subprocess py line in execute child data errpipe read read file b s w ir cache vpython lib site packages gevent fileobjectposix py line in readall data self read default buffer size file b s w ir cache vpython lib site packages gevent fileobjectposix py line in read return read self fileno n oserror invalid argument cc whesse athomas
1
3,210
4,154,646,647
IssuesEvent
2016-06-16 12:29:05
OpenSCAP/scap-security-guide
https://api.github.com/repos/OpenSCAP/scap-security-guide
opened
[RFE] [Infrastructure] Generate $(OUT)/remediation_functions.xml dynamically from shared/remediations/bash/templates/remediation_functions
enhancement Infrastructure
See: * https://github.com/OpenSCAP/scap-security-guide/pull/1270#issuecomment-222984282 [Thanks to a recent change](https://github.com/OpenSCAP/scap-security-guide/pull/1270) the remediation scripts are now part of final benchmark itself (no need to rely on external ```/usr/share/scap-security-guide/remediation_functions``` sh library). But in the current form ```shared/xccdf/remediation_functions.xml``` has been generated manually only once (it's not regenerated during the build process). The issue here is each time a new function got added into ```shared/remediations/bash/templates/remediation_functions``` the ```shared/xccdf/remediation_functions.xml``` will need to be created again. To get rid of this need, the above proposal mentions creation of ```$(OUT)/remediation_functions.xml``` file during each build from ```shared/remediations/bash/templates/remediation_functions```. We need to implement this.
1.0
[RFE] [Infrastructure] Generate $(OUT)/remediation_functions.xml dynamically from shared/remediations/bash/templates/remediation_functions - See: * https://github.com/OpenSCAP/scap-security-guide/pull/1270#issuecomment-222984282 [Thanks to a recent change](https://github.com/OpenSCAP/scap-security-guide/pull/1270) the remediation scripts are now part of final benchmark itself (no need to rely on external ```/usr/share/scap-security-guide/remediation_functions``` sh library). But in the current form ```shared/xccdf/remediation_functions.xml``` has been generated manually only once (it's not regenerated during the build process). The issue here is each time a new function got added into ```shared/remediations/bash/templates/remediation_functions``` the ```shared/xccdf/remediation_functions.xml``` will need to be created again. To get rid of this need, the above proposal mentions creation of ```$(OUT)/remediation_functions.xml``` file during each build from ```shared/remediations/bash/templates/remediation_functions```. We need to implement this.
infrastructure
generate out remediation functions xml dynamically from shared remediations bash templates remediation functions see the remediation scripts are now part of final benchmark itself no need to rely on external usr share scap security guide remediation functions sh library but in the current form shared xccdf remediation functions xml has been generated manually only once it s not regenerated during the build process the issue here is each time a new function got added into shared remediations bash templates remediation functions the shared xccdf remediation functions xml will need to be created again to get rid of this need the above proposal mentions creation of out remediation functions xml file during each build from shared remediations bash templates remediation functions we need to implement this
1
30,126
24,563,362,401
IssuesEvent
2022-10-12 22:58:08
GaloisInc/cclyzerpp
https://api.github.com/repos/GaloisInc/cclyzerpp
closed
cmake: Integrate packaging
infrastructure
`pkg/pkg.sh` packages cclyzer++ with `fpm`. It assumes CMake has already been run. It would be better to integrate this as a custom CMake target that declares its dependencies on other targets.
1.0
cmake: Integrate packaging - `pkg/pkg.sh` packages cclyzer++ with `fpm`. It assumes CMake has already been run. It would be better to integrate this as a custom CMake target that declares its dependencies on other targets.
infrastructure
cmake integrate packaging pkg pkg sh packages cclyzer with fpm it assumes cmake has already been run it would be better to integrate this as a custom cmake target that declares its dependencies on other targets
1
13,159
10,131,821,914
IssuesEvent
2019-08-01 20:33:51
HumanCellAtlas/secondary-analysis
https://api.github.com/repos/HumanCellAtlas/secondary-analysis
reopened
Fix animal reference support for single-end SmartSeq2 pipeline
infrastructure
The single-end SmartSeq2 pipeline is currently expecting to get reference file information from the inputs.tsv file created in the GetInputs step. However, those fields are not included in the inputs file so this will cause failures when running the workflow. Note: This is less urgent because the single-end pipeline is not in production. ┆Issue is synchronized with this [Jira Story](https://broadinstitute.atlassian.net/browse/GH-325)
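A small guard at GetInputs time could make this failure mode explicit instead of letting the workflow fail later; the column names below are illustrative assumptions, not the pipeline's actual schema:

```python
import csv
import io

# Hypothetical reference columns the single-end pipeline expects;
# the real inputs.tsv schema may differ.
REQUIRED_REFERENCE_COLUMNS = {"genome_ref_fasta", "gtf_file", "hisat2_ref_index"}

def missing_reference_columns(inputs_tsv_text):
    """Return the required reference columns absent from an inputs.tsv header."""
    reader = csv.reader(io.StringIO(inputs_tsv_text), delimiter="\t")
    header = next(reader, [])
    return sorted(REQUIRED_REFERENCE_COLUMNS - set(header))
```

GetInputs could call this after writing the file and fail fast with the list of missing columns, rather than handing an incomplete inputs.tsv to the workflow.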
1.0
Fix animal reference support for single-end SmartSeq2 pipeline - The single-end SmartSeq2 pipeline is currently expecting to get reference file information from the inputs.tsv file created in the GetInputs step. However, those fields are not included in the inputs file so this will cause failures when running the workflow. Note: This is less urgent because the single-end pipeline is not in production. ┆Issue is synchronized with this [Jira Story](https://broadinstitute.atlassian.net/browse/GH-325)
infrastructure
fix animal reference support for single end pipeline the single end pipeline is currently expecting to get reference file information from the inputs tsv file created in the getinputs step however those fields are not included in the inputs file so this will cause failures when running the workflow note this is less urgent because the single end pipeline is not in production ┆issue is synchronized with this
1
78,882
3,518,705,194
IssuesEvent
2016-01-12 14:10:46
Apollo-Community/ApolloStation
https://api.github.com/repos/Apollo-Community/ApolloStation
closed
Add persistent OOC notes.
priority: low suggestion
An OOC notes tab that allows players to keep persistent notes per character — e.g. chemistry recipes for chemist characters, a prompt about the character's personality, etc.
1.0
Add persistent OOC notes. - OOC notes tab per character that allows players to keep persistent notes per character. Either some chemistry recipes for chemist characters, a prompt for the characters personality, etc.
non_infrastructure
add persistent ooc notes ooc notes tab per character that allows players to keep persistent notes per character either some chemistry recipes for chemist characters a prompt for the characters personality etc
0
1,076
3,030,272,303
IssuesEvent
2015-08-04 16:37:26
google/trace-viewer
https://api.github.com/repos/google/trace-viewer
closed
Errors raised in rAF after tests run are not treated as a failure
Bug Infrastructure P1
_From [nd...@chromium.org](https://code.google.com/u/102435256078839283966/) on March 29, 2014 19:39:28_ If we have rendering issues in non-async tests [e.g. you instantiate a timeline, it renders in rAF and throws an error] the test runner doesn't detect this and turn it into an error. Tests pass. No good. _Original issue: http://code.google.com/p/trace-viewer/issues/detail?id=542_
1.0
Errors raised in rAF after tests run are not treated as a failure - _From [nd...@chromium.org](https://code.google.com/u/102435256078839283966/) on March 29, 2014 19:39:28_ If we have rendering issues in non-async tests [e.g. you instantiate a timeline, it renders in rAF and throws an error] the test runner doesn't detect this and turn it into an error. Tests pass. No good. _Original issue: http://code.google.com/p/trace-viewer/issues/detail?id=542_
infrastructure
errors raised in raf after tests run are not treated as a failure from on march if we have rendering issues in non async tests the test runner doesn t detect this and turn it into an error tests pass no good original issue
1
826,815
31,713,201,139
IssuesEvent
2023-09-09 14:30:49
rpitv/glimpse-graphics
https://api.github.com/repos/rpitv/glimpse-graphics
closed
Football Graphics
Priority: HIGH
The current graphics for football is somewhat generic and meant for all sports, with some features of it dedicated specifically for football. The new graphics will be made with football in mind, having our own "unique" twist on it. ![Web 1920 – 1](https://github.com/rpitv/glimpse-graphics/assets/83081917/e18597d2-6517-4e37-9f90-3447ba52da27)
1.0
Football Graphics - The current graphics for football is somewhat generic and meant for all sports, with some features of it dedicated specifically for football. The new graphics will be made with football in mind, having our own "unique" twist on it. ![Web 1920 – 1](https://github.com/rpitv/glimpse-graphics/assets/83081917/e18597d2-6517-4e37-9f90-3447ba52da27)
non_infrastructure
football graphics the current graphics for football is somewhat generic and meant for all sports with some features of it dedicated specifically for football the new graphics will be made with football in mind having our own unique twist on it
0
15,974
11,785,740,204
IssuesEvent
2020-03-17 10:52:21
ansible/galaxy-dev
https://api.github.com/repos/ansible/galaxy-dev
closed
Infra: transform API into a Pulp plugin
area/infrastructure priority/high status/in-progress status/new type/enhancement
Meeting notes: https://docs.google.com/document/d/1HU77_fB-jxGK0C3y0EYIIMS0WSEabGmhrs9Vy5X14ss/edit?usp=sharing The plan, as enumerated in the above doc, includes: ### API - Use direct access to the Pulp database to avoid calling into Pulp’s API entirely E.g. CollectionImport would likely use this - Reuse Serializers from pulp_ansible - Not try to deduplicate API calls on initial integration ### Settings - For INSTALLED_APPS, the galaxy-api will create a PluginAppConfig which will chain-load the application into INSTALLED_APPS - Put settings for galaxy-api into settings.py ### Overall requirements: - [ ] Take codebase from galaxy-api and make it a Pulp plugin w/ direct access to pulp_ansible objects - [ ] Replace API calls with queries
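The INSTALLED_APPS chain-loading idea could look roughly like the following framework-free sketch; real Pulp plugins register through pulpcore's app machinery and Django's app registry, so the class and function here only illustrate the merge behavior:

```python
# Framework-free sketch of the "chain-load into INSTALLED_APPS" idea;
# names are illustrative, not pulpcore's actual API.

class PluginAppConfig:
    """Declares a plugin app plus the apps it drags in with it."""

    def __init__(self, name, required_apps=()):
        self.name = name
        self.required_apps = tuple(required_apps)

def chain_load(installed_apps, plugin):
    """Append the plugin's requirements and then the plugin itself,
    skipping anything already installed."""
    apps = list(installed_apps)
    for app in (*plugin.required_apps, plugin.name):
        if app not in apps:
            apps.append(app)
    return apps
```

The point of the pattern is that settings.py only lists the plugin once; the plugin's own config pulls its dependencies (e.g. pulp_ansible) into INSTALLED_APPS without duplicating entries.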
1.0
Infra: transform API into a Pulp plugin - Meeting notes: https://docs.google.com/document/d/1HU77_fB-jxGK0C3y0EYIIMS0WSEabGmhrs9Vy5X14ss/edit?usp=sharing The plan, as enumerated in the above doc, includes: ### API - Use direct access to the Pulp database to avoid calling into Pulp’s API entirely E.g. CollectionImport would likely use this - Reuse Serializers from pulp_ansible - Not try to deduplicate API calls on initial integration ### Settings - For INSTALLED_APPS, the galaxy-api will create a PluginAppConfig which will chain-load the application into INSTALLED_APPS - Put settings for galaxy-api into settings.py ### Overall requirements: - [ ] Take codebase from galaxy-api and make it a Pulp plugin w/ direct access to pulp_ansible objects - [ ] Replace API calls with queries
infrastructure
infra transform api into a pulp plugin meeting notes the plan as enumerated in the above doc includes api use direct access to the pulp database to avoid calling into pulp’s api entirely e g collectionimport would likely use this reuse serializers from pulp ansible not try to deduplicate api calls on initial integration settings for installed apps the galaxy api will create a pluginappconfig which will chain load the application into installed apps put settings for galaxy api into settings py overall requirements take codebase from galaxy api and make it a pulp plugin w direct access to pulp ansible objects replace api calls with queries
1
381,011
11,271,838,775
IssuesEvent
2020-01-14 13:50:22
celo-org/celo-monorepo
https://api.github.com/repos/celo-org/celo-monorepo
closed
Extra special character is shown (~~Payment Requested) and text is not shown translated for Payment requested when user selects “Espanol(America Latina)” language.
Priority: P1 applications bug ios qa wallet
**Frequency:** 100% **App version:** IOS test flight build v1.5.2 (17) **Repro on:** iPhone 7 (13.3), iPhone XS Max (13.2), iPhone 7+ (12.4) Pre-condition: 1. User should have received and sent payment request sections on wallet tab. 2. User should select the “Espanol(America Latina)” language. **Repro Steps:** 1) Launch the app. 2) Tap on the cog icon. 3) Tap on language setting. 4) Select “Espanol” language and tap on the continue button. 5) Observed on received and sent payment request sections from wallet tab **Investigation:** Extra character is not shown when a user selects the English language. **Current Behavior:** Extra special character and text is not translated. **Expected Behavior:** Text should be shown translated and no extra character is shown. **Attachment:** IOS_Localization_Issue_payment.png ![IOS_Localization_Issue_payment](https://user-images.githubusercontent.com/55572027/71177438-04d72680-2292-11ea-80ae-6d6b20870cd5.PNG)
1.0
Extra special character is shown (~~Payment Requested) and text is not shown translated for Payment requested when user selects “Espanol(America Latina)” language. - **Frequency:** 100% **App version:** IOS test flight build v1.5.2 (17) **Repro on:** iPhone 7 (13.3), iPhone XS Max (13.2), iPhone 7+ (12.4) Pre-condition: 1. User should have received and sent payment request sections on wallet tab. 2. User should select the “Espanol(America Latina)” language. **Repro Steps:** 1) Launch the app. 2) Tap on the cog icon. 3) Tap on language setting. 4) Select “Espanol” language and tap on the continue button. 5) Observed on received and sent payment request sections from wallet tab **Investigation:** Extra character is not shown when a user selects the English language. **Current Behavior:** Extra special character and text is not translated. **Expected Behavior:** Text should be shown translated and no extra character is shown. **Attachment:** IOS_Localization_Issue_payment.png ![IOS_Localization_Issue_payment](https://user-images.githubusercontent.com/55572027/71177438-04d72680-2292-11ea-80ae-6d6b20870cd5.PNG)
non_infrastructure
extra special character is shown payment requested and text is not shown translated for payment requested when user selects “espanol america latina ” language frequency app version ios test flight build repro on iphone iphone xs max iphone pre condition user should have received and sent payment request sections on wallet tab user should select the “espanol america latina ” language repro steps launch the app tap on the cog icon tap on language setting select “espanol” language and tap on the continue button observed on received and sent payment request sections from wallet tab investigation extra character is not shown when a user selects the english language current behavior extra special character and text is not translated expected behavior text should be shown translated and no extra character is shown attachment ios localization issue payment png
0
85,557
3,691,574,870
IssuesEvent
2016-02-26 00:46:52
openshift/origin
https://api.github.com/repos/openshift/origin
closed
Disable start build button when BC has certain annotation
area/usability component/web kind/enhancement priority/P2
See details on annotation change here: https://github.com/openshift/origin/pull/6185#issuecomment-172556695
1.0
Disable start build button when BC has certain annotation - See details on annotation change here: https://github.com/openshift/origin/pull/6185#issuecomment-172556695
non_infrastructure
disable start build button when bc has certain annotation see details on annotation change here
0
230,001
7,603,188,885
IssuesEvent
2018-04-29 11:52:10
openshift/autoheal
https://api.github.com/repos/openshift/autoheal
closed
Send all labels and annotations as `extraVars` by default
enhancement high priority
Currently the auto heal service can send to the AWX job a set of `extraVars` that are defined as a JSON document: ```yaml awxJob: template: "My template" extraVars: |- { "myvar": "myvalue", "yourvar": "yourvalue" } ``` This is very useful to send values of labels or annotations of the alert, for example, to send the value of the `instance` label: ```yaml awxJob: template: "My template" extraVars: |- { "instance": "{{ $labels.instance }}" } ``` It is so useful that it should be the default: if the `extraVars` field isn't used then we should automatically populate it with all the labels and annotations of the alert. For example, if the alert is like this: ```yaml labels: instance: 192.168.100.7:9100 job: node-exporter-123 annotations: message: "Node '192.168.100.7:9100' is down" ``` Then we should automatically populate the `extraVars` field like this: ```yaml extraVars: |- { "labels": { "instance": "192.168.100.7:9100", "job": "node-exporter-123" }, "annotations": { "message": "Node '192.168.100.7:9100' is down" } } ``` Actually we should probably just provide the full alert description: ```yaml extraVars: |- { "alerts": [ ... ] } ``` This should be compatible with other custom `extraVars` that the user may want to add. For example, the following action: ```yaml awxJob: template: "My template" extraVars: myvar: myvalue yourvar: yourvalue ``` Should be equivalent to this: ```yaml awxJob: template: "My template" extraVars: |- { "myvar": "myvalue", "yourvar": "yourvalue", "alerts": [...] } ```
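The merge semantics described above can be sketched in a few lines; this is an illustration of the proposal, not the autoheal service's actual code:

```python
import json

def build_extra_vars(alerts, custom_vars=None):
    """Combine the user's own extraVars (if any) with the full alert
    payload under an "alerts" key, as the proposal describes."""
    merged = dict(custom_vars or {})
    merged["alerts"] = alerts  # always include the full alert descriptions
    return merged

def render_extra_vars(alerts, custom_vars=None):
    """Serialize the merged variables for the AWX job payload."""
    return json.dumps(build_extra_vars(alerts, custom_vars))
```

With no custom `extraVars` the result is just `{"alerts": [...]}`; with custom variables the alert payload is added alongside them, so user-defined keys keep working unchanged.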
1.0
Send all labels and annotations as `extraVars` by default - Currently the auto heal service can send to the AWX job a set of `extraVars` that are defined as a JSON document: ```yaml awxJob: template: "My template" extraVars: |- { "myvar": "myvalue", "yourvar": "yourvalue", } ``` This is very useful to send values or labels or annotations of the alert, for example, to send the value of the `instance` label: ```yaml awxJob: template: "My template" extraVars: |- { "instance": "{{ $labels.instance }}", } ``` It is so useful that it should be the default: if the `extraVars` field isn't used then we should automatically populate it with all the labels and annotations of the alert. For example, if the alert is like this: ```yaml labels: instance: 192.168.100.7:9100 job: node-exporter-123 annotations: message: "Node '192.168.100.7:9100' is down" ``` Then we should automatically populate the `extraVars` field like this: ```yaml extraVars: |- { "labels": { "instance": "192.168.100.7:9100", "job": "node-exporter-123" }, "annotations": { "message": "Node '192.168.100.7 } } ``` Actually we should probably just provide the full alert description: ```yaml extraVars: |- { "alerts": [ ... ] } ``` This should be compatible with other custom `extraVars` that the user may want to add. For example, the following action: ```yaml awxJob: template: "My template" extraVars: myvar: myvalue yourvalue: yourvalue ``` Should be equivalent to this: ```yaml awxJob: template: "My template" extraVars: |- { "myvar": "myvalue", "yourvar": "yourvalue", "alerts": [...] } ```
non_infrastructure
send all labels and annotations as extravars by default currently the auto heal service can send to the awx job a set of extravars that are defined as a json document yaml awxjob template my template extravars myvar myvalue yourvar yourvalue this is very useful to send values or labels or annotations of the alert for example to send the value of the instance label yaml awxjob template my template extravars instance labels instance it is so useful that it should be the default if the extravars field isn t used then we should automatically populate it with all the labels and annotations of the alert for example if the alert is like this yaml labels instance job node exporter annotations message node is down then we should automatically populate the extravars field like this yaml extravars labels instance job node exporter annotations message node actually we should probably just provide the full alert description yaml extravars alerts this should be compatible with other custom extravars that the user may want to add for example the following action yaml awxjob template my template extravars myvar myvalue yourvalue yourvalue should be equivalent to this yaml awxjob template my template extravars myvar myvalue yourvar yourvalue alerts
0
17,376
12,323,978,120
IssuesEvent
2020-05-13 13:03:11
icgc-argo/roadmap
https://api.github.com/repos/icgc-argo/roadmap
closed
Platform Backups: Support backup / restore for Postgres (Ego, Program Service)
INFRASTRUCTURE SP:3 devops
- Using the template "Service Backup" Helm chart developed by Henrich, modify the chart to support postgres services. - This ticket includes testing and documenting the backup and restore process. --- Helm chart for etcd backups used as a template: https://github.com/icgc-argo/kube-infra/tree/master/etcd-backup Installation documentation: https://wiki.oicr.on.ca/pages/viewpage.action?spaceKey=icgcargotech&title=ETCD+backups
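Under the hood such a chart would presumably wrap `pg_dump`/`pg_restore` in a scheduled job; a minimal sketch of the command construction, where the host names, users, and paths are hypothetical:

```python
def pg_backup_command(host, user, database, dump_path):
    """pg_dump invocation producing a custom-format dump (restorable
    with pg_restore). Credentials would come from a mounted secret."""
    return ["pg_dump", "--host", host, "--username", user,
            "--format", "custom", "--file", dump_path, database]

def pg_restore_command(host, user, database, dump_path):
    """Matching pg_restore invocation; --clean drops existing objects
    before recreating them from the dump."""
    return ["pg_restore", "--host", host, "--username", user,
            "--clean", "--dbname", database, dump_path]
```

A Kubernetes CronJob would run the backup command on a schedule and push the dump to object storage; restore would be a one-off job running the second command, which is the part worth rehearsing and documenting per this ticket.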
1.0
Platform Backups: Support backup / restore for Postgres (Ego, Program Service) - - Using the template "Service Backup" Helm chart developed by Henrich, modify the chart to support postgres services. - This ticket includes testing and documenting the backup and restore process. --- Helm chart for etcd backups used as a template: https://github.com/icgc-argo/kube-infra/tree/master/etcd-backup Installation documentation: https://wiki.oicr.on.ca/pages/viewpage.action?spaceKey=icgcargotech&title=ETCD+backups
infrastructure
platform backups support backup restore for postgres ego program service using the template service backup helm chart developed by henrich modify the chart to support postgres services this ticket includes testing and documenting the backup and restore process helm chart for etcd backups used as a template installation documentation
1
15,167
11,388,329,332
IssuesEvent
2020-01-29 16:29:51
ForNeVeR/AvaloniaRider
https://api.github.com/repos/ForNeVeR/AvaloniaRider
opened
Investigate cache issues on macOS
infrastructure
Currently the caches on macOS are too big to be uploaded, but they really shouldn't be that huge (several GiB each). Need to investigate.
1.0
Investigate cache issues on macOS - Currently the caches on macOS are too big to be uploaded, but they really shouldn't be that huge (several GiB each). Need to investigate.
infrastructure
investigate cache issues on macos currently the caches on macos are too big to be uploaded but they really shouldn t be that huge several gib each need to investigate
1
7,467
6,965,675,045
IssuesEvent
2017-12-09 09:27:30
MultiMC/MultiMC5
https://api.github.com/repos/MultiMC/MultiMC5
closed
[Develop] Bundled openssl depends on MSVC redist 2013
bug infrastructure
System Information ----------------------------- MultiMC version: 0.6.0-develop-1133 Operating System: Windows 10 v1709 Build 16299.64 Summary of the issue or suggestion: ---------------------------------------------- MMC5 develop unable to get most data from web (news, connection to mojang auth servers) What should happen: ------------------------------ On checking updates: Should appear changelog or "nothing" messages On adding account: It shouldn't give out error Steps to reproduce the issue: ------------------------------------------------------------- 1. Launch MMC5 2. Check for updates 3. Get error about "github api unavailable" (it is available actually via browser) 4. Check news line in "statusbar" 5. Can see only "Loading news" 6. Try to add account 7. Get `Error creating SSL context () (99)` error Logs/Screenshots: ---------------------------- [//]: # (Please refer to https://github.com/MultiMC/MultiMC5/wiki/Log-Upload for instructions on how to attach your logs.) ![MMC5 Update window with "Loading news"](https://Moeka.is-a-good-waifu.com/8bacb6.png) ![Adding account](https://Moeka.is-a-good-waifu.com/7a021f.png) [MMC5 Logs](https://Moeka.is-a-good-waifu.com/ac6e64.log) ![Availability of mmc news rss](https://Moeka.is-a-good-waifu.com/fc9daf.png) ![Availability of github api](https://Moeka.is-a-good-waifu.com/ae0f13.png) ![Availability of MC auth servers](https://Moeka.is-a-good-waifu.com/a67bdb.png) Additional Info: --------------------------- This started happens about version 1130, but now it even doesn't even let me start any MC instance. (I have "reinstalled" MMC5 completely (removed all folders, except `instances` and reunpacked release MMC, then updated it to MMC develop)), before that i has added account and i wasnt able to join online servers (invalid session ingame error), now i can't even add an MC account Changelogs in updating is available, but updating is working
1.0
[Develop] Bundled openssl depends on MSVC redist 2013 - System Information ----------------------------- MultiMC version: 0.6.0-develop-1133 Operating System: Windows 10 v1709 Build 16299.64 Summary of the issue or suggestion: ---------------------------------------------- MMC5 develop unable to get most data from web (news, connection to mojang auth servers) What should happen: ------------------------------ On checking updates: Should appear changelog or "nothing" messages On adding account: It shouldn't give out error Steps to reproduce the issue: ------------------------------------------------------------- 1. Launch MMC5 2. Check for updates 3. Get error about "github api unavailable" (it is available actually via browser) 4. Check news line in "statusbar" 5. Can see only "Loading news" 6. Try to add account 7. Get `Error creating SSL context () (99)` error Logs/Screenshots: ---------------------------- [//]: # (Please refer to https://github.com/MultiMC/MultiMC5/wiki/Log-Upload for instructions on how to attach your logs.) ![MMC5 Update window with "Loading news"](https://Moeka.is-a-good-waifu.com/8bacb6.png) ![Adding account](https://Moeka.is-a-good-waifu.com/7a021f.png) [MMC5 Logs](https://Moeka.is-a-good-waifu.com/ac6e64.log) ![Availability of mmc news rss](https://Moeka.is-a-good-waifu.com/fc9daf.png) ![Availability of github api](https://Moeka.is-a-good-waifu.com/ae0f13.png) ![Availability of MC auth servers](https://Moeka.is-a-good-waifu.com/a67bdb.png) Additional Info: --------------------------- This started happens about version 1130, but now it even doesn't even let me start any MC instance. (I have "reinstalled" MMC5 completely (removed all folders, except `instances` and reunpacked release MMC, then updated it to MMC develop)), before that i has added account and i wasnt able to join online servers (invalid session ingame error), now i can't even add an MC account Changelogs in updating is available, but updating is working
infrastructure
bundled openssl depends on msvc redist system information multimc version develop operating system windows build summary of the issue or suggestion develop unable to get most data from web news connection to mojang auth servers what should happen on checking updates should appear changelog or nothing messages on adding account it shouldn t give out error steps to reproduce the issue launch check for updates get error about github api unavailable it is available actually via browser check news line in statusbar can see only loading news try to add account get error creating ssl context error logs screenshots please refer to for instructions on how to attach your logs additional info this started happens about version but now it even doesn t even let me start any mc instance i have reinstalled completely removed all folders except instances and reunpacked release mmc then updated it to mmc develop before that i has added account and i wasnt able to join online servers invalid session ingame error now i can t even add an mc account changelogs in updating is available but updating is working
1
7,538
7,965,740,787
IssuesEvent
2018-07-14 12:52:56
MassTransit/MassTransit
https://api.github.com/repos/MassTransit/MassTransit
closed
System.InvalidOperationException: The method 'OnMessage' or 'OnMessageAsync' has already been called.
servicebus
### Is this a bug report? Yes. Looks like a recurrence of #747 's first issue. ### Can you also reproduce the problem with the latest version? We experienced the problem in 5.1.0. We have deployed 5.1.2 and are waiting to see if the problem occurs again. ### Environment * .NET 4.7.1 * Console EXEs and ASP.NET MVC web app on IIS * Built with Visual Studio 2017 15.7.2 * Running on Azure App Service x64 * MassTransit using Azure Service Bus * 3 processes connected to bus * Each process receives some broadcast messages via per-process queues * One process receives competing-consumer messages via a shared queue ### Steps to Reproduce Wait until problem occurs. No specific trigger is known. ### Expected Behavior MassTransit remains connected to Azure Service Bus indefinitely. ### Actual Behavior An exception from a `Task` is unobserved and rethrown from the finalizer thread. Afterward, MassTransit disconnects from Azure Service Bus and receives no more messages until the process is restarted. This occurs around the same time (within 1hr 30min) for all 3 of our processes connected to the bus. ### Reproducible Demo Would love to have one myself. ### Stack Trace ``` System.AggregateException: A Task's exception(s) were not observed either by Waiting on the Task or accessing its Exception property. As a result, the unobserved exception was rethrown by the finalizer thread. System.InvalidOperationException: The method 'OnMessage' or 'OnMessageAsync' has already been called. 
at Microsoft.ServiceBus.Messaging.MessageReceiver.OnMessage (Microsoft.ServiceBus, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35) at Microsoft.ServiceBus.Messaging.QueueClient.OnMessageAsync (Microsoft.ServiceBus, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35) at MassTransit.AzureServiceBusTransport.Transport.Receiver.Start (MassTransit.AzureServiceBusTransport, Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) at MassTransit.AzureServiceBusTransport.Pipeline.MessageReceiverFilter+<GreenPipes-IFilter<MassTransit-AzureServiceBusTransport-ClientContext>-Send>d__7.MoveNext (MassTransit.AzureServiceBusTransport, Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) at GreenPipes.Agents.PipeContextSupervisor`1+<GreenPipes-IPipeContextSource<TContext>-Send>d__8.MoveNext (GreenPipes, Version=2.1.0.106, Culture=neutral, PublicKeyToken=b800c4cfcdeea87b) at GreenPipes.Agents.PipeContextSupervisor`1+<GreenPipes-IPipeContextSource<TContext>-Send>d__8.MoveNext (GreenPipes, Version=2.1.0.106, Culture=neutral, PublicKeyToken=b800c4cfcdeea87b) at GreenPipes.Agents.PipeContextSupervisor`1+<GreenPipes-IPipeContextSource<TContext>-Send>d__8.MoveNext (GreenPipes, Version=2.1.0.106, Culture=neutral, PublicKeyToken=b800c4cfcdeea87b) at MassTransit.AzureServiceBusTransport.Transport.ReceiveTransport+<>c__DisplayClass16_0+<<Receiver>b__0>d.MoveNext (MassTransit.AzureServiceBusTransport, Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) at MassTransit.AzureServiceBusTransport.Transport.ReceiveTransport+<>c__DisplayClass16_0+<<Receiver>b__0>d.MoveNext (MassTransit.AzureServiceBusTransport, Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) at MassTransit.Policies.PipeRetryExtensions+<Retry>d__1.MoveNext (MassTransit, Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) at MassTransit.Policies.PipeRetryExtensions+<Retry>d__1.MoveNext (MassTransit, 
Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) at MassTransit.AzureServiceBusTransport.Transport.ReceiveTransport+<Receiver>d__16.MoveNext (MassTransit.AzureServiceBusTransport, Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) ``` # Bus Configuration ```csharp private IBusControl CreateBusUsingAzureServiceBus() { return Bus.Factory.CreateUsingAzureServiceBus(c => { // Normalize the URI var uri = Configuration.HostUri; uri = ServiceBusEnvironment.CreateServiceUri(AzureServiceBusScheme, uri.Host, ""); // Configure connection to Azure Service Bus var host = c.Host(uri, h => { h.SharedAccessSignature(s => { s.KeyName = Configuration.Secret?.UserName; s.SharedAccessKey = Configuration.Secret?.Password; s.TokenTimeToLive = TimeSpan.FromDays(1); s.TokenScope = TokenScope.Namespace; }); }); // Disable retries by default c.UseRetry(a => a.None()); // Everything below is for message reception if (!Configuration.IsReceiveEnabled) return; // Ensure long-running consumers will not lose their lock on the message c.UseRenewLock(); // Configure the request queue (shared among all instances, persistent) c.ReceiveEndpoint(host, Configuration.QueueName, r => { ConfigureFilters(r); LoadRequestConsumers(r); }); // Configure the event queue (per-instance, auto-deleted) c.ReceiveEndpoint(host, r => { // NOTE: Though the queue will auto-delete after 5 min, the // subcriptions for it will remain, and worse, will fill up // with messages. An external maintenance script should run // periodically to clean up those go-nowhere subscriptions. // Source: https://github.com/MassTransit/MassTransit/issues/553 ConfigureFilters(r); LoadEventConsumers(r); }); // If the normal plumbing fails, MassTransit will move messages // to an error queue. We currently do not monitor the error // queue, so we just need to prevent it from filling up. 
c.ReceiveEndpoint(host, Configuration.QueueName + "_error", r => { // Log any request in the error queue r.Consumer<LoggingConsumer>(); }); }); } private static void ConfigureFilters(IPipeConfigurator<ConsumeContext> c) { c.UseFilter(new LoggingFilter <ConsumeContext>()); c.UseFilter(new ApplicationInsightsFilter<ConsumeContext>()); } protected virtual void LoadRequestConsumers(IReceiveEndpointConfigurator r) { // Messages that use competing-consumer r.LoadConsumersFrom(_context, IsRequestMessage); } protected virtual void LoadEventConsumers(IReceiveEndpointConfigurator r) { // Messages that are broadcast r.LoadConsumersFrom(_context, IsEventMessage); } ```
1.0
System.InvalidOperationException: The method 'OnMessage' or 'OnMessageAsync' has already been called. - ### Is this a bug report? Yes. Looks like a recurrence of #747 's first issue. ### Can you also reproduce the problem with the latest version? We experienced the problem in 5.1.0. We have deployed 5.1.2 and are waiting to see if the problem occurs again. ### Environment * .NET 4.7.1 * Console EXEs and ASP.NET MVC web app on IIS * Built with Visual Studio 2017 15.7.2 * Running on Azure App Service x64 * MassTransit using Azure Service Bus * 3 processes connected to bus * Each process receives some broadcast messages via per-process queues * One process receives competing-consumer messages via a shared queue ### Steps to Reproduce Wait until problem occurs. No specific trigger is known. ### Expected Behavior MassTransit remains connected to Azure Service Bus indefinitely. ### Actual Behavior An exception from a `Task` is unobserved and rethrown from the finalizer thread. Afterward, MassTransit disconnects from Azure Service Bus and receives no more messages until the process is restarted. This occurs around the same time (within 1hr 30min) for all 3 of our processes connected to the bus. ### Reproducible Demo Would love to have one myself. ### Stack Trace ``` System.AggregateException: A Task's exception(s) were not observed either by Waiting on the Task or accessing its Exception property. As a result, the unobserved exception was rethrown by the finalizer thread. System.InvalidOperationException: The method 'OnMessage' or 'OnMessageAsync' has already been called. 
at Microsoft.ServiceBus.Messaging.MessageReceiver.OnMessage (Microsoft.ServiceBus, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35) at Microsoft.ServiceBus.Messaging.QueueClient.OnMessageAsync (Microsoft.ServiceBus, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35) at MassTransit.AzureServiceBusTransport.Transport.Receiver.Start (MassTransit.AzureServiceBusTransport, Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) at MassTransit.AzureServiceBusTransport.Pipeline.MessageReceiverFilter+<GreenPipes-IFilter<MassTransit-AzureServiceBusTransport-ClientContext>-Send>d__7.MoveNext (MassTransit.AzureServiceBusTransport, Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) at GreenPipes.Agents.PipeContextSupervisor`1+<GreenPipes-IPipeContextSource<TContext>-Send>d__8.MoveNext (GreenPipes, Version=2.1.0.106, Culture=neutral, PublicKeyToken=b800c4cfcdeea87b) at GreenPipes.Agents.PipeContextSupervisor`1+<GreenPipes-IPipeContextSource<TContext>-Send>d__8.MoveNext (GreenPipes, Version=2.1.0.106, Culture=neutral, PublicKeyToken=b800c4cfcdeea87b) at GreenPipes.Agents.PipeContextSupervisor`1+<GreenPipes-IPipeContextSource<TContext>-Send>d__8.MoveNext (GreenPipes, Version=2.1.0.106, Culture=neutral, PublicKeyToken=b800c4cfcdeea87b) at MassTransit.AzureServiceBusTransport.Transport.ReceiveTransport+<>c__DisplayClass16_0+<<Receiver>b__0>d.MoveNext (MassTransit.AzureServiceBusTransport, Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) at MassTransit.AzureServiceBusTransport.Transport.ReceiveTransport+<>c__DisplayClass16_0+<<Receiver>b__0>d.MoveNext (MassTransit.AzureServiceBusTransport, Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) at MassTransit.Policies.PipeRetryExtensions+<Retry>d__1.MoveNext (MassTransit, Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) at MassTransit.Policies.PipeRetryExtensions+<Retry>d__1.MoveNext (MassTransit, 
Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) at MassTransit.AzureServiceBusTransport.Transport.ReceiveTransport+<Receiver>d__16.MoveNext (MassTransit.AzureServiceBusTransport, Version=5.1.0.1516, Culture=neutral, PublicKeyToken=b8e0e9f2f1e657fa) ``` # Bus Configuration ```csharp private IBusControl CreateBusUsingAzureServiceBus() { return Bus.Factory.CreateUsingAzureServiceBus(c => { // Normalize the URI var uri = Configuration.HostUri; uri = ServiceBusEnvironment.CreateServiceUri(AzureServiceBusScheme, uri.Host, ""); // Configure connection to Azure Service Bus var host = c.Host(uri, h => { h.SharedAccessSignature(s => { s.KeyName = Configuration.Secret?.UserName; s.SharedAccessKey = Configuration.Secret?.Password; s.TokenTimeToLive = TimeSpan.FromDays(1); s.TokenScope = TokenScope.Namespace; }); }); // Disable retries by default c.UseRetry(a => a.None()); // Everything below is for message reception if (!Configuration.IsReceiveEnabled) return; // Ensure long-running consumers will not lose their lock on the message c.UseRenewLock(); // Configure the request queue (shared among all instances, persistent) c.ReceiveEndpoint(host, Configuration.QueueName, r => { ConfigureFilters(r); LoadRequestConsumers(r); }); // Configure the event queue (per-instance, auto-deleted) c.ReceiveEndpoint(host, r => { // NOTE: Though the queue will auto-delete after 5 min, the // subcriptions for it will remain, and worse, will fill up // with messages. An external maintenance script should run // periodically to clean up those go-nowhere subscriptions. // Source: https://github.com/MassTransit/MassTransit/issues/553 ConfigureFilters(r); LoadEventConsumers(r); }); // If the normal plumbing fails, MassTransit will move messages // to an error queue. We currently do not monitor the error // queue, so we just need to prevent it from filling up. 
c.ReceiveEndpoint(host, Configuration.QueueName + "_error", r => { // Log any request in the error queue r.Consumer<LoggingConsumer>(); }); }); } private static void ConfigureFilters(IPipeConfigurator<ConsumeContext> c) { c.UseFilter(new LoggingFilter <ConsumeContext>()); c.UseFilter(new ApplicationInsightsFilter<ConsumeContext>()); } protected virtual void LoadRequestConsumers(IReceiveEndpointConfigurator r) { // Messages that use competing-consumer r.LoadConsumersFrom(_context, IsRequestMessage); } protected virtual void LoadEventConsumers(IReceiveEndpointConfigurator r) { // Messages that are broadcast r.LoadConsumersFrom(_context, IsEventMessage); } ```
non_infrastructure
system invalidoperationexception the method onmessage or onmessageasync has already been called is this a bug report yes looks like a recurrence of s first issue can you also reproduce the problem with the latest version we experienced the problem in we have deployed and are waiting to see if the problem occurs again environment net console exes and asp net mvc web app on iis built with visual studio running on azure app service masstransit using azure service bus processes connected to bus each process receives some broadcast messages via per process queues one process receives competing consumer messages via a shared queue steps to reproduce wait until problem occurs no specific trigger is known expected behavior masstransit remains connected to azure service bus indefinitely actual behavior an exception from a task is unobserved and rethrown from the finalizer thread afterward masstransit disconnects from azure service bus and receives no more messages until the process is restarted this occurs around the same time within for all of our processes connected to the bus reproducible demo would love to have one myself stack trace system aggregateexception a task s exception s were not observed either by waiting on the task or accessing its exception property as a result the unobserved exception was rethrown by the finalizer thread system invalidoperationexception the method onmessage or onmessageasync has already been called at microsoft servicebus messaging messagereceiver onmessage microsoft servicebus version culture neutral publickeytoken at microsoft servicebus messaging queueclient onmessageasync microsoft servicebus version culture neutral publickeytoken at masstransit azureservicebustransport transport receiver start masstransit azureservicebustransport version culture neutral publickeytoken at masstransit azureservicebustransport pipeline messagereceiverfilter send d movenext masstransit azureservicebustransport version culture neutral publickeytoken at 
greenpipes agents pipecontextsupervisor send d movenext greenpipes version culture neutral publickeytoken at greenpipes agents pipecontextsupervisor send d movenext greenpipes version culture neutral publickeytoken at greenpipes agents pipecontextsupervisor send d movenext greenpipes version culture neutral publickeytoken at masstransit azureservicebustransport transport receivetransport c b d movenext masstransit azureservicebustransport version culture neutral publickeytoken at masstransit azureservicebustransport transport receivetransport c b d movenext masstransit azureservicebustransport version culture neutral publickeytoken at masstransit policies piperetryextensions d movenext masstransit version culture neutral publickeytoken at masstransit policies piperetryextensions d movenext masstransit version culture neutral publickeytoken at masstransit azureservicebustransport transport receivetransport d movenext masstransit azureservicebustransport version culture neutral publickeytoken bus configuration csharp private ibuscontrol createbususingazureservicebus return bus factory createusingazureservicebus c normalize the uri var uri configuration hosturi uri servicebusenvironment createserviceuri azureservicebusscheme uri host configure connection to azure service bus var host c host uri h h sharedaccesssignature s s keyname configuration secret username s sharedaccesskey configuration secret password s tokentimetolive timespan fromdays s tokenscope tokenscope namespace disable retries by default c useretry a a none everything below is for message reception if configuration isreceiveenabled return ensure long running consumers will not lose their lock on the message c userenewlock configure the request queue shared among all instances persistent c receiveendpoint host configuration queuename r configurefilters r loadrequestconsumers r configure the event queue per instance auto deleted c receiveendpoint host r note though the queue will auto delete after min 
the subcriptions for it will remain and worse will fill up with messages an external maintenance script should run periodically to clean up those go nowhere subscriptions source configurefilters r loadeventconsumers r if the normal plumbing fails masstransit will move messages to an error queue we currently do not monitor the error queue so we just need to prevent it from filling up c receiveendpoint host configuration queuename error r log any request in the error queue r consumer private static void configurefilters ipipeconfigurator c c usefilter new loggingfilter c usefilter new applicationinsightsfilter protected virtual void loadrequestconsumers ireceiveendpointconfigurator r messages that use competing consumer r loadconsumersfrom context isrequestmessage protected virtual void loadeventconsumers ireceiveendpointconfigurator r messages that are broadcast r loadconsumersfrom context iseventmessage
0
234,313
25,826,518,241
IssuesEvent
2022-12-12 13:20:07
SocialSchools/socialschools-jobs
https://api.github.com/repos/SocialSchools/socialschools-jobs
opened
Django-1.11.29-py2.py3-none-any.whl: 2 vulnerabilities (highest severity is: 9.8)
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Django-1.11.29-py2.py3-none-any.whl</b></p></summary> <p>A high-level Python Web framework that encourages rapid development and clean, pragmatic design.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/49/49/178daa8725d29c475216259eb19e90b2aa0b8c0431af8c7e9b490ae6481d/Django-1.11.29-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/49/49/178daa8725d29c475216259eb19e90b2aa0b8c0431af8c7e9b490ae6481d/Django-1.11.29-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /tmp/ws-scm/socialschools-jobs</p> <p>Path to vulnerable library: /tmp/ws-scm/socialschools-jobs</p> <p> <p>Found in HEAD commit: <a href="https://github.com/SocialSchools/socialschools-jobs/commit/8e79e0a792c8fdd85c6860f89dbbd47c6ca8b7ce">8e79e0a792c8fdd85c6860f89dbbd47c6ca8b7ce</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (Django version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-34265](https://www.mend.io/vulnerability-database/CVE-2022-34265) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | Django-1.11.29-py2.py3-none-any.whl | Direct | Django - 3.2.14,4.0.6 | &#9989; | | [CVE-2021-44420](https://www.mend.io/vulnerability-database/CVE-2021-44420) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.3 | Django-1.11.29-py2.py3-none-any.whl | Direct | Django - 2.2.25,3.1.14,3.2.10 | &#9989; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-34265</summary> ### Vulnerable Library - 
<b>Django-1.11.29-py2.py3-none-any.whl</b></p> <p>A high-level Python Web framework that encourages rapid development and clean, pragmatic design.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/49/49/178daa8725d29c475216259eb19e90b2aa0b8c0431af8c7e9b490ae6481d/Django-1.11.29-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/49/49/178daa8725d29c475216259eb19e90b2aa0b8c0431af8c7e9b490ae6481d/Django-1.11.29-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /tmp/ws-scm/socialschools-jobs</p> <p>Path to vulnerable library: /tmp/ws-scm/socialschools-jobs</p> <p> Dependency Hierarchy: - :x: **Django-1.11.29-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/SocialSchools/socialschools-jobs/commit/8e79e0a792c8fdd85c6860f89dbbd47c6ca8b7ce">8e79e0a792c8fdd85c6860f89dbbd47c6ca8b7ce</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> An issue was discovered in Django 3.2 before 3.2.14 and 4.0 before 4.0.6. The Trunc() and Extract() database functions are subject to SQL injection if untrusted data is used as a kind/lookup_name value. Applications that constrain the lookup name and kind choice to a known safe list are unaffected. Mend Note: After conducting further research, Mend has determined that all versions of Django before version 3.2.14 and before 4.0.6 are vulnerable to CVE-2022-34265. <p>Publish Date: 2022-07-04 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-34265>CVE-2022-34265</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.djangoproject.com/weblog/2022/jul/04/security-releases/">https://www.djangoproject.com/weblog/2022/jul/04/security-releases/</a></p> <p>Release Date: 2022-07-04</p> <p>Fix Resolution: Django - 3.2.14,4.0.6</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-44420</summary> ### Vulnerable Library - <b>Django-1.11.29-py2.py3-none-any.whl</b></p> <p>A high-level Python Web framework that encourages rapid development and clean, pragmatic design.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/49/49/178daa8725d29c475216259eb19e90b2aa0b8c0431af8c7e9b490ae6481d/Django-1.11.29-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/49/49/178daa8725d29c475216259eb19e90b2aa0b8c0431af8c7e9b490ae6481d/Django-1.11.29-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /tmp/ws-scm/socialschools-jobs</p> <p>Path to vulnerable library: /tmp/ws-scm/socialschools-jobs</p> <p> Dependency Hierarchy: - :x: **Django-1.11.29-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/SocialSchools/socialschools-jobs/commit/8e79e0a792c8fdd85c6860f89dbbd47c6ca8b7ce">8e79e0a792c8fdd85c6860f89dbbd47c6ca8b7ce</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> In Django 2.2 before 2.2.25, 3.1 before 3.1.14, and 3.2 before 3.2.10, HTTP requests for URLs with trailing newlines could bypass upstream access control based on URL paths. 
<p>Publish Date: 2021-12-08 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-44420>CVE-2021-44420</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://docs.djangoproject.com/en/3.2/releases/security/">https://docs.djangoproject.com/en/3.2/releases/security/</a></p> <p>Release Date: 2021-12-08</p> <p>Fix Resolution: Django - 2.2.25,3.1.14,3.2.10</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
True
Django-1.11.29-py2.py3-none-any.whl: 2 vulnerabilities (highest severity is: 9.8) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Django-1.11.29-py2.py3-none-any.whl</b></p></summary> <p>A high-level Python Web framework that encourages rapid development and clean, pragmatic design.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/49/49/178daa8725d29c475216259eb19e90b2aa0b8c0431af8c7e9b490ae6481d/Django-1.11.29-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/49/49/178daa8725d29c475216259eb19e90b2aa0b8c0431af8c7e9b490ae6481d/Django-1.11.29-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /tmp/ws-scm/socialschools-jobs</p> <p>Path to vulnerable library: /tmp/ws-scm/socialschools-jobs</p> <p> <p>Found in HEAD commit: <a href="https://github.com/SocialSchools/socialschools-jobs/commit/8e79e0a792c8fdd85c6860f89dbbd47c6ca8b7ce">8e79e0a792c8fdd85c6860f89dbbd47c6ca8b7ce</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (Django version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-34265](https://www.mend.io/vulnerability-database/CVE-2022-34265) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | Django-1.11.29-py2.py3-none-any.whl | Direct | Django - 3.2.14,4.0.6 | &#9989; | | [CVE-2021-44420](https://www.mend.io/vulnerability-database/CVE-2021-44420) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.3 | Django-1.11.29-py2.py3-none-any.whl | Direct | Django - 2.2.25,3.1.14,3.2.10 | &#9989; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' 
width=19 height=20> CVE-2022-34265</summary> ### Vulnerable Library - <b>Django-1.11.29-py2.py3-none-any.whl</b></p> <p>A high-level Python Web framework that encourages rapid development and clean, pragmatic design.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/49/49/178daa8725d29c475216259eb19e90b2aa0b8c0431af8c7e9b490ae6481d/Django-1.11.29-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/49/49/178daa8725d29c475216259eb19e90b2aa0b8c0431af8c7e9b490ae6481d/Django-1.11.29-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /tmp/ws-scm/socialschools-jobs</p> <p>Path to vulnerable library: /tmp/ws-scm/socialschools-jobs</p> <p> Dependency Hierarchy: - :x: **Django-1.11.29-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/SocialSchools/socialschools-jobs/commit/8e79e0a792c8fdd85c6860f89dbbd47c6ca8b7ce">8e79e0a792c8fdd85c6860f89dbbd47c6ca8b7ce</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> An issue was discovered in Django 3.2 before 3.2.14 and 4.0 before 4.0.6. The Trunc() and Extract() database functions are subject to SQL injection if untrusted data is used as a kind/lookup_name value. Applications that constrain the lookup name and kind choice to a known safe list are unaffected. Mend Note: After conducting further research, Mend has determined that all versions of Django before version 3.2.14 and before 4.0.6 are vulnerable to CVE-2022-34265. 
<p>Publish Date: 2022-07-04 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-34265>CVE-2022-34265</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.djangoproject.com/weblog/2022/jul/04/security-releases/">https://www.djangoproject.com/weblog/2022/jul/04/security-releases/</a></p> <p>Release Date: 2022-07-04</p> <p>Fix Resolution: Django - 3.2.14,4.0.6</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-44420</summary> ### Vulnerable Library - <b>Django-1.11.29-py2.py3-none-any.whl</b></p> <p>A high-level Python Web framework that encourages rapid development and clean, pragmatic design.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/49/49/178daa8725d29c475216259eb19e90b2aa0b8c0431af8c7e9b490ae6481d/Django-1.11.29-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/49/49/178daa8725d29c475216259eb19e90b2aa0b8c0431af8c7e9b490ae6481d/Django-1.11.29-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /tmp/ws-scm/socialschools-jobs</p> <p>Path to vulnerable library: /tmp/ws-scm/socialschools-jobs</p> <p> Dependency Hierarchy: - :x: **Django-1.11.29-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in HEAD commit: <a 
href="https://github.com/SocialSchools/socialschools-jobs/commit/8e79e0a792c8fdd85c6860f89dbbd47c6ca8b7ce">8e79e0a792c8fdd85c6860f89dbbd47c6ca8b7ce</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> In Django 2.2 before 2.2.25, 3.1 before 3.1.14, and 3.2 before 3.2.10, HTTP requests for URLs with trailing newlines could bypass upstream access control based on URL paths. <p>Publish Date: 2021-12-08 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-44420>CVE-2021-44420</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://docs.djangoproject.com/en/3.2/releases/security/">https://docs.djangoproject.com/en/3.2/releases/security/</a></p> <p>Release Date: 2021-12-08</p> <p>Fix Resolution: Django - 2.2.25,3.1.14,3.2.10</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
non_infrastructure
django none any whl vulnerabilities highest severity is vulnerable library django none any whl a high level python web framework that encourages rapid development and clean pragmatic design library home page a href path to dependency file tmp ws scm socialschools jobs path to vulnerable library tmp ws scm socialschools jobs found in head commit a href vulnerabilities cve severity cvss dependency type fixed in django version remediation available high django none any whl direct django high django none any whl direct django details cve vulnerable library django none any whl a high level python web framework that encourages rapid development and clean pragmatic design library home page a href path to dependency file tmp ws scm socialschools jobs path to vulnerable library tmp ws scm socialschools jobs dependency hierarchy x django none any whl vulnerable library found in head commit a href found in base branch master vulnerability details an issue was discovered in django before and before the trunc and extract database functions are subject to sql injection if untrusted data is used as a kind lookup name value applications that constrain the lookup name and kind choice to a known safe list are unaffected mend note after conducting further research mend has determined that all versions of django before version and before are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution django rescue worker helmet automatic remediation is available for this issue cve vulnerable library django none any whl a high level python web framework that encourages rapid development and clean pragmatic design library home page 
a href path to dependency file tmp ws scm socialschools jobs path to vulnerable library tmp ws scm socialschools jobs dependency hierarchy x django none any whl vulnerable library found in head commit a href found in base branch master vulnerability details in django before before and before http requests for urls with trailing newlines could bypass upstream access control based on url paths publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution django rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue
0
12,720
9,935,731,671
IssuesEvent
2019-07-02 17:17:29
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
Add channel support to get_archive
area-infrastructure closed-obsolete p2-medium type-bug
tools/get_archive.py is broken, in that it has no idea about channels. I will add a quick fix, but we need to update this to support channels.
1.0
Add channel support to get_archive - tools/get_archive.py is broken, in that it has no idea about channels. I will add a quick fix, but we need to update this to support channels.
infrastructure
add channel support to get archive tools get archive py is broken in that it has no idea about channels i will add a quick fix but we need to update this to support channels
1
159,607
25,021,097,352
IssuesEvent
2022-11-04 00:47:01
sboxgame/issues
https://api.github.com/repos/sboxgame/issues
opened
TypeLibrary: Add System.Type.MakeGenericType equivalent
api design
### What it is? Currently, [Type.MakeGenericType](https://learn.microsoft.com/en-us/dotnet/api/system.type.makegenerictype?view=net-6.0) is not whitelisted and there is no `TypeLibrary` equivalent. ### What should it be? There should be a `MakeGenericType` method within `TypeDescription`s that just points to the regular [Type.MakeGenericType](https://learn.microsoft.com/en-us/dotnet/api/system.type.makegenerictype?view=net-6.0) method. My use case is for custom networking and transmitting types that contain generics will need to be re-constructed on the other side using the method mentioned.
1.0
TypeLibrary: Add System.Type.MakeGenericType equivalent - ### What it is? Currently, [Type.MakeGenericType](https://learn.microsoft.com/en-us/dotnet/api/system.type.makegenerictype?view=net-6.0) is not whitelisted and there is no `TypeLibrary` equivalent. ### What should it be? There should be a `MakeGenericType` method within `TypeDescription`s that just points to the regular [Type.MakeGenericType](https://learn.microsoft.com/en-us/dotnet/api/system.type.makegenerictype?view=net-6.0) method. My use case is for custom networking and transmitting types that contain generics will need to be re-constructed on the other side using the method mentioned.
non_infrastructure
typelibrary add system type makegenerictype equivalent what it is currently is not whitelisted and there is no typelibrary equivalent what should it be there should be a makegenerictype method within typedescription s that just points to the regular method my use case is for custom networking and transmitting types that contain generics will need to be re constructed on the other side using the method mentioned
0
4,645
3,875,579,993
IssuesEvent
2016-04-12 02:01:35
lionheart/openradar-mirror
https://api.github.com/repos/lionheart/openradar-mirror
opened
22016431: Calendar entries with uncommon characters in URL are not correctly linked
classification:ui/usability reproducible:always status:open
#### Description Summary: I have a calendar entry linking to https://primarschule-margelaecker.schule-wettingen.ch/aktuell/2015/3/29/projektwoche-"alles-mist" (for Example). On the mac, this links correctly when clicked (see attachment). On iOS, however, it seems like the text in the URL field is not linked directly but via URL autodetection, which fails. Clicking the link only opens the highlighted text as URL, not the complete URL field. Since this field is designated as a URL field, it should be just linkified without autodetection. I know I could just replace the quotes in the URL with correctly URL-encoded %22 but this won’t work for subscribed calendars. Also there is a separate issue (which I will file shortly) in which calendar (on both iOS and OS X) actually transforms the %22 inside imported and subscribed .ics files in the URL field back into a quote… Steps to Reproduce: 1. Subscribe to https://primarschule-margelaecker.schule-wettingen.ch/aktuell/calendar.ics (or the attached testcase.ics) 2. Locate the event called “Projektwoche "Alles Mist"” 3. Open the URL associated with the event. Expected Results: Safari opens on https://primarschule-margelaecker.schule-wettingen.ch/aktuell/2015/3/29/projektwoche-%22alles-mist%22 Actual Results: Safari opens on https://primarschule-margelaecker.schule-wettingen.ch/aktuell/2015/3/29/projektwoche- Version: iOS 9 beta (13A4305g) - Product Version: 9 Created: 2015-07-27T20:23:14.362320 Originated: 2015-07-27T00:00:00 Open Radar Link: http://www.openradar.me/22016431
True
22016431: Calendar entries with uncommon characters in URL are not correctly linked - #### Description Summary: I have a calendar entry linking to https://primarschule-margelaecker.schule-wettingen.ch/aktuell/2015/3/29/projektwoche-"alles-mist" (for Example). On the mac, this links correctly when clicked (see attachment). On iOS, however, it seems like the text in the URL field is not linked directly but via URL autodetection, which fails. Clicking the link only opens the highlighted text as URL, not the complete URL field. Since this field is designated as a URL field, it should be just linkified without autodetection. I know I could just replace the quotes in the URL with correctly URL-encoded %22 but this won’t work for subscribed calendars. Also there is a separate issue (which I will file shortly) in which calendar (on both iOS and OS X) actually transforms the %22 inside imported and subscribed .ics files in the URL field back into a quote… Steps to Reproduce: 1. Subscribe to https://primarschule-margelaecker.schule-wettingen.ch/aktuell/calendar.ics (or the attached testcase.ics) 2. Locate the event called “Projektwoche "Alles Mist"” 3. Open the URL associated with the event. Expected Results: Safari opens on https://primarschule-margelaecker.schule-wettingen.ch/aktuell/2015/3/29/projektwoche-%22alles-mist%22 Actual Results: Safari opens on https://primarschule-margelaecker.schule-wettingen.ch/aktuell/2015/3/29/projektwoche- Version: iOS 9 beta (13A4305g) - Product Version: 9 Created: 2015-07-27T20:23:14.362320 Originated: 2015-07-27T00:00:00 Open Radar Link: http://www.openradar.me/22016431
non_infrastructure
calendar entries with uncommon characters in url are not correctly linked description summary i have a calendar entry linking to for example on the mac this links correctly when clicked see attachment on ios however it seems like the text in the url field is not linked directly but via url autodetection which fails clicking the link only opens the highlighted text as url not the complete url field since this field is designated as a url field it should be just linkified without autodetection i know i could just replace the quotes in the url with correctly url encoded but this won’t work for subscribed calendars also there is a separate issue which i will file shortly in which calendar on both ios and os x actually transforms the inside imported and subscribed ics files in the url field back into a quote… steps to reproduce subscribe to or the attached testcase ics locate the event called “projektwoche alles mist ” open the url associated with the event expected results safari opens on actual results safari opens on version ios beta product version created originated open radar link
0
20,283
13,791,801,086
IssuesEvent
2020-10-09 12:43:38
zowe/zlux
https://api.github.com/repos/zowe/zlux
closed
No debug msg when TN3270 or VT fails to connect
Server Infrastructure bug terminal
On ukzowe3 ZPDT system, I tried to use TN3270 and VT inside the desktop. I made about 50 attempts to connect but only a few succeeded. The problem is, I see a log message when the connection succeeds, but when it fails there is no message, so I can't debug the problem. I'm looking in the SYSLOG of ZOWESVR on z/OS. I always see the connection attempt: ``` [66351] [2019-07-01 11:05:26.529 org.zowe.terminal.proxy.tn3270data INFO] - Saw Websocket request, method=GET ``` That one failed, so there is no further message. Here's one that worked ``` 19-07-02 10:13:49.314 org.zowe.terminal.proxy.tn3270data INFO] - Saw Websocket request, method=GET 19-07-02 10:13:49.329 org.zowe.terminal.proxy.tn3270data INFO] - [Host=127.0.0.1, Port=23, ClientIP=9.140.98.142] Connected. Total terminals connected: 1 19-07-02 10:13:49.331 org.zowe.terminal.proxy.tn3270data INFO] - Total TN3270 terminals connected: 1 19-07-02 10:13:49.746 _zsf.child INFO] - [Path=/zaas1/zowe/1.3.0/zlux-app-server/bin/zssServer.sh stdout]: done with system response ```
1.0
No debug msg when TN3270 or VT fails to connect - On ukzowe3 ZPDT system, I tried to use TN3270 and VT inside the desktop. I made about 50 attempts to connect but only a few succeeded. The problem is, I see a log message when the connection succeeds, but when it fails there is no message, so I can't debug the problem. I'm looking in the SYSLOG of ZOWESVR on z/OS. I always see the connection attempt: ``` [66351] [2019-07-01 11:05:26.529 org.zowe.terminal.proxy.tn3270data INFO] - Saw Websocket request, method=GET ``` That one failed, so there is no further message. Here's one that worked ``` 19-07-02 10:13:49.314 org.zowe.terminal.proxy.tn3270data INFO] - Saw Websocket request, method=GET 19-07-02 10:13:49.329 org.zowe.terminal.proxy.tn3270data INFO] - [Host=127.0.0.1, Port=23, ClientIP=9.140.98.142] Connected. Total terminals connected: 1 19-07-02 10:13:49.331 org.zowe.terminal.proxy.tn3270data INFO] - Total TN3270 terminals connected: 1 19-07-02 10:13:49.746 _zsf.child INFO] - [Path=/zaas1/zowe/1.3.0/zlux-app-server/bin/zssServer.sh stdout]: done with system response ```
infrastructure
no debug msg when or vt fails to connect on zpdt system i tried to use and vt inside the desktop i made about attempts to connect but only a few succeeded the problem is i see a log message when the connection succeeds but when it fails there is no message so i can t debug the problem i m looking in the syslog of zowesvr on z os i always see the connection attempt saw websocket request method get that one failed so there is no further message here s one that worked org zowe terminal proxy info saw websocket request method get org zowe terminal proxy info connected total terminals connected org zowe terminal proxy info total terminals connected zsf child info done with system response
1
197,699
15,688,518,024
IssuesEvent
2021-03-25 14:45:55
zammad/zammad
https://api.github.com/repos/zammad/zammad
closed
CSRF token verification failed
documentation verified
<!-- Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓 Since November 15th we handle all requests, except real bugs, at our community board. Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21 Please post: - Feature requests - Development questions - Technical questions on the board -> https://community.zammad.org ! If you think you hit a bug, please continue: - Search existing issues and the CHANGELOG.md for your issue - there might be a solution already - Make sure to use the latest version of Zammad if possible - Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it! - Please write the issue in English - Don't remove the template - otherwise we will close the issue without further comments - Ask questions about Zammad configuration and usage at our mailing list. See: https://zammad.org/participate Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted). * The upper textblock will be removed automatically when you submit your issue * --> ### Infos: * Used Zammad version: 3.2 * Installation method (source, package, ..): YUM * Operating system: CentOS 7 * Database + version: * Elasticsearch version: * Browser + version: ### Expected behavior: Upgraded to Zammad 3.2 and I get the error message "CSRF token verification failed". Cannot log in. Never before seen this error message. ### Actual behavior: Cannot log in. The Linux setup has stayed the same. ### Steps to reproduce the behavior: * Yes, I'm sure this is a bug and not a feature request or a general question.
1.0
CSRF token verification failed - <!-- Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓 Since November 15th we handle all requests, except real bugs, at our community board. Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21 Please post: - Feature requests - Development questions - Technical questions on the board -> https://community.zammad.org ! If you think you hit a bug, please continue: - Search existing issues and the CHANGELOG.md for your issue - there might be a solution already - Make sure to use the latest version of Zammad if possible - Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it! - Please write the issue in English - Don't remove the template - otherwise we will close the issue without further comments - Ask questions about Zammad configuration and usage at our mailing list. See: https://zammad.org/participate Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted). * The upper textblock will be removed automatically when you submit your issue * --> ### Infos: * Used Zammad version: 3.2 * Installation method (source, package, ..): YUM * Operating system: CentOS 7 * Database + version: * Elasticsearch version: * Browser + version: ### Expected behavior: Upgraded to Zammad 3.2 and I get the error message "CSRF token verification failed". Cannot log in. Never before seen this error message. ### Actual behavior: Cannot log in. The Linux setup has stayed the same. ### Steps to reproduce the behavior: * Yes, I'm sure this is a bug and not a feature request or a general question.
non_infrastructure
csrf token verification failed hi there thanks for filing an issue please ensure the following things before creating an issue thank you 🤓 since november we handle all requests except real bugs at our community board full explanation please post feature requests development questions technical questions on the board if you think you hit a bug please continue search existing issues and the changelog md for your issue there might be a solution already make sure to use the latest version of zammad if possible add the log production log file from your system attention make sure no confidential data is in it please write the issue in english don t remove the template otherwise we will close the issue without further comments ask questions about zammad configuration and usage at our mailing list see note we always do our best unfortunately sometimes there are too many requests and we can t handle everything at once if you want to prioritize escalate your issue you can do so by means of a support contract see the upper textblock will be removed automatically when you submit your issue infos used zammad version installation method source package yum operating system centos database version elasticsearch version browser version expected behavior upgraded to zammad and i get the error message csrf token verification failed cannot log in never before seen this error message actual behavior cannot log in the linux setup has stayed the same steps to reproduce the behavior yes i m sure this is a bug and not a feature request or a general question
0
25,071
18,074,058,344
IssuesEvent
2021-09-21 07:51:59
etcd-io/website
https://api.github.com/repos/etcd-io/website
closed
Complete file setup for proper use of cncf/docsy
infrastructure docsy e1-hours e2-days p1-high
Files should be set up as has been done for https://github.com/grpc/grpc.io. (Details to follow)
1.0
Complete file setup for proper use of cncf/docsy - Files should be set up as has been done for https://github.com/grpc/grpc.io. (Details to follow)
infrastructure
complete file setup for proper use of cncf docsy files should be set up as has been done for details to follow
1
4,856
5,302,723,622
IssuesEvent
2017-02-10 13:54:34
camptocamp/ngeo
https://api.github.com/repos/camptocamp/ngeo
closed
Build fails with Node 5.0
Backlog Infrastructure
``` $ nvm use 5.0 $ make dist ... mkdir -p .build/ touch .build/node_modules.timestamp mkdir -p dist/ node buildtools/build.js buildtools/ngeo.json dist/ngeo.js module.js:339 throw err; ^ Error: Cannot find module 'openlayers/node_modules/closure-util' at Function.Module._resolveFilename (module.js:337:15) at Function.Module._load (module.js:287:25) at Module.require (module.js:366:17) at require (module.js:385:17) at Object.<anonymous> (/home/tsauerwein/projects/tests/ngeo/buildtools/build.js:7:15) at Module._compile (module.js:425:26) at Object.Module._extensions..js (module.js:432:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:311:12) at Function.Module.runMain (module.js:457:10) make: *** [dist/ngeo.js] Error 1 ``` The problem is that with Node 5 (and npm 3.3.6) `closure-util` is no longer installed in `openlayers/node_modules/closure-util` but in `node_modules/closure-util`.
1.0
Build fails with Node 5.0 - ``` $ nvm use 5.0 $ make dist ... mkdir -p .build/ touch .build/node_modules.timestamp mkdir -p dist/ node buildtools/build.js buildtools/ngeo.json dist/ngeo.js module.js:339 throw err; ^ Error: Cannot find module 'openlayers/node_modules/closure-util' at Function.Module._resolveFilename (module.js:337:15) at Function.Module._load (module.js:287:25) at Module.require (module.js:366:17) at require (module.js:385:17) at Object.<anonymous> (/home/tsauerwein/projects/tests/ngeo/buildtools/build.js:7:15) at Module._compile (module.js:425:26) at Object.Module._extensions..js (module.js:432:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:311:12) at Function.Module.runMain (module.js:457:10) make: *** [dist/ngeo.js] Error 1 ``` The problem is that with Node 5 (and npm 3.3.6) `closure-util` is no longer installed in `openlayers/node_modules/closure-util` but in `node_modules/closure-util`.
infrastructure
build fails with node nvm use make dist mkdir p build touch build node modules timestamp mkdir p dist node buildtools build js buildtools ngeo json dist ngeo js module js throw err error cannot find module openlayers node modules closure util at function module resolvefilename module js at function module load module js at module require module js at require module js at object home tsauerwein projects tests ngeo buildtools build js at module compile module js at object module extensions js module js at module load module js at function module load module js at function module runmain module js make error the problem is that with node and npm closure util is no longer installed in openlayers node modules closure util but in node modules closure util
1
714,026
24,548,176,062
IssuesEvent
2022-10-12 10:25:03
bryntum/support
https://api.github.com/repos/bryntum/support
closed
[REACT] Bryntum widget wrappers don't accept all component properties in React 18
bug resolved react high-priority forum
[Forum post](https://www.bryntum.com/forum/viewtopic.php?f=52&t=21349&p=105640#p105640) [Forum post 2](https://www.bryntum.com/forum/viewtopic.php?f=54&t=21713&p=107354#p107354) [Forum post 3](https://www.bryntum.com/forum/viewtopic.php?f=44&t=21953&p=108594#p108594) [Forum post 4](https://www.bryntum.com/forum/viewtopic.php?f=44&t=22584) To reproduce use React 18 and wrap your app into `<React.StrictMode>` ``` <React.StrictMode> <App /> </React.StrictMode> ``` Update - Forum post 2 The React.StrictMode also ignores some custom configurations when enabled. Related issue https://github.com/bryntum/support/issues/5390
1.0
[REACT] Bryntum widget wrappers don't accept all component properties in React 18 - [Forum post](https://www.bryntum.com/forum/viewtopic.php?f=52&t=21349&p=105640#p105640) [Forum post 2](https://www.bryntum.com/forum/viewtopic.php?f=54&t=21713&p=107354#p107354) [Forum post 3](https://www.bryntum.com/forum/viewtopic.php?f=44&t=21953&p=108594#p108594) [Forum post 4](https://www.bryntum.com/forum/viewtopic.php?f=44&t=22584) To reproduce use React 18 and wrap your app into `<React.StrictMode>` ``` <React.StrictMode> <App /> </React.StrictMode> ``` Update - Forum post 2 The React.StrictMode also ignores some custom configurations when enabled. Related issue https://github.com/bryntum/support/issues/5390
non_infrastructure
bryntum widget wrappers don t accept all component properties in react to reproduce use react and wrap your app into update forum post the react strictmode also ignores some custom configurations when enabled related issue
0
22,931
15,685,493,123
IssuesEvent
2021-03-25 11:17:43
byteleaf/companyon
https://api.github.com/repos/byteleaf/companyon
closed
Allow for external emails to login
Infrastructure
Currently Auth0 is only set up with Google auth, and Google auth only allows internal emails unless we go through some app verification process. To allow external emails we can consider either 1. going through the verification process (no idea how much effort this is) or 2. setting up some other verification method with Auth0 (might be the easiest option)
1.0
Allow for external emails to login - Currently Auth0 is only set up with Google auth, and Google auth only allows internal emails unless we go through some app verification process. To allow external emails we can consider either 1. going through the verification process (no idea how much effort this is) or 2. setting up some other verification method with Auth0 (might be the easiest option)
infrastructure
allow for external emails to login currently is only set up with google auth and google auth only allows internal emails unless we go through some app verification process to allow external emails we can consider either going through the verification process no idea how much effort this is or setting up some other verification method with might be the easiest option
1
17,651
12,495,136,031
IssuesEvent
2020-06-01 12:38:25
Budibase/budibase
https://api.github.com/repos/Budibase/budibase
closed
Hosting portal - Infrastructure and deployment
infrastructure
- Sapper app wrapped up in docker container - Deployed to ECS - The building, tagging and pushing of the docker image should be automated when someone pushes to master on the hosting platform repository.
1.0
Hosting portal - Infrastructure and deployment - - Sapper app wrapped up in docker container - Deployed to ECS - The building, tagging and pushing of the docker image should be automated when someone pushes to master on the hosting platform repository.
infrastructure
hosting portal infrastructure and deployment sapper app wrapped up in docker container deployed to ecs the building tagging and pushing of the docker image should be automated when someone pushes to master on the hosting platform repository
1
24,480
17,296,545,939
IssuesEvent
2021-07-25 21:06:45
APSIMInitiative/ApsimX
https://api.github.com/repos/APSIMInitiative/ApsimX
closed
Formatting of upgrade form in gtk#3
bug interface/infrastructure
The top area displaying the available upgrades is not very tall and seems to be clipped to the Upgrade/ViewDetail buttons on right such that you can only see the top entry and a very small scrollbar on right with large white area at base of form. Can the top window area containing the list of upgrades be taller to show at least a few of the latest upgrades or have a window resize handle?
1.0
Formatting of upgrade form in gtk#3 - The top area displaying the available upgrades is not very tall and seems to be clipped to the Upgrade/ViewDetail buttons on right such that you can only see the top entry and a very small scrollbar on right with large white area at base of form. Can the top window area containing the list of upgrades be taller to show at least a few of the latest upgrades or have a window resize handle?
infrastructure
formatting of upgrade form in gtk the top area displaying the available upgrades is not very tall and seems to be clipped to the upgrade viewdetail buttons on right such that you can only see the top entry and a very small scrollbar on right with large white area at base of form can the top window area containing the list of upgrades be taller to show at least a few of the latest upgrades or have a window resize handle
1
48,002
19,900,813,304
IssuesEvent
2022-01-25 07:39:50
vmware/singleton
https://api.github.com/repos/vmware/singleton
closed
[BUG] [Service] UT failed for vip-manager-i18n
kind/bug area/java-service priority/high
**Describe the bug** When compiling the Singleton service, a UT failed as below: ``` com.vmware.vip.i18n.api.v1.translation.TranslationSyncAPITest > testUpdateTranslation FAILED java.lang.NullPointerException at TranslationSyncAPITest.java:68 ``` **Expected behavior** All UTs should be successful.
1.0
[BUG] [Service] UT failed for vip-manager-i18n - **Describe the bug** When compiling the Singleton service, a UT failed as below: ``` com.vmware.vip.i18n.api.v1.translation.TranslationSyncAPITest > testUpdateTranslation FAILED java.lang.NullPointerException at TranslationSyncAPITest.java:68 ``` **Expected behavior** All UTs should be successful.
non_infrastructure
ut failed for vip manager describe the bug when compiling the singleton service a ut failed as below com vmware vip api translation translationsyncapitest testupdatetranslation failed java lang nullpointerexception at translationsyncapitest java expected behavior all uts should be successful
0
65,073
16,100,813,933
IssuesEvent
2021-04-27 09:03:14
Kuari/Blog
https://api.github.com/repos/Kuari/Blog
opened
electron-builder pitfalls series---frosted-glass window effect on mac
Electron-builder
## Introduction I've always thought the frosted-glass style looks great, and to get it in `electron` I originally expected to write the styles myself. Before starting development I looked around to see whether someone had already implemented it, and I did find [a repo by a community expert](https://github.com/arkenthera/electron-vibrancy) sharing a vibrancy component, but its README also mentions the official repo's [PR for the mac vibrancy effect](https://github.com/electron/electron/pull/7898). I then checked the official docs, and the relevant options already exist, which is great! But why does the title say "on mac"? Because these options only work on macOS. (This worker sheds a tear...) ## Official docs ### Docs URL [https://www.electronjs.org/docs/api/browser-window](https://www.electronjs.org/docs/api/browser-window) ### Relevant options - `vibrancy` String (optional) - Whether the window should use the vibrancy dynamic effect; macOS only. Can be `appearance-based`, `light`, `dark`, `titlebar`, `selection`, `menu`, `popover`, `sidebar`, `medium-light`, `ultra-dark`, `header`, `sheet`, `window`, `hud`, `fullscreen-ui`, `tooltip`, `content`, `under-window`, or `under-page`. Please note that using `frame: false` in combination with a vibrancy value requires that you use a non-default `titleBarStyle` as well. Also note that `appearance-based`, `light`, `dark`, `medium-light`, and `ultra-dark` have been deprecated and will be removed in an upcoming version of macOS. - `visualEffectState` String (optional) - Specify how the material appearance should reflect window activity state on macOS. Must be used with the `vibrancy` property. Possible values: - `followWindow` - The backdrop should automatically appear active when the window is active, and inactive when the window is not. This is the default. - `active` - The backdrop should always appear active. - `inactive` - The backdrop should always appear inactive. ## Implementation With official support in place, using it is very easy. ```javascript // background.js let win = new BrowserWindow({ width: 800, height: 600, vibrancy: 'dark', // 'light', 'medium-light' etc visualEffectState: "active" // without this option, the background turns white when the mouse leaves the app }) ``` It's that simple! If you're interested, feel free to look at [this project of mine](https://github.com/Kuari/QingKe), which uses the frosted-glass style.
1.0
electron-builder pitfalls series---frosted-glass window effect on mac - ## Introduction I've always thought the frosted-glass style looks great, and to get it in `electron` I originally expected to write the styles myself. Before starting development I looked around to see whether someone had already implemented it, and I did find [a repo by a community expert](https://github.com/arkenthera/electron-vibrancy) sharing a vibrancy component, but its README also mentions the official repo's [PR for the mac vibrancy effect](https://github.com/electron/electron/pull/7898). I then checked the official docs, and the relevant options already exist, which is great! But why does the title say "on mac"? Because these options only work on macOS. (This worker sheds a tear...) ## Official docs ### Docs URL [https://www.electronjs.org/docs/api/browser-window](https://www.electronjs.org/docs/api/browser-window) ### Relevant options - `vibrancy` String (optional) - Whether the window should use the vibrancy dynamic effect; macOS only. Can be `appearance-based`, `light`, `dark`, `titlebar`, `selection`, `menu`, `popover`, `sidebar`, `medium-light`, `ultra-dark`, `header`, `sheet`, `window`, `hud`, `fullscreen-ui`, `tooltip`, `content`, `under-window`, or `under-page`. Please note that using `frame: false` in combination with a vibrancy value requires that you use a non-default `titleBarStyle` as well. Also note that `appearance-based`, `light`, `dark`, `medium-light`, and `ultra-dark` have been deprecated and will be removed in an upcoming version of macOS. - `visualEffectState` String (optional) - Specify how the material appearance should reflect window activity state on macOS. Must be used with the `vibrancy` property. Possible values: - `followWindow` - The backdrop should automatically appear active when the window is active, and inactive when the window is not. This is the default. - `active` - The backdrop should always appear active. - `inactive` - The backdrop should always appear inactive. ## Implementation With official support in place, using it is very easy. ```javascript // background.js let win = new BrowserWindow({ width: 800, height: 600, vibrancy: 'dark', // 'light', 'medium-light' etc visualEffectState: "active" // without this option, the background turns white when the mouse leaves the app }) ``` It's that simple! If you're interested, feel free to look at [this project of mine](https://github.com/Kuari/QingKe), which uses the frosted-glass style.
non_infrastructure
electron builder pitfalls series frosted glass window effect on mac introduction i ve always thought the frosted glass style looks great and to get it in electron i originally expected to write the styles myself before starting development i looked around to see whether someone had already implemented it and i did find a repo by a community expert sharing a vibrancy component but its readme also mentions the official repo s pr for the mac vibrancy effect i then checked the official docs and the relevant options already exist which is great but why does the title say on mac because these options only work on macos official docs docs url relevant options vibrancy string optional whether the window should use the vibrancy dynamic effect macos only can be appearance based light dark titlebar selection menu popover sidebar medium light ultra dark header sheet window hud fullscreen ui tooltip content under window or under page please note that using frame false in combination with a vibrancy value requires that you use a non default titlebarstyle as well also note that appearance based light dark medium light and ultra dark have been deprecated and will be removed in an upcoming version of macos visualeffectstate string optional specify how the material appearance should reflect window activity state on macos must be used with the vibrancy property possible values followwindow the backdrop should automatically appear active when the window is active and inactive when the window is not this is the default active the backdrop should always appear active inactive the backdrop should always appear inactive implementation with official support in place using it is very easy javascript background js let win new browserwindow width height vibrancy dark light medium light etc visualeffectstate active without this option the background turns white when the mouse leaves the app it s that simple if you re interested feel free to look at this project of mine which uses the frosted glass style
0
8,184
7,273,265,411
IssuesEvent
2018-02-21 03:50:30
APSIMInitiative/ApsimX
https://api.github.com/repos/APSIMInitiative/ApsimX
closed
Add checkpointing into the GUI
interface/infrastructure new feature
The ability to save input files and results to a named snapshot in the .db file would be really useful. Also, being able to graph the current results and a checkpoint's results would be good. Being able to delete checkpoints and to revert all input files and results to a previously saved checkpoint would also be needed.
1.0
Add checkpointing into the GUI - The ability to save input files and results to a named snapshot in the .db file would be really useful. Also, being able to graph the current results and a checkpoint's results would be good. Being able to delete checkpoints and to revert all input files and results to a previously saved checkpoint would also be needed.
infrastructure
add checkpointing into the gui the ability to save input files and results to a named snapshot in the db file would be really useful also being able to graph the current results and a checkpoint s results would be good being able to delete checkpoints and to revert all input files and results to a previously saved checkpoint would also be needed
1
555,082
16,447,021,879
IssuesEvent
2021-05-20 20:55:19
ansible/awx
https://api.github.com/repos/ansible/awx
closed
Updated Locators Needed: Teams Access Tab- Add Teams Modal
component:ui priority:medium qe:blocking state:needs_devel type:bug
##### ISSUE TYPE - Bug Report ##### SUMMARY When reviewing coverage and adding stubs for the Teams Access tab, and checking to see if I would have the locators I need, I notice that the Add User/Team modal is missing some critical locators for me to be able to add coverage later. Please add these locators, thank you! ##### ENVIRONMENT * AWX version: Devel 3/3/21
1.0
Updated Locators Needed: Teams Access Tab- Add Teams Modal - ##### ISSUE TYPE - Bug Report ##### SUMMARY When reviewing coverage and adding stubs for the Teams Access tab, and checking to see if I would have the locators I need, I notice that the Add User/Team modal is missing some critical locators for me to be able to add coverage later. Please add these locators, thank you! ##### ENVIRONMENT * AWX version: Devel 3/3/21
non_infrastructure
updated locators needed teams access tab add teams modal issue type bug report summary when reviewing coverage and adding stubs for the teams access tab and checking to see if i would have the locators i need i notice that the add user team modal is missing some critical locators for me to be able to add coverage later please add these locators thank you environment awx version devel
0
35,398
31,166,471,796
IssuesEvent
2023-08-16 20:08:47
cal-itp/benefits
https://api.github.com/repos/cal-itp/benefits
closed
Clean up unused KeyVault secrets for payment processor
infrastructure
We moved to defining payment processor secrets per-agency to be able to swap between environments more easily. Let's clean up the unused "generic" payment processor secrets in KeyVault. - [x] `dev` - [x] `test` - [x] `prod`
1.0
Clean up unused KeyVault secrets for payment processor - We moved to defining payment processor secrets per-agency to be able to swap between environments more easily. Let's clean up the unused "generic" payment processor secrets in KeyVault. - [x] `dev` - [x] `test` - [x] `prod`
infrastructure
clean up unused keyvault secrets for payment processor we moved to defining payment processor secrets per agency to be able to swap between environments more easily let s clean up the unused generic payment processor secrets in keyvault dev test prod
1
28,635
23,408,423,571
IssuesEvent
2022-08-12 14:57:41
GCTC-NTGC/gc-digital-talent
https://api.github.com/repos/GCTC-NTGC/gc-digital-talent
closed
Errors during build/deploy on Azure should notify devs
infrastructure deployment
First, errors during build scripts should cause the task on Azure to fail. This can be accomplished by including `set -e` in the build script. Ideally, a failure during the build process could notify our slack bot. The command that runs after the artifact is deployed (including artisan migrate) should also cause some sort of notification if it fails.
1.0
Errors during build/deploy on Azure should notify devs - First, errors during build scripts should cause the task on Azure to fail. This can be accomplished by including `set -e` in the build script. Ideally, a failure during the build process could notify our slack bot. The command that runs after the artifact is deployed (including artisan migrate) should also cause some sort of notification if it fails.
infrastructure
errors during build deploy on azure should notify devs first errors during build scripts should cause the task on azure to fail this can be accomplished by including set e in the build script ideally a failure during the build process could notify our slack bot the command that runs after the artifact is deployed including artisan migrate should also cause some sort of notification if it fails
1
6,820
2,862,398,084
IssuesEvent
2015-06-04 04:04:47
red/red
https://api.github.com/repos/red/red
closed
GET followed by a path is not compiled correctly
Red status.built status.tested type.bug
``` test: func [input [block!] /local exp-res reason] [ exp-res: get input/expect ] test ["" expect true] ``` Once compiled and run, it produces the following error: ``` *** Script error: exp-res has no value *** Where: set *** Stack: test set set ```
1.0
GET followed by a path is not compiled correctly - ``` test: func [input [block!] /local exp-res reason] [ exp-res: get input/expect ] test ["" expect true] ``` Once compiled and run, it produces the following error: ``` *** Script error: exp-res has no value *** Where: set *** Stack: test set set ```
non_infrastructure
get followed by a path is not compiled correctly test func local exp res reason exp res get input expect test once compiled and run it produces the following error script error exp res has no value where set stack test set set
0
326,839
9,961,592,640
IssuesEvent
2019-07-07 06:31:13
dhis2/maintenance-app
https://api.github.com/repos/dhis2/maintenance-app
closed
Organisation unit management - Add location name in search text
enhancement priority:medium stale wontfix
I suggest that the name of the selected node is added to the search field at the top of the pane, for example "Search by name in Kambia". That would make it easier for the user to understand the difference between the two search functions.
1.0
Organisation unit management - Add location name in search text - I suggest that the name of the selected node is added to the search field at the top of the pane, for example "Search by name in Kambia". That would make it easier for the user to understand the difference between the two search functions.
non_infrastructure
organisation unit management add location name in search text i suggest that the name of the selected node is added to the search field at the top of the pane for example search by name in kambia that would make it easier for the user to understand the difference between the two search functions
0
256,304
8,127,332,765
IssuesEvent
2018-08-17 07:40:12
aowen87/BAR
https://api.github.com/repos/aowen87/BAR
closed
Deliberate misuse of SPH operator crashes engine.
Bug Likelihood: 3 - Occasional Priority: Normal Severity: 4 - Crash / Wrong Results
1. Open multi_rect3d.silo
2. Add a Pseudocolor of d
3. Add SPH operator (yes, it's already a structured dataset)
4. Draw plots

The engine crashes.

Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x0000000000000000
0x00000001050cd7d0 in vtkDataArrayTemplate<double>::GetTupleValue ()
(gdb) where
#0  0x00000001050cd7d0 in vtkDataArrayTemplate<double>::GetTupleValue ()
#1  0x00000001050cfd28 in vtkDataArrayTemplate<double>::InsertTuple ()
#2  0x0000000104c897bd in vtkDataSetAttributes::CopyData ()
#3  0x0000000102bad117 in vtkRectilinearGridFacelistFilter_ProcessFaces<vtkDirectAccessor<double> > (nX=10, nY=10, nZ=10, outPointData=0x118994160, inPointData=0x118981d10, x=@0x7fff5fbf9238, y=@0x7fff5fbf9210, z=@0x7fff5fbf91e8, p=@0x7fff5fbf91c0) at vtkRectilinearGridFacelistFilter.C:199
#4  0x0000000102ba75c9 in vtkRectilinearGridFacelistFilter::RequestData (this=0x11898ed80, unnamed_arg=0x1189939f0, inputVector=0x1189913f0, outputVector=0x118991030) at vtkRectilinearGridFacelistFilter.C:443
#5  0x0000000104bd028d in vtkExecutive::CallAlgorithm ()
#6  0x0000000104bcc2ea in vtkDemandDrivenPipeline::ExecuteData ()
#7  0x0000000104bca841 in vtkCompositeDataPipeline::ExecuteData ()
#8  0x0000000104bce88e in vtkDemandDrivenPipeline::ProcessRequest ()
#9  0x0000000104be78f8 in vtkStreamingDemandDrivenPipeline::ProcessRequest ()
#10 0x0000000104bcdfbc in vtkDemandDrivenPipeline::UpdateData ()
#11 0x0000000104be60d7 in vtkStreamingDemandDrivenPipeline::Update ()
#12 0x0000000101b80dc3 in avtFacelistFilter::Take3DFaces (this=0x10a1d3c70, in_ds=0x11895f2d0, domain=0, label=@0x7fff5fbf9cf0, forceFaceConsolidation=false, mustCreatePolyData=false, info=@0x118984da8, fl=0x0) at avtFacelistFilter.C:561
#13 0x0000000101b82e9a in avtFacelistFilter::FindFaces (this=0x10a1d3c70, in_dr=0x118984d40, info=@0x118984da8, create3DCellNumbers=false, forceFaceConsolidation=false, createEdgeListFor2DDatasets=false, mustCreatePolyData=false, fl=0x0) at avtFacelistFilter.C:409
#14 0x0000000101b8365d in avtFacelistFilter::ExecuteDataTree (this=0x10a1d3c70, in_dr=0x118984d40) at avtFacelistFilter.C:299
#15 0x00000001026dfd94 in avtSIMODataTreeIterator::ExecuteDataTreeOnThread (cbdata=0x10a1d2d90) at avtSIMODataTreeIterator.C:268
#16 0x00000001026e01e8 in avtSIMODataTreeIterator::Execute (this=0x10a1d3c70, inDT=@0x7fff5fbfa138, outDT=@0x7fff5fbfa110) at avtSIMODataTreeIterator.C:357
#17 0x00000001026e0d63 in avtSIMODataTreeIterator::Execute (this=0x10a1d3c70) at avtSIMODataTreeIterator.C:160
#18 0x00000001026e0c66 in virtual thunk to avtSIMODataTreeIterator::Execute() () at avtSIMODataTreeIterator.C:206
#19 0x00000001026c4b93 in avtFilter::Update (this=0x10a1d3d00, contract=@0x7fff5fbfa648) at avtFilter.C:292
#20 0x00000001026c43a1 in virtual thunk to avtFilter::Update(ref_ptr<avtContract>) () at avtFilter.C:149
#21 0x00000001025d8313 in avtDataObject::Update (this=0x10a1d3d70, contract=@0x7fff5fbfa6b0) at avtDataObject.C:131
#22 0x00000001027023b5 in avtDataObjectSink::UpdateInput (this=0x10a1d3048, spec=@0x7fff5fbfa8f0) at avtDataObjectSink.C:157
#23 0x00000001026c4892 in avtFilter::Update (this=0x10a1d3008, contract=@0x7fff5fbfaee0) at avtFilter.C:258
#24 0x0000000101b8ec22 in avtGhostZoneAndFacelistFilter::Execute (this=0x10a1d2900) at avtGhostZoneAndFacelistFilter.C:376
#25 0x0000000101b8d5e6 in virtual thunk to avtGhostZoneAndFacelistFilter::Execute() () at avtGhostZoneAndFacelistFilter.C:439
#26 0x00000001026c4b93 in avtFilter::Update (this=0x10a1d2990, contract=@0x7fff5fbfb678) at avtFilter.C:292
#27 0x00000001026c43a1 in virtual thunk to avtFilter::Update(ref_ptr<avtContract>) () at avtFilter.C:149
#28 0x00000001025d8313 in avtDataObject::Update (this=0x10a1d2a00, contract=@0x7fff5fbfb6e0) at avtDataObject.C:131
#29 0x00000001027023b5 in avtDataObjectSink::UpdateInput (this=0x10a1d2088, spec=@0x7fff5fbfb920) at avtDataObjectSink.C:157
#30 0x00000001026c4892 in avtFilter::Update (this=0x10a1d2048, contract=@0x7fff5fbfbb98) at avtFilter.C:258
#31 0x00000001026c43a1 in virtual thunk to avtFilter::Update(ref_ptr<avtContract>) () at avtFilter.C:149
#32 0x00000001025d8313 in avtDataObject::Update (this=0x10a1d2550, contract=@0x7fff5fbfbc00) at avtDataObject.C:131
#33 0x00000001027023b5 in avtDataObjectSink::UpdateInput (this=0x10a1d5700, spec=@0x7fff5fbfbe40) at avtDataObjectSink.C:157
#34 0x00000001026c4892 in avtFilter::Update (this=0x10a1d56c0, contract=@0x7fff5fbfc0b8) at avtFilter.C:258
#35 0x00000001026c43a1 in virtual thunk to avtFilter::Update(ref_ptr<avtContract>) () at avtFilter.C:149
#36 0x00000001025d8313 in avtDataObject::Update (this=0x10a1d5730, contract=@0x7fff5fbfc120) at avtDataObject.C:131
#37 0x00000001027023b5 in avtDataObjectSink::UpdateInput (this=0x10a1d43a8, spec=@0x7fff5fbfc360) at avtDataObjectSink.C:157
#38 0x00000001026c4892 in avtFilter::Update (this=0x10a1d4368, contract=@0x7fff5fbfc5d8) at avtFilter.C:258
#39 0x00000001026c43a1 in virtual thunk to avtFilter::Update(ref_ptr<avtContract>) () at avtFilter.C:149
#40 0x00000001025d8313 in avtDataObject::Update (this=0x10a1d43e0, contract=@0x7fff5fbfc640) at avtDataObject.C:131
#41 0x00000001027023b5 in avtDataObjectSink::UpdateInput (this=0x10a1d4a10, spec=@0x7fff5fbfc880) at avtDataObjectSink.C:157
#42 0x00000001026c4892 in avtFilter::Update (this=0x10a1d49d0, contract=@0x7fff5fbfcaf8) at avtFilter.C:258
#43 0x00000001026c43a1 in virtual thunk to avtFilter::Update(ref_ptr<avtContract>) () at avtFilter.C:149
#44 0x00000001025d8313 in avtDataObject::Update (this=0x10a1d4a40, contract=@0x7fff5fbfd208) at avtDataObject.C:131
#45 0x0000000102718357 in avtTerminatingSink::Execute (this=0x118980150, contract=@0x7fff5fbfd718) at avtTerminatingSink.C:208
#46 0x0000000101769985 in avtPlot::Execute (this=0x10a1d1cd0, input=@0x7fff5fbfd938, contract=@0x7fff5fbfd928, atts=0x10a800028) at avtPlot.C:624
#47 0x0000000100035696 in DataNetwork::GetWriter (this=0x10a1a7220, dob=@0x7fff5fbfdbb8, contract=@0x7fff5fbfdba8, atts=0x10a800028) at DataNetwork.C:247
#48 0x00000001000cbfc8 in NetworkManager::GetOutput (this=0x1055636b0, respondWithNullData=false, calledForRender=false, cellCountMultiplier=0x7fff5fbfe260) at NetworkManager.C:2565
#49 0x000000010008796a in EngineRPCExecutor<ExecuteRPC>::Execute (this=0x10552bbb0, rpc=0x10583bd38) at Executors.h:1005
#50 0x00000001000993c4 in EngineRPCExecutor<ExecuteRPC>::Update (this=0x10552bbb0, s=0x10583bd68) at EngineRPCExecutor.h:67
#51 0x0000000104149872 in Subject::Notify (this=0x10583bd68) at Subject.C:193
#52 0x0000000103e48d01 in AttributeSubject::Notify (this=0x10583bd38) at AttributeSubject.C:99
#53 0x00000001041cf884 in Xfer::Process (this=0x1055218a0) at Xfer.C:416
#54 0x0000000100075d27 in Engine::ProcessInput (this=0x105521770) at Engine.C:1881
#55 0x0000000100079ef0 in Engine::EventLoop (this=0x105521770) at Engine.C:1825
#56 0x000000010001497d in EngineMain (argc=4, argv=0x7fff5fbfe738) at main.C:331
#57 0x0000000100014b50 in main (argc=12, argv=0x7fff5fbfe738) at main.C:394
(gdb)

-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all information was able to be captured in the transition. Below is a complete record of the original redmine ticket.

Ticket number: 2534
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: Deliberate misuse of SPH operator crashes engine.
Assigned to: Kevin Griffin
Category:
Target version: 2.10.2
Author: Brad Whitlock
Start: 02/18/2016
Due date:
% Done: 100
Estimated time:
Created: 02/18/2016 01:35 pm
Updated: 03/24/2016 09:48 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.10.0
Impact:
Expected Use:
OS: All
Support Group: Any

Description:
1. Open multi_rect3d.silo
2. Add a Pseudocolor of d
3. Add SPH operator (yes, it's already a structured dataset)
4. Draw plots

The engine crashes.

Comments:
Hello: I've fixed Bug #2534 (Deliberate misuse of SPH operator crashes engine).
2.10RC:
Sending operators/SPHResample/avtSPHResampleFilter.C
Sending resources/help/en_US/relnotes2.10.2.html
Transmitting file data ..
Committed revision 28312.
Trunk:
Sending operators/SPHResample/avtSPHResampleFilter.C
Sending resources/help/en_US/relnotes2.10.2.html
Transmitting file data ..
Committed revision 28314.
-Kevin
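The ticket does not include the actual diff to avtSPHResampleFilter.C, but the crash class is clear from the backtrace: a filter that only makes sense for point (particle) meshes was handed structured input and dereferenced data that was never set up. Below is a minimal, self-contained C++ sketch of the kind of input-validation guard that turns such misuse into a clean error instead of a segfault. All names here (the `MeshType` enum and `CanApplySPHResample` function) are illustrative assumptions, not VisIt's real API.

```cpp
#include <string>

// Illustrative mesh-type tags standing in for what an AVT filter can
// query from its input's data attributes (hypothetical, not VisIt's API).
enum class MeshType { Point, Rectilinear, Curvilinear, Unstructured };

// Defensive guard: SPH resampling is only meaningful for point meshes,
// so any structured input yields an error message rather than a crash.
bool CanApplySPHResample(MeshType t, std::string &err)
{
    if (t != MeshType::Point)
    {
        err = "The SPH Resample operator requires a point mesh.";
        return false;
    }
    return true;
}
```

A filter's Execute method would call a check like this first and raise the error through the pipeline's normal exception path, which is how the engine reports "wrong input" problems back to the viewer instead of dying with EXC_BAD_ACCESS.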
subject notify this at subject c in attributesubject notify this at attributesubject c in xfer process this at xfer c in engine processinput this at engine c in engine eventloop this at engine c in enginemain argc argv at main c in main argc argv at main c gdb comments hello i’ve fixed bug deliberate misuse of sph operator crashes engine sending operators sphresample avtsphresamplefilter csending resources help en us htmltransmitting file data committed revision trunk sending operators sphresample avtsphresamplefilter csending resources help en us htmltransmitting file data committed revision kevin
0
444,923
31,156,600,298
IssuesEvent
2023-08-16 13:30:20
dotnet/sdk-container-builds
https://api.github.com/repos/dotnet/sdk-container-builds
closed
Document CMD/Entrypoint behavior in detail
documentation
https://github.com/dotnet/sdk/pull/33037 is going to change the `ContainerEntrypoint/ContainerEntrypointArgs` structures. Once that merges we should update the matching sections in the docs to describe what happens and when. We will also want to link to Docker's documentation for CMD vs Entrypoint and when each should be used.
1.0
Document CMD/Entrypoint behavior in detail - https://github.com/dotnet/sdk/pull/33037 is going to change the `ContainerEntrypoint/ContainerEntrypointArgs` structures. Once that merges we should update the matching sections in the docs to describe what happens and when. We will also want to link to Docker's documentation for CMD vs Entrypoint and when each should be used.
non_infrastructure
document cmd entrypoint behavior in detail is going to change the containerentrypoint containerentrypointargs structures once that merges we should update the matching sections in the docs to describe what happens and when we will also want to link to docker s documentation for cmd vs entrypoint and when each should be used
0
763,625
26,765,805,200
IssuesEvent
2023-01-31 10:32:14
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.joinhoney.com - design is broken
priority-normal browser-fenix engine-gecko
<!-- @browser: Firefox Mobile 111.0 --> <!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:109.0) Gecko/111.0 Firefox/111.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/117614 --> <!-- @extra_labels: browser-fenix --> **URL**: https://www.joinhoney.com/mobile/join?from=instapage **Browser / Version**: Firefox Mobile 111.0 **Operating System**: Android 11 **Tested Another Browser**: Yes Other **Problem type**: Design is broken **Description**: Items not fully visible **Steps to Reproduce**: Thanks contribusion for me work <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2023/1/0024e6c0-6351-4832-8b4c-806b06312251.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20230127094652</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2023/1/de93402a-7ada-4940-8f4b-06ec0c08936a) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.joinhoney.com - design is broken - <!-- @browser: Firefox Mobile 111.0 --> <!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:109.0) Gecko/111.0 Firefox/111.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/117614 --> <!-- @extra_labels: browser-fenix --> **URL**: https://www.joinhoney.com/mobile/join?from=instapage **Browser / Version**: Firefox Mobile 111.0 **Operating System**: Android 11 **Tested Another Browser**: Yes Other **Problem type**: Design is broken **Description**: Items not fully visible **Steps to Reproduce**: Thanks contribusion for me work <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2023/1/0024e6c0-6351-4832-8b4c-806b06312251.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20230127094652</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2023/1/de93402a-7ada-4940-8f4b-06ec0c08936a) _From [webcompat.com](https://webcompat.com/) with ❤️_
non_infrastructure
design is broken url browser version firefox mobile operating system android tested another browser yes other problem type design is broken description items not fully visible steps to reproduce thanks contribusion for me work view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
0
807,631
30,012,171,833
IssuesEvent
2023-06-26 15:58:08
o3de/o3de
https://api.github.com/repos/o3de/o3de
closed
DPE Inspector: Complete expansion state fix
feature/editor kind/bug sig/content priority/major
**Describe the bug** There's an issue with restoring expansion state of fields where it's not always restored with chained adapters. Work on a fix had been started previously and is in a draft PR here: https://github.com/o3de/o3de/pull/15038 **Steps to reproduce** 1. Launch the Editor with the DPE enabled: `Editor.exe --ed_enableDPE=true` 2. Open a new or existing level. 3. Create a new entity. 4. Add several components to the entity. 5. Toggle several expansion buttons. 6. Unselect the Entity and then re-select it. **Expected behavior** Expansion state of fields should be restored exactly as before. **Actual behavior** Expansion state of some fields might be back to the default.
1.0
DPE Inspector: Complete expansion state fix - **Describe the bug** There's an issue with restoring expansion state of fields where it's not always restored with chained adapters. Work on a fix had been started previously and is in a draft PR here: https://github.com/o3de/o3de/pull/15038 **Steps to reproduce** 1. Launch the Editor with the DPE enabled: `Editor.exe --ed_enableDPE=true` 2. Open a new or existing level. 3. Create a new entity. 4. Add several components to the entity. 5. Toggle several expansion buttons. 6. Unselect the Entity and then re-select it. **Expected behavior** Expansion state of fields should be restored exactly as before. **Actual behavior** Expansion state of some fields might be back to the default.
non_infrastructure
dpe inspector complete expansion state fix describe the bug there s an issue with restoring expansion state of fields where it s not always restored with chained adapters work on a fix had been started previously and is in a draft pr here steps to reproduce launch the editor with the dpe enabled editor exe ed enabledpe true open a new or existing level create a new entity add several components to the entity toggle several expansion buttons unselect the entity and then re select it expected behavior expansion state of fields should be restored exactly as before actual behavior expansion state of some fields might be back to the default
0
808,578
30,089,103,810
IssuesEvent
2023-06-29 10:53:47
amanjaiman1/Product_3D
https://api.github.com/repos/amanjaiman1/Product_3D
closed
[Feat]: Add Home Screen for Customizer with Recent Designs and History
gssoc23 🟧 priority: high level3
### Is your feature request related to a problem? Please describe. I have identified a potential problem that could be addressed with the addition of a home screen featuring a "Recent Designs" section and a "History" feature. ### Describe the solution you'd like #### Recent Designer Section: Display a list of recently accessed designers, showcasing their names or thumbnails. Allow users to quickly revisit a specific designs by selecting it from the list. Enhance user convenience by eliminating the need to search for or recreate previous designs. #### History: Implement a history feature to keep track of the customizations made by users. Display a chronological list of past customizations, including timestamps. Enable users to review and restore previous designs from any point in time. ### Describe alternatives you've considered _No response_ ### Additional context _No response_ ### Code of Conduct - [X] I agree to follow this project's Code of Conduct - [X] I'm a GSSoC'23 contributor - [X] I want to work on this issue
1.0
[Feat]: Add Home Screen for Customizer with Recent Designs and History - ### Is your feature request related to a problem? Please describe. I have identified a potential problem that could be addressed with the addition of a home screen featuring a "Recent Designs" section and a "History" feature. ### Describe the solution you'd like #### Recent Designer Section: Display a list of recently accessed designers, showcasing their names or thumbnails. Allow users to quickly revisit a specific designs by selecting it from the list. Enhance user convenience by eliminating the need to search for or recreate previous designs. #### History: Implement a history feature to keep track of the customizations made by users. Display a chronological list of past customizations, including timestamps. Enable users to review and restore previous designs from any point in time. ### Describe alternatives you've considered _No response_ ### Additional context _No response_ ### Code of Conduct - [X] I agree to follow this project's Code of Conduct - [X] I'm a GSSoC'23 contributor - [X] I want to work on this issue
non_infrastructure
add home screen for customizer with recent designs and history is your feature request related to a problem please describe i have identified a potential problem that could be addressed with the addition of a home screen featuring a recent designs section and a history feature describe the solution you d like recent designer section display a list of recently accessed designers showcasing their names or thumbnails allow users to quickly revisit a specific designs by selecting it from the list enhance user convenience by eliminating the need to search for or recreate previous designs history implement a history feature to keep track of the customizations made by users display a chronological list of past customizations including timestamps enable users to review and restore previous designs from any point in time describe alternatives you ve considered no response additional context no response code of conduct i agree to follow this project s code of conduct i m a gssoc contributor i want to work on this issue
0
23,252
16,010,668,939
IssuesEvent
2021-04-20 10:07:34
spring-projects/spring-batch
https://api.github.com/repos/spring-projects/spring-batch
closed
FlatFileItemWriter creates empy line at eof. Could we allow option to disable?
in: infrastructure status: waiting-for-reporter type: feature
**Expected Behavior** To have an option in the FlatFileItemWriterBuilder that allows you to disable the empty line at eof. `.noEmptyLineAtEndOfFile()` in the example. ```java new FlatFileItemWriterBuilder<Metadata>() .name("writer") .resource(new FileUrlResource("fileuri")) .noEmptyLineAtEndOfFile() .formatted() .format("%-50.50s")) .names("field") .build(); ``` could result in a file that has no last empty line by altering the writer `doWrite` method. <!--- Tell us how it should work. Add a code example to explain what you think the feature should look like. This is optional, but it would help up understand your expectations. --> **Current Behavior** The for loop appends this.lineseparator at end of String that will be written to file. ```java @Override public String doWrite(List<? extends T> items) { StringBuilder lines = new StringBuilder(); for (T item : items) { //BECAUSE OF THIS LOOP, THE LINE SEPARATOR IS ALWAYS ADDED, ALSO ON LAST LINE lines.append(this.lineAggregator.aggregate(item)).append(this.lineSeparator); } return lines.toString(); } ``` <!--- Explain the difference from current behavior and why do you need this feature (aka why it is not possible to implement the desired functionality with the current version) --> **Context** - I tried decorating the writer to have my own doWrite method, but no getters are available for the lineAggregator or lineSeparator. - Recreating the builder / writer just for this one function is a bit overhead. - We can also adapt our existing file consumers (non-spring batch) but todays file creator cycles used in-house (non-spring batch) create no empty line. <!--- How has this issue affected you? What are you trying to accomplish? What other alternatives have you considered? Are you aware of any workarounds? -->
1.0
FlatFileItemWriter creates empy line at eof. Could we allow option to disable? - **Expected Behavior** To have an option in the FlatFileItemWriterBuilder that allows you to disable the empty line at eof. `.noEmptyLineAtEndOfFile()` in the example. ```java new FlatFileItemWriterBuilder<Metadata>() .name("writer") .resource(new FileUrlResource("fileuri")) .noEmptyLineAtEndOfFile() .formatted() .format("%-50.50s")) .names("field") .build(); ``` could result in a file that has no last empty line by altering the writer `doWrite` method. <!--- Tell us how it should work. Add a code example to explain what you think the feature should look like. This is optional, but it would help up understand your expectations. --> **Current Behavior** The for loop appends this.lineseparator at end of String that will be written to file. ```java @Override public String doWrite(List<? extends T> items) { StringBuilder lines = new StringBuilder(); for (T item : items) { //BECAUSE OF THIS LOOP, THE LINE SEPARATOR IS ALWAYS ADDED, ALSO ON LAST LINE lines.append(this.lineAggregator.aggregate(item)).append(this.lineSeparator); } return lines.toString(); } ``` <!--- Explain the difference from current behavior and why do you need this feature (aka why it is not possible to implement the desired functionality with the current version) --> **Context** - I tried decorating the writer to have my own doWrite method, but no getters are available for the lineAggregator or lineSeparator. - Recreating the builder / writer just for this one function is a bit overhead. - We can also adapt our existing file consumers (non-spring batch) but todays file creator cycles used in-house (non-spring batch) create no empty line. <!--- How has this issue affected you? What are you trying to accomplish? What other alternatives have you considered? Are you aware of any workarounds? -->
infrastructure
flatfileitemwriter creates empy line at eof could we allow option to disable expected behavior to have an option in the flatfileitemwriterbuilder that allows you to disable the empty line at eof noemptylineatendoffile in the example java new flatfileitemwriterbuilder name writer resource new fileurlresource fileuri noemptylineatendoffile formatted format names field build could result in a file that has no last empty line by altering the writer dowrite method current behavior the for loop appends this lineseparator at end of string that will be written to file java override public string dowrite list items stringbuilder lines new stringbuilder for t item items because of this loop the line separator is always added also on last line lines append this lineaggregator aggregate item append this lineseparator return lines tostring context i tried decorating the writer to have my own dowrite method but no getters are available for the lineaggregator or lineseparator recreating the builder writer just for this one function is a bit overhead we can also adapt our existing file consumers non spring batch but todays file creator cycles used in house non spring batch create no empty line how has this issue affected you what are you trying to accomplish what other alternatives have you considered are you aware of any workarounds
1
2,961
3,985,131,381
IssuesEvent
2016-05-07 17:33:03
asciidoctor/jekyll-asciidoc
https://api.github.com/repos/asciidoctor/jekyll-asciidoc
closed
Release 1.1.1
infrastructure
I think we're ready for another release. This release introduces some differences to how the plugin works enough to warrant at least a bump to the minor release number. The most important changes are as follows: * The AsciiDoc document title overrides the title set in the front matter or the title that's automatically generated (in the case of a post) * The AsciiDoc page-related attributes override the matching entries in the page data (i.e., front matter) * The value of page-related attributes are treated as YAML values (automatic type coercion) * page- is the default prefix for page-related AsciiDoc attributes (e.g., `page-layout`). * The key to configure the page attribute prefix is `asciidoc_page_attribute_prefix`; the value should not contain the trailing hyphen * The date of a post can be set using the `revdate` AsciiDoc attribute * Only configure the Asciidoctor options once (previously it was being called twice in serve mode) @mkobit would you like to do the release?
1.0
Release 1.1.1 - I think we're ready for another release. This release introduces some differences to how the plugin works enough to warrant at least a bump to the minor release number. The most important changes are as follows: * The AsciiDoc document title overrides the title set in the front matter or the title that's automatically generated (in the case of a post) * The AsciiDoc page-related attributes override the matching entries in the page data (i.e., front matter) * The value of page-related attributes are treated as YAML values (automatic type coercion) * page- is the default prefix for page-related AsciiDoc attributes (e.g., `page-layout`). * The key to configure the page attribute prefix is `asciidoc_page_attribute_prefix`; the value should not contain the trailing hyphen * The date of a post can be set using the `revdate` AsciiDoc attribute * Only configure the Asciidoctor options once (previously it was being called twice in serve mode) @mkobit would you like to do the release?
infrastructure
release i think we re ready for another release this release introduces some differences to how the plugin works enough to warrant at least a bump to the minor release number the most important changes are as follows the asciidoc document title overrides the title set in the front matter or the title that s automatically generated in the case of a post the asciidoc page related attributes override the matching entries in the page data i e front matter the value of page related attributes are treated as yaml values automatic type coercion page is the default prefix for page related asciidoc attributes e g page layout the key to configure the page attribute prefix is asciidoc page attribute prefix the value should not contain the trailing hyphen the date of a post can be set using the revdate asciidoc attribute only configure the asciidoctor options once previously it was being called twice in serve mode mkobit would you like to do the release
1
268,069
23,342,597,962
IssuesEvent
2022-08-09 15:07:33
WordPress/gutenberg
https://api.github.com/repos/WordPress/gutenberg
closed
"_edit_last" meta value doesn't appear to be saved at all with Gutenberg
[Type] Bug Needs Testing
### Description Saving in Gutenberg block editor does not seem to set the "_edit_last" to post meta values. Tried using classic editor and "_edit_last" was set correctly, so is the problem REST API related? Issue noticed when tried to show (in front end) the name of the user who made last revision but `get_the_modified_author()` function returns NULL. Also `get_post_meta($post_id)` function showed "_edit_last" value isn't set. ### Step-by-step reproduction instructions 1. Edit post/page 2. Update changes 2. "_edit_last" meta value doesn't seem to appear ### Screenshots, screen recording, code snippet _No response_ ### Environment info - Latest WordPress version, Latest Gutenberg which comes with latest WP ### Please confirm that you have searched existing issues in the repo. Yes ### Please confirm that you have tested with all plugins deactivated except Gutenberg. Yes
1.0
"_edit_last" meta value doesn't appear to be saved at all with Gutenberg - ### Description Saving in Gutenberg block editor does not seem to set the "_edit_last" to post meta values. Tried using classic editor and "_edit_last" was set correctly, so is the problem REST API related? Issue noticed when tried to show (in front end) the name of the user who made last revision but `get_the_modified_author()` function returns NULL. Also `get_post_meta($post_id)` function showed "_edit_last" value isn't set. ### Step-by-step reproduction instructions 1. Edit post/page 2. Update changes 2. "_edit_last" meta value doesn't seem to appear ### Screenshots, screen recording, code snippet _No response_ ### Environment info - Latest WordPress version, Latest Gutenberg which comes with latest WP ### Please confirm that you have searched existing issues in the repo. Yes ### Please confirm that you have tested with all plugins deactivated except Gutenberg. Yes
non_infrastructure
edit last meta value doesn t appear to be saved at all with gutenberg description saving in gutenberg block editor does not seem to set the edit last to post meta values tried using classic editor and edit last was set correctly so is the problem rest api related issue noticed when tried to show in front end the name of the user who made last revision but get the modified author function returns null also get post meta post id function showed edit last value isn t set step by step reproduction instructions edit post page update changes edit last meta value doesn t seem to appear screenshots screen recording code snippet no response environment info latest wordpress version latest gutenberg which comes with latest wp please confirm that you have searched existing issues in the repo yes please confirm that you have tested with all plugins deactivated except gutenberg yes
0
3,079
4,045,804,676
IssuesEvent
2016-05-22 08:30:56
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
Enable ApiCompat for CoreFX builds
Infrastructure
@venkat-raman251 setup ApiCompat and did some initial clean runs but ever enabled it in the corefx builds. We need to enable by adding RunApiCompat=true (see https://github.com/weshaggard/corefx/commit/f208510f7720116c139f7fedbb89bbf31ebcc184), but before turning it on we have to clean-up the existing issues that have occurred since then.
1.0
Enable ApiCompat for CoreFX builds - @venkat-raman251 setup ApiCompat and did some initial clean runs but ever enabled it in the corefx builds. We need to enable by adding RunApiCompat=true (see https://github.com/weshaggard/corefx/commit/f208510f7720116c139f7fedbb89bbf31ebcc184), but before turning it on we have to clean-up the existing issues that have occurred since then.
infrastructure
enable apicompat for corefx builds venkat setup apicompat and did some initial clean runs but ever enabled it in the corefx builds we need to enable by adding runapicompat true see but before turning it on we have to clean up the existing issues that have occurred since then
1
21,287
14,497,453,658
IssuesEvent
2020-12-11 14:15:29
epfl-si/xaas-admin
https://api.github.com/repos/epfl-si/xaas-admin
opened
[EPFL] Use Accred attribute to determine content of "support" role
Infrastructure
- [ ] Request access to the Accred web service to get the list of people who provide support for a unit - [ ] Populate the BG support groups with the result of the query that requests all the people (taking the hierarchy into account) who are the IT manager of a unit.
1.0
[EPFL] Use Accred attribute to determine content of "support" role - - [ ] Request access to the Accred web service to get the list of people who provide support for a unit - [ ] Populate the BG support groups with the result of the query that requests all the people (taking the hierarchy into account) who are the IT manager of a unit.
infrastructure
use accred attribute to determine content of support role request access to the accred web service to get the list of people who provide support for a unit populate the bg support groups with the result of the query that requests all the people taking the hierarchy into account who are the it manager of a unit
1
65,503
14,727,876,951
IssuesEvent
2021-01-06 09:11:10
Seagate/cortx-s3server
https://api.github.com/repos/Seagate/cortx-s3server
closed
CVE-2015-7576 (Low) detected in actionpack-4.2.2.gem
needs-attention needs-triage security vulnerability
## CVE-2015-7576 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>actionpack-4.2.2.gem</b></p></summary> <p>Web apps on Rails. Simple, battle-tested conventions for building and testing MVC web applications. Works with any Rack-compatible server.</p> <p>Library home page: <a href="https://rubygems.org/gems/actionpack-4.2.2.gem">https://rubygems.org/gems/actionpack-4.2.2.gem</a></p> <p> Dependency Hierarchy: - coffee-rails-4.1.0.gem (Root Library) - railties-4.2.2.gem - :x: **actionpack-4.2.2.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Seagate/cortx-s3server/commit/fde64200b4f94603ae17220b98da6422a531445e">fde64200b4f94603ae17220b98da6422a531445e</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The http_basic_authenticate_with method in actionpack/lib/action_controller/metal/http_authentication.rb in the Basic Authentication implementation in Action Controller in Ruby on Rails before 3.2.22.1, 4.0.x and 4.1.x before 4.1.14.1, 4.2.x before 4.2.5.1, and 5.x before 5.0.0.beta1.1 does not use a constant-time algorithm for verifying credentials, which makes it easier for remote attackers to bypass authentication by measuring timing differences.
<p>Publish Date: 2016-02-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-7576>CVE-2015-7576</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-7576">https://nvd.nist.gov/vuln/detail/CVE-2015-7576</a></p> <p>Release Date: 2016-02-16</p> <p>Fix Resolution: 3.2.22.1,4.1.14.1,4.2.5.1,5.0.0.beta1.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2015-7576 (Low) detected in actionpack-4.2.2.gem - ## CVE-2015-7576 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>actionpack-4.2.2.gem</b></p></summary> <p>Web apps on Rails. Simple, battle-tested conventions for building and testing MVC web applications. Works with any Rack-compatible server.</p> <p>Library home page: <a href="https://rubygems.org/gems/actionpack-4.2.2.gem">https://rubygems.org/gems/actionpack-4.2.2.gem</a></p> <p> Dependency Hierarchy: - coffee-rails-4.1.0.gem (Root Library) - railties-4.2.2.gem - :x: **actionpack-4.2.2.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Seagate/cortx-s3server/commit/fde64200b4f94603ae17220b98da6422a531445e">fde64200b4f94603ae17220b98da6422a531445e</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The http_basic_authenticate_with method in actionpack/lib/action_controller/metal/http_authentication.rb in the Basic Authentication implementation in Action Controller in Ruby on Rails before 3.2.22.1, 4.0.x and 4.1.x before 4.1.14.1, 4.2.x before 4.2.5.1, and 5.x before 5.0.0.beta1.1 does not use a constant-time algorithm for verifying credentials, which makes it easier for remote attackers to bypass authentication by measuring timing differences.
<p>Publish Date: 2016-02-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-7576>CVE-2015-7576</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-7576">https://nvd.nist.gov/vuln/detail/CVE-2015-7576</a></p> <p>Release Date: 2016-02-16</p> <p>Fix Resolution: 3.2.22.1,4.1.14.1,4.2.5.1,5.0.0.beta1.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_infrastructure
cve low detected in actionpack gem cve low severity vulnerability vulnerable library actionpack gem web apps on rails simple battle tested conventions for building and testing mvc web applications works with any rack compatible server library home page a href dependency hierarchy coffee rails gem root library railties gem x actionpack gem vulnerable library found in head commit a href found in base branch main vulnerability details the http basic authenticate with method in actionpack lib action controller metal http authentication rb in the basic authentication implementation in action controller in ruby on rails before x and x before x before and x before does not use a constant time algorithm for verifying credentials which makes it easier for remote attackers to bypass authentication by measuring timing differences publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
566,945
16,834,772,267
IssuesEvent
2021-06-18 10:27:44
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.twitch.tv - video or audio doesn't play
browser-firefox engine-gecko priority-critical type-webrender-enabled
<!-- @browser: Firefox 91.0 --> <!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/77545 --> <!-- @extra_labels: type-webrender-enabled --> **URL**: https://www.twitch.tv/gamerdeesquerda **Browser / Version**: Firefox 91.0 **Operating System**: Linux **Tested Another Browser**: Yes Other **Problem type**: Video or audio doesn't play **Description**: The video or audio does not play **Steps to Reproduce**: Firefox nightly run for a second and appear #3000 twitch error. Begun 2-3 days ago. On Firefox ESR runs normally. <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2021/6/a763ccc3-772a-4d78-9e69-640960588493.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: true</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: true</li><li>image.mem.shared: true</li><li>buildID: 20210617095423</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2021/6/575de732-3128-49ff-a671-70b635d8e677) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.twitch.tv - video or audio doesn't play - <!-- @browser: Firefox 91.0 --> <!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/77545 --> <!-- @extra_labels: type-webrender-enabled --> **URL**: https://www.twitch.tv/gamerdeesquerda **Browser / Version**: Firefox 91.0 **Operating System**: Linux **Tested Another Browser**: Yes Other **Problem type**: Video or audio doesn't play **Description**: The video or audio does not play **Steps to Reproduce**: Firefox nightly run for a second and appear #3000 twitch error. Begun 2-3 days ago. On Firefox ESR runs normally. <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2021/6/a763ccc3-772a-4d78-9e69-640960588493.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: true</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: true</li><li>image.mem.shared: true</li><li>buildID: 20210617095423</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2021/6/575de732-3128-49ff-a671-70b635d8e677) _From [webcompat.com](https://webcompat.com/) with ❤️_
non_infrastructure
video or audio doesn t play url browser version firefox operating system linux tested another browser yes other problem type video or audio doesn t play description the video or audio does not play steps to reproduce firefox nightly run for a second and appear twitch error begun days ago on firefox esr runs normally view the screenshot img alt screenshot src browser configuration gfx webrender all true gfx webrender blob images true gfx webrender enabled true image mem shared true buildid channel nightly hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
0
1,486
3,248,202,385
IssuesEvent
2015-10-17 03:38:51
t3kt/vjzual2
https://api.github.com/repos/t3kt/vjzual2
closed
module/parameter id/name system is incomplete
infrastructure
Each module instance has a unique ID string. Each parameter has a local name which is unique within the containing module instance. Each parameter has a unique ID generated by combining its local name with the id of the parent module. Example: multidelay1:level This will make it much easier to create the centralized module list (see #24), and fix various MIDI problems (see #80).
1.0
module/parameter id/name system is incomplete - Each module instance has a unique ID string. Each parameter has a local name which is unique within the containing module instance. Each parameter has a unique ID generated by combining its local name with the id of the parent module. Example: multidelay1:level This will make it much easier to create the centralized module list (see #24), and fix various MIDI problems (see #80).
infrastructure
module parameter id name system is incomplete each module instance has a unique id string each parameter has a local name which is unique within the containing module instance each parameter has a unique id generated by combining its local name with the id of the parent module example level this will make it much easier to create the centralized module list see and fix various midi problems see
1
31,024
7,295,930,746
IssuesEvent
2018-02-26 09:04:19
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
closed
XMLDatabase should support reading files from classpath
C: Code Generation P: Medium R: Fixed T: Enhancement
The `DDLDatabase` supports reading files from the classpath. So should the `XMLDatabase`
1.0
XMLDatabase should support reading files from classpath - The `DDLDatabase` supports reading files from the classpath. So should the `XMLDatabase`
non_infrastructure
xmldatabase should support reading files from classpath the ddldatabase supports reading files from the classpath so should the xmldatabase
0
13,438
10,258,264,039
IssuesEvent
2019-08-21 22:22:43
microsoft/TypeScript
https://api.github.com/repos/microsoft/TypeScript
opened
Fourslash server tests silently ignore compiler option directives
Infrastructure
The existing working solution is to create a `tsconfig.json` fourslash-file, which is probably a better way to do things given that the server tests are supposed to mimic reality more closely, so my proposal would be to throw early when double-slash-attersand compiler directives are encountered. Semi-related: #25081
1.0
Fourslash server tests silently ignore compiler option directives - The existing working solution is to create a `tsconfig.json` fourslash-file, which is probably a better way to do things given that the server tests are supposed to mimic reality more closely, so my proposal would be to throw early when double-slash-attersand compiler directives are encountered. Semi-related: #25081
infrastructure
fourslash server tests silently ignore compiler option directives the existing working solution is to create a tsconfig json fourslash file which is probably a better way to do things given that the server tests are supposed to mimic reality more closely so my proposal would be to throw early when double slash attersand compiler directives are encountered semi related
1
130,170
27,630,047,338
IssuesEvent
2023-03-10 10:08:45
sourcegraph/sourcegraph
https://api.github.com/repos/sourcegraph/sourcegraph
closed
Packages: artifact hosts and package status in the admin view
team/code-intelligence design rfc-698 package-repos packages-starship
Packages will be a first-class entity in the system. We currently manage package hosts in the admin via a section called code hosts and packages in the repositories section. This will be confusing to admins. As such, we should move the management of package hosts out of repositories and onto their own settings pages. However, this is a low-priority and high-effort initiative. To address the issue more cost-effectively, we should rename the code host section to `Code and package hosts` and the repositories section to `Repositories and packages` ![image](https://user-images.githubusercontent.com/539268/173135656-aa332912-be51-40f3-bf39-63b5703054c6.png)
1.0
Packages: artifact hosts and package status in the admin view - Packages will be a first-class entity in the system. We currently manage package hosts in the admin via a section called code hosts and packages in the repositories section. This will be confusing to admins. As such, we should move the management of package hosts out of repositories and onto their own settings pages. However, this is a low-priority and high-effort initiative. To address the issue more cost-effectively, we should rename the code host section to `Code and package hosts` and the repositories section to `Repositories and packages` ![image](https://user-images.githubusercontent.com/539268/173135656-aa332912-be51-40f3-bf39-63b5703054c6.png)
non_infrastructure
packages artifact hosts and package status in the admin view packages will be a first class entity in the system we currently manage package hosts in the admin via a section called code hosts and packages in the repositories section this will be confusing to admins as such we should move the management of package hosts out of repositories and onto their own settings pages however this is a low priority and high effort initiative to address the issue more cost effectively we should rename the code host section to code and package hosts and the repositories section to repositories and packages
0
273,735
23,781,419,264
IssuesEvent
2022-09-02 05:30:21
wpeventmanager/wp-event-manager
https://api.github.com/repos/wpeventmanager/wp-event-manager
closed
Elementor - Single organizer/venue - Details are not availabel properly
In Testing
Single organizer/venue - Details are not availabel properly. ![image](https://user-images.githubusercontent.com/75515088/187406652-f241f1a6-1ba3-4fe2-8373-c87baa6355ce.png) ![image](https://user-images.githubusercontent.com/75515088/187406892-9bd04168-19e0-43b0-a1be-44dff4d9948c.png) ![image](https://user-images.githubusercontent.com/75515088/187406965-e1f14c28-3a32-4994-9920-fb3bae529224.png)
1.0
Elementor - Single organizer/venue - Details are not availabel properly - Single organizer/venue - Details are not availabel properly. ![image](https://user-images.githubusercontent.com/75515088/187406652-f241f1a6-1ba3-4fe2-8373-c87baa6355ce.png) ![image](https://user-images.githubusercontent.com/75515088/187406892-9bd04168-19e0-43b0-a1be-44dff4d9948c.png) ![image](https://user-images.githubusercontent.com/75515088/187406965-e1f14c28-3a32-4994-9920-fb3bae529224.png)
non_infrastructure
elementor single organizer venue details are not availabel properly single organizer venue details are not availabel properly
0
1,216
3,080,948,717
IssuesEvent
2015-08-22 06:51:40
codingteam/loglist
https://api.github.com/repos/codingteam/loglist
closed
Update CAPTCHA
infrastructure
After migration from Heroku domain is `loglist.net` not `www.loglist.net`. So the captcha service doesn' work because of that.
1.0
Update CAPTCHA - After migration from Heroku domain is `loglist.net` not `www.loglist.net`. So the captcha service doesn' work because of that.
infrastructure
update captcha after migration from heroku domain is loglist net not so the captcha service doesn work because of that
1
11,921
9,525,623,548
IssuesEvent
2019-04-28 13:50:20
eclipse/vorto
https://api.github.com/repos/eclipse/vorto
closed
Website Hudson/jenkins job failing
Infrastructure bug
The current web site deployment job on Eclipse Jenkins for Vorto is failing. According to logs, it seems to be a permission problem.
1.0
Website Hudson/jenkins job failing - The current web site deployment job on Eclipse Jenkins for Vorto is failing. According to logs, it seems to be a permission problem.
infrastructure
website hudson jenkins job failing the current web site deployment job on eclipse jenkins for vorto is failing according to logs it seems to be a permission problem
1
30,261
24,707,319,184
IssuesEvent
2022-10-19 20:16:51
dotnet/aspnetcore
https://api.github.com/repos/dotnet/aspnetcore
closed
Update rebranding instructions to include steps for updating template precedence values
task area-infrastructure
Here is where the current instructions are: https://github.com/dotnet/aspnetcore/blob/main/docs/UpdatingMajorVersionAndTFM.md
1.0
Update rebranding instructions to include steps for updating template precedence values - Here is where the current instructions are: https://github.com/dotnet/aspnetcore/blob/main/docs/UpdatingMajorVersionAndTFM.md
infrastructure
update rebranding instructions to include steps for updating template precedence values here is where the current instructions are
1
6,988
6,699,755,919
IssuesEvent
2017-10-11 00:00:52
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
Issues in CoreFx build and opening particular solution in VS 2017 community edition.
area-Infrastructure
I am seeing following issue in build and opening System.Runtime.Extension solution in VS 2017 Community edition IDE. About three (plus) weeks back these issues were not seen. Last weekend, I sync'd my fork with latest code in dotnet/master and since then these issues have been happening consistently. Not only on my branch for #22409 but also on the master branch in my fork. I also recreated my local git repo, but no luck. Issues are, a. On firing "build.cmd" / "build-managed.cmd" execution is stuck at below line seen on console for about 10 - 15 mins and, at that time network activity kicks in. I see download at about 300kBps from the server "blob.byaprdstr06a.store.core.windows.net". Roughly 50MB of stuff is downloaded and then the build proceeds. The line on console after which download begins is as below: ```D:\WinCPP\corefx\packages\Microsoft.TargetingPack.NETFramework.v4.6.1\1.0.1\lib\net461\sysglobl.dll (Microsoft.TargetingPack.NETFramework.v4.6.1.1.0.1) -> D:\WinCPP\corefx\bin\AnyOS.AnyCPU.Release\netfx\netcoreapp\sysglobl.dll``` b. On opening System.Runtime.Extension in VS 2017 Community edition IDE, it gets stuck at "Preparing Solution" for similar duration (10 - 15 mins). In this case too, network activity kicks in and devenv.exe appears to be downloading data from same server "blob.byaprdstr06a.store.core.windows.net". Alt+Tab into VS 2017 Community IDE at that point gives a pop up at lower right saying VS is busy and information has been sent to "Visual Studio Experience Improvement Program"... Both the above issues repeat on each run of build command or each time the solution is opened in VS IDE. Primarily the problem is that 50MB per build (or even on opening VS) translates to 500 MB if I were to build / open VS for about 10 times during one coding session; a number that can be easily exceeded. Secondly, the 10-15 min delay holds up build / opening VS IDE. Kindly help. I am wondering if the download is not able to write some sort of ".complete" file because of which the entire download is repeated every time I run build command or open the solution? This could very well be an issue with my setup that got introduced after I rebased about a month's data from dotnet/master. @karelz @weshaggard created this issue as per our discussion on gitter.
1.0
Issues in CoreFx build and opening particular solution in VS 2017 community edition. - I am seeing following issue in build and opening System.Runtime.Extension solution in VS 2017 Community edition IDE. About three (plus) weeks back these issues were not seen. Last weekend, I sync'd my fork with latest code in dotnet/master and since then these issues have been happening consistently. Not only on my branch for #22409 but also on the master branch in my fork. I also recreated my local git repo, but no luck. Issues are, a. On firing "build.cmd" / "build-managed.cmd" execution is stuck at below line seen on console for about 10 - 15 mins and, at that time network activity kicks in. I see download at about 300kBps from the server "blob.byaprdstr06a.store.core.windows.net". Roughly 50MB of stuff is downloaded and then the build proceeds. The line on console after which download begins is as below: ```D:\WinCPP\corefx\packages\Microsoft.TargetingPack.NETFramework.v4.6.1\1.0.1\lib\net461\sysglobl.dll (Microsoft.TargetingPack.NETFramework.v4.6.1.1.0.1) -> D:\WinCPP\corefx\bin\AnyOS.AnyCPU.Release\netfx\netcoreapp\sysglobl.dll``` b. On opening System.Runtime.Extension in VS 2017 Community edition IDE, it gets stuck at "Preparing Solution" for similar duration (10 - 15 mins). In this case too, network activity kicks in and devenv.exe appears to be downloading data from same server "blob.byaprdstr06a.store.core.windows.net". Alt+Tab into VS 2017 Community IDE at that point gives a pop up at lower right saying VS is busy and information has been sent to "Visual Studio Experience Improvement Program"... Both the above issues repeat on each run of build command or each time the solution is opened in VS IDE. Primarily the problem is that 50MB per build (or even on opening VS) translates to 500 MB if I were to build / open VS for about 10 times during one coding session; a number that can be easily exceeded. Secondly, the 10-15 min delay holds up build / opening VS IDE. Kindly help. I am wondering if the download is not able to write some sort of ".complete" file because of which the entire download is repeated every time I run build command or open the solution? This could very well be an issue with my setup that got introduced after I rebased about a month's data from dotnet/master. @karelz @weshaggard created this issue as per our discussion on gitter.
infrastructure
issues in corefx build and opening particular solution in vs community edition i am seeing following issue in build and opening system runtime extension solution in vs community edition ide about three plus weeks back these issues were not seen last weekend i sync d my fork with latest code in dotnet master and since then these issues have been happening consistently not only on my branch for but also on the master branch in my fork i also recreated my local git repo but no luck issues are a on firing build cmd build managed cmd execution is stuck at below line seen on console for about mins and at that time network activity kicks in i see download at about from the server blob store core windows net roughly of stuff is downloaded and then the build proceeds the line on console after which download begins is as below d wincpp corefx packages microsoft targetingpack netframework lib sysglobl dll microsoft targetingpack netframework d wincpp corefx bin anyos anycpu release netfx netcoreapp sysglobl dll b on opening system runtime extension in vs community edition ide it gets stuck at preparing solution for similar duration mins in this case too network activity kicks in and devenv exe appears to be downloading data from same server blob store core windows net alt tab into vs community ide at that point gives a pop up at lower right saying vs is busy and information has been sent to visual studio experience improvement program both the above issues repeat on each run of build command or each time the solution is opened in vs ide primarily the problem is that per build or even on opening vs translates to mb if i were to build open vs for about times during one coding session a number that can be easily exceeded secondly the min delay holds up build opening vs ide kindly help i am wondering if the download is not able to write some sort of complete file because of which the entire download is repeated every time i run build command or open the solution this could very well be an issue with my setup that got introduced after i rebased about a month s data from dotnet master karelz weshaggard created this issue as per our discussion on gitter
1
7,480
6,970,209,944
IssuesEvent
2017-12-11 09:29:51
usyd-blockchain/vandal
https://api.github.com/repos/usyd-blockchain/vandal
opened
Analytics input and output
enhancement infrastructure
A bunch of analytics are collected during dataflow analysis. Add a flag to toggle whether this occurs or not, and make it clear how to output this stuff. Maybe make an exporter which produces the analytics only, and nothing else. The analytics information should be a graph object member; then exporters can be modified to easily output this data along with the rest of it.
1.0
Analytics input and output - A bunch of analytics are collected during dataflow analysis. Add a flag to toggle whether this occurs or not, and make it clear how to output this stuff. Maybe make an exporter which produces the analytics only, and nothing else. The analytics information should be a graph object member; then exporters can be modified to easily output this data along with the rest of it.
infrastructure
analytics input and output a bunch of analytics are collected during dataflow analysis add a flag to toggle whether this occurs or not and make it clear how to output this stuff maybe make an exporter which produces the analytics only and nothing else the analytics information should be a graph object member then exporters can be modified to easily output this data along with the rest of it
1
434,142
30,444,317,751
IssuesEvent
2023-07-15 13:16:40
flyteorg/flyte
https://api.github.com/repos/flyteorg/flyte
closed
[Docs] Update K8s plugins that use the kubeflow operators
documentation
### Description [This docs page](https://docs.flyte.org/en/latest/deployment/plugins/k8s/index.html#deployment-plugin-setup-k8s) needs to be updated so that users install https://github.com/kubeflow/training-operator instead of the now-unmaintained repos for the separate operators (e.g. https://github.com/kubeflow/pytorch-operator) ### Are you sure this issue hasn't been raised already? - [X] Yes ### Have you read the Code of Conduct? - [X] Yes
1.0
[Docs] Update K8s plugins that use the kubeflow operators - ### Description [This docs page](https://docs.flyte.org/en/latest/deployment/plugins/k8s/index.html#deployment-plugin-setup-k8s) needs to be updated so that users install https://github.com/kubeflow/training-operator instead of the now-unmaintained repos for the separate operators (e.g. https://github.com/kubeflow/pytorch-operator) ### Are you sure this issue hasn't been raised already? - [X] Yes ### Have you read the Code of Conduct? - [X] Yes
non_infrastructure
update plugins that use the kubeflow operators description needs to be updated so that users install instead of the now unmaintained repos for the separate operators e g are you sure this issue hasn t been raised already yes have you read the code of conduct yes
0
18,839
13,133,371,521
IssuesEvent
2020-08-06 20:47:03
BCDevOps/developer-experience
https://api.github.com/repos/BCDevOps/developer-experience
closed
Sysdig Dashboard: CPU/Mem Capacity
Infrastructure Sysdig
https://trello.com/c/M5zL2PME/104-sysdig-dashboard-cpu-mem-capacity Mirror AdvSol Grafana Compute cluster top 6 panels - CPU and Mem - Usage, Requests, Limits (over time)
1.0
Sysdig Dashboard: CPU/Mem Capacity - https://trello.com/c/M5zL2PME/104-sysdig-dashboard-cpu-mem-capacity Mirror AdvSol Grafana Compute cluster top 6 panels - CPU and Mem - Usage, Requests, Limits (over time)
infrastructure
sysdig dashboard cpu mem capacity mirror advsol grafana compute cluster top panels cpu and mem usage requests limits over time
1
512,929
14,912,452,226
IssuesEvent
2021-01-22 12:41:55
bounswe/bounswe2020group2
https://api.github.com/repos/bounswe/bounswe2020group2
closed
[BACKEND] Implementing the Send Message Functionality
effort: high priority: high status: in progress type: back-end who: individual
I will implement the send message functionality which can be used by different users of the system.
1.0
[BACKEND] Implementing the Send Message Functionality - I will implement the send message functionality which can be used by different users of the system.
non_infrastructure
implementing the send message functionality i will implement the send message functionality which can be used by different users of the system
0
26,629
20,364,998,014
IssuesEvent
2022-02-21 03:51:51
happy-travel/agent-app-project
https://api.github.com/repos/happy-travel/agent-app-project
closed
TestConnector implementation
backend infrastructure load-testing
Need to implement a test connector with adjustable data returned by all of the implemented search and book endpoints (3 step search + book). - [x] #1182 - [x] ~~#1186~~ - [ ] ~~Add endpoints to load scenarios data as JSON files~~ - [ ] ~~#1187~~ - [x] #1192 - [x] #1195
1.0
TestConnector implementation - Need to implement a test connector with adjustable data returned by all of the implemented search and book endpoints (3 step search + book). - [x] #1182 - [x] ~~#1186~~ - [ ] ~~Add endpoints to load scenarios data as JSON files~~ - [ ] ~~#1187~~ - [x] #1192 - [x] #1195
infrastructure
testconnector implementation need to implement a test connector with adjustable data returned by all of the implemented search and book endpoints step search book add endpoints to load scenarios data as json files
1
277,977
21,057,930,817
IssuesEvent
2022-04-01 06:30:56
Denniszedead/ped
https://api.github.com/repos/Denniszedead/ped
opened
No indication of the brackets in notes
type.DocumentationBug severity.Medium
Notes before all instructions: ![image.png](https://raw.githubusercontent.com/Denniszedead/ped/main/files/40e0adfa-4373-4cda-9ece-ff0f11d21128.png) ![image.png](https://raw.githubusercontent.com/Denniszedead/ped/main/files/c09c1a1d-d33f-4ae3-a44d-53b03880e9d0.png) The notes do not indicate what the parentheses "{" means. <!--session: 1648793660206-df733c04-8ddf-4748-9583-a009f11f7085--> <!--Version: Web v3.4.2-->
1.0
No indication of the brackets in notes - Notes before all instructions: ![image.png](https://raw.githubusercontent.com/Denniszedead/ped/main/files/40e0adfa-4373-4cda-9ece-ff0f11d21128.png) ![image.png](https://raw.githubusercontent.com/Denniszedead/ped/main/files/c09c1a1d-d33f-4ae3-a44d-53b03880e9d0.png) The notes do not indicate what the parentheses "{" means. <!--session: 1648793660206-df733c04-8ddf-4748-9583-a009f11f7085--> <!--Version: Web v3.4.2-->
non_infrastructure
no indication of the brackets in notes notes before all instructions the notes do not indicate what the parentheses means
0
29,610
24,104,811,942
IssuesEvent
2022-09-20 06:25:08
woowacourse-teams/2022-kkogkkog
https://api.github.com/repos/woowacourse-teams/2022-kkogkkog
opened
[BE] Index settings and improvements
🕋 backend 📖 docs 🌐 infrastructure
## Background ### List<Coupon> findAllBySender() - where coupon.sender_member_id = 1 <img width="1183" alt="image" src="https://user-images.githubusercontent.com/73531614/191182093-62d87801-4627-4be7-9837-75f1b0dc59dd.png"> <img width="646" alt="image" src="https://user-images.githubusercontent.com/73531614/191182291-5d09b0c5-13af-4ddf-a9d3-ac4f4f42f1cd.png"> <img width="799" alt="image" src="https://user-images.githubusercontent.com/73531614/191182488-e0d72932-67f2-4cd3-8d44-b3c56cb76404.png">
1.0
[BE] Index settings and improvements - ## Background ### List<Coupon> findAllBySender() - where coupon.sender_member_id = 1 <img width="1183" alt="image" src="https://user-images.githubusercontent.com/73531614/191182093-62d87801-4627-4be7-9837-75f1b0dc59dd.png"> <img width="646" alt="image" src="https://user-images.githubusercontent.com/73531614/191182291-5d09b0c5-13af-4ddf-a9d3-ac4f4f42f1cd.png"> <img width="799" alt="image" src="https://user-images.githubusercontent.com/73531614/191182488-e0d72932-67f2-4cd3-8d44-b3c56cb76404.png">
infrastructure
index settings and improvements background list findallbysender where coupon sender member id img width alt image src img width alt image src img width alt image src
1
21,587
14,657,632,162
IssuesEvent
2020-12-28 16:01:34
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Publish Workload Pack for Mono AOT MSBuild Task
area-Infrastructure-mono tracking
Since there are many instances of Mono AOT tooling workload packs, we need to provide a separate workload pack that contains the Mono AOT MSBuild task. The task can be found [here](https://github.com/dotnet/runtime/tree/master/tools-local/tasks/mobile.tasks/AotCompilerTask)
1.0
Publish Workload Pack for Mono AOT MSBuild Task - Since there are many instances of Mono AOT tooling workload packs, we need to provide a separate workload pack that contains the Mono AOT MSBuild task. The task can be found [here](https://github.com/dotnet/runtime/tree/master/tools-local/tasks/mobile.tasks/AotCompilerTask)
infrastructure
publish workload pack for mono aot msbuild task since there are many instances of mono aot tooling workload packs we need to provide a separate workload pack that contains the mono aot msbuild task the task can be found
1
90,677
8,257,045,939
IssuesEvent
2018-09-13 02:34:59
dotnet/coreclr
https://api.github.com/repos/dotnet/coreclr
closed
New COM Activation tests broke official test build
test bug
Opened on behalf of @MattGal Looks like this was caused by https://github.com/dotnet/coreclr/pull/19760 ( @AaronRobinsonMSFT FYI) Warnings: 2 Status Message: failed Build : 3.0 - 20180911.04 (Product Build) Failing configurations: - Alpine3.6 - Build-Tests-R2R-Release - Build-Tests-Release - RedHat6 - Build-Tests-R2R-Release - Build-Tests-Release - RedHat 7 - Build-Tests-R2R-Release - Build-Tests-Release - OSX - Build-Tests-R2R-Release - Build-Tests-Release [Mission Control Build Info](https://mc.dot.net/#/product/netcore/30/source/official~2Fcoreclr~2Fmaster~2F/type/build~2Fproduct~2F/build/20180911.04/workItem/Orchestration/analysis/external/Link)
1.0
New COM Activation tests broke official test build - Opened on behalf of @MattGal Looks like this was caused by https://github.com/dotnet/coreclr/pull/19760 ( @AaronRobinsonMSFT FYI) Warnings: 2 Status Message: failed Build : 3.0 - 20180911.04 (Product Build) Failing configurations: - Alpine3.6 - Build-Tests-R2R-Release - Build-Tests-Release - RedHat6 - Build-Tests-R2R-Release - Build-Tests-Release - RedHat 7 - Build-Tests-R2R-Release - Build-Tests-Release - OSX - Build-Tests-R2R-Release - Build-Tests-Release [Mission Control Build Info](https://mc.dot.net/#/product/netcore/30/source/official~2Fcoreclr~2Fmaster~2F/type/build~2Fproduct~2F/build/20180911.04/workItem/Orchestration/analysis/external/Link)
non_infrastructure
new com activation tests broke official test build opened on behalf of mattgal looks like this was caused by aaronrobinsonmsft fyi warnings status message failed build product build failing configurations build tests release build tests release build tests release build tests release redhat build tests release build tests release osx build tests release build tests release
0
32,728
26,940,927,085
IssuesEvent
2023-02-08 02:05:49
APSIMInitiative/ApsimX
https://api.github.com/repos/APSIMInitiative/ApsimX
closed
Default search radius for searching APSOIL data base is too large
bug interface/infrastructure
Please replace the default of 100 km radius for searching soils from the database to 10 km. @her123
1.0
Default search radius for searching APSOIL data base is too large - Please replace the default of 100 km radius for searching soils from the database to 10 km. @her123
infrastructure
default search radius for searching apsoil data base is too large please replace the default of km radius for searching soils from the database to km
1
127,143
26,990,452,327
IssuesEvent
2023-02-09 19:26:41
MetaMask/design-tokens
https://api.github.com/repos/MetaMask/design-tokens
closed
[Mobile] Audit Banner
code design-system
### **Description** Audit the use cases and requirements for `Banner` Use the FigJam to collate screenshots, notes on component api and requirements etc. FigJam: https://www.figma.com/file/ZRc86y4pTE33gLMdrobHCJ/Banner-Audit?node-id=0%3A1&t=G28gM5MUDbJBab0s-1 ### **Technical Details** - Collect existing use cases across your assigned platform - Collect similar examples from third party design systems - Note down behaviours, attributes, questions and requirements that you notice about these use cases ### **Acceptance Criteria** - The majority if not all use cases from your assigned platform have been collected in screenshots and added to the FigJam - At least 3 examples from third party design systems have been collected and added to the FigJam - Have listed possible component names and identified your preferred name based on your research - Have listed down possible component api and identified the options ### **References** - [FigJam](https://www.figma.com/file/ZRc86y4pTE33gLMdrobHCJ/Banner-Audit?node-id=0%3A1&t=G28gM5MUDbJBab0s-1) - Read exercised `#05 Identify Existing Paradigms in Design and Code` and `#06 Identify Emerging and Interesting Paradigms in Design and Code` in the Design System in 90 Days workbook
1.0
[Mobile] Audit Banner - ### **Description** Audit the use cases and requirements for `Banner` Use the FigJam to collate screenshots, notes on component api and requirements etc. FigJam: https://www.figma.com/file/ZRc86y4pTE33gLMdrobHCJ/Banner-Audit?node-id=0%3A1&t=G28gM5MUDbJBab0s-1 ### **Technical Details** - Collect existing use cases across your assigned platform - Collect similar examples from third party design systems - Note down behaviours, attributes, questions and requirements that you notice about these use cases ### **Acceptance Criteria** - The majority if not all use cases from your assigned platform have been collected in screenshots and added to the FigJam - At least 3 examples from third party design systems have been collected and added to the FigJam - Have listed possible component names and identified your preferred name based on your research - Have listed down possible component api and identified the options ### **References** - [FigJam](https://www.figma.com/file/ZRc86y4pTE33gLMdrobHCJ/Banner-Audit?node-id=0%3A1&t=G28gM5MUDbJBab0s-1) - Read exercised `#05 Identify Existing Paradigms in Design and Code` and `#06 Identify Emerging and Interesting Paradigms in Design and Code` in the Design System in 90 Days workbook
non_infrastructure
audit banner description audit the use cases and requirements for banner use the figjam to collate screenshots notes on component api and requirements etc figjam technical details collect existing use cases across your assigned platform collect similar examples from third party design systems note down behaviours attributes questions and requirements that you notice about these use cases acceptance criteria the majority if not all use cases from your assigned platform have been collected in screenshots and added to the figjam at least examples from third party design systems have been collected and added to the figjam have listed possible component names and identified your preferred name based on your research have listed down possible component api and identified the options references read exercised identify existing paradigms in design and code and identify emerging and interesting paradigms in design and code in the design system in days workbook
0
52,406
13,224,717,570
IssuesEvent
2020-08-17 19:42:13
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
Broken files in 20222 (Trac #2171)
Incomplete Migration Migrated from Trac analysis defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2171">https://code.icecube.wisc.edu/projects/icecube/ticket/2171</a>, reported by flauber</summary> <p> ```json { "status": "closed", "changetime": "2018-09-22T09:42:03", "_ts": "1537609323000821", "description": " Hello,\n\nseeing multiple broken files for the 20222 dataset. My current list is (mind you, some files might be broken but as I bunch them together I only get a error message for the first broken file):", "reporter": "flauber", "cc": "", "resolution": "duplicate", "time": "2018-07-10T11:42:03", "component": "analysis", "summary": "Broken files in 20222", "priority": "normal", "keywords": "broken files corsika 20222", "milestone": "", "owner": "", "type": "defect" } ``` </p> </details>
1.0
Broken files in 20222 (Trac #2171) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2171">https://code.icecube.wisc.edu/projects/icecube/ticket/2171</a>, reported by flauber</summary> <p> ```json { "status": "closed", "changetime": "2018-09-22T09:42:03", "_ts": "1537609323000821", "description": " Hello,\n\nseeing multiple broken files for the 20222 dataset. My current list is (mind you, some files might be broken but as I bunch them together I only get a error message for the first broken file):", "reporter": "flauber", "cc": "", "resolution": "duplicate", "time": "2018-07-10T11:42:03", "component": "analysis", "summary": "Broken files in 20222", "priority": "normal", "keywords": "broken files corsika 20222", "milestone": "", "owner": "", "type": "defect" } ``` </p> </details>
non_infrastructure
broken files in trac migrated from json status closed changetime ts description hello n nseeing multiple broken files for the dataset my current list is mind you some files might be broken but as i bunch them together i only get a error message for the first broken file reporter flauber cc resolution duplicate time component analysis summary broken files in priority normal keywords broken files corsika milestone owner type defect
0
2,624
3,789,327,179
IssuesEvent
2016-03-21 17:34:39
servo/servo
https://api.github.com/repos/servo/servo
closed
Android nightly build fails to upload to S3
A-infrastructure P-android
The Android nightly compile step was fixed on August 6, but since then the S3 upload step is failing: ``` s3cmd put /home/servo/buildbot/slave/android-nightly/build/target/arm-linux-androideabi/release/servo s3://servo-rust/nightly/servo.apk in dir /home/servo/buildbot/slave/android-nightly/build (timeout 1200 secs) watching logfiles {} argv: ['s3cmd', 'put', '/home/servo/buildbot/slave/android-nightly/build/target/arm-linux-androideabi/release/servo', 's3://servo-rust/nightly/servo.apk'] environment: ADDRFAM=inet HOME=/home/servo IFACE=eth0 LOGICAL=eth0 METHOD=dhcp PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin PWD=/home/servo/buildbot/slave/android-nightly/build TERM=linux UPSTART_EVENTS=local-filesystems net-device-up UPSTART_INSTANCE= UPSTART_JOB=buildbot-slave using PTY: False ERROR: Parameter problem: Nothing to upload. program finished with exit code 64 elapsedTime=0.231612 ``` http://build.servo.org/builders/android-nightly?numbuilds=60
1.0
Android nightly build fails to upload to S3 - The Android nightly compile step was fixed on August 6, but since then the S3 upload step is failing: ``` s3cmd put /home/servo/buildbot/slave/android-nightly/build/target/arm-linux-androideabi/release/servo s3://servo-rust/nightly/servo.apk in dir /home/servo/buildbot/slave/android-nightly/build (timeout 1200 secs) watching logfiles {} argv: ['s3cmd', 'put', '/home/servo/buildbot/slave/android-nightly/build/target/arm-linux-androideabi/release/servo', 's3://servo-rust/nightly/servo.apk'] environment: ADDRFAM=inet HOME=/home/servo IFACE=eth0 LOGICAL=eth0 METHOD=dhcp PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin PWD=/home/servo/buildbot/slave/android-nightly/build TERM=linux UPSTART_EVENTS=local-filesystems net-device-up UPSTART_INSTANCE= UPSTART_JOB=buildbot-slave using PTY: False ERROR: Parameter problem: Nothing to upload. program finished with exit code 64 elapsedTime=0.231612 ``` http://build.servo.org/builders/android-nightly?numbuilds=60
infrastructure
android nightly build fails to upload to the android nightly compile step was fixed on august but since then the upload step is failing put home servo buildbot slave android nightly build target arm linux androideabi release servo servo rust nightly servo apk in dir home servo buildbot slave android nightly build timeout secs watching logfiles argv environment addrfam inet home home servo iface logical method dhcp path usr local sbin usr local bin usr bin usr sbin sbin bin pwd home servo buildbot slave android nightly build term linux upstart events local filesystems net device up upstart instance upstart job buildbot slave using pty false error parameter problem nothing to upload program finished with exit code elapsedtime
1
14,016
10,578,181,665
IssuesEvent
2019-10-07 21:54:07
DTFsquad/Kodiri-Kodflix-MERN
https://api.github.com/repos/DTFsquad/Kodiri-Kodflix-MERN
opened
App is broken in prod! - create a remote database
Database (Mongo) Infrastructure & DevOps
Oh, we've deployed our app to prod (git push heroku master) and...we've broken it! ![image](https://user-images.githubusercontent.com/36204941/66350918-630f9100-e954-11e9-8b03-2776f6cd6b24.png) the DevOps department has inspected the heroku logs (you can do it by typing heroku logs on a terminal: ![image](https://user-images.githubusercontent.com/36204941/66350939-77538e00-e954-11e9-86ba-3d81293e9c7f.png) Please do whatever necessary to fix the issue, as the app is completely broken at the moment so our customers will get angry soon. **Technical guidance:** The problem is quite obvious: we're trying to connect to a local database (localhost), which is fine at dev time, but it won't work in prod, as then app runs remotely on heroku, and hence doesn't have access to our local system (we'd have a big security issue otherwise!). To fix the problem, we have to create a remote database. We'll trust mLab, as it's widely used in the industry, and it comes with a free plan. 1. Sign up on mLab 2. Create a new database: ![image](https://user-images.githubusercontent.com/36204941/66351077-d44f4400-e954-11e9-9ef3-bd4f93368468.png) ![image](https://user-images.githubusercontent.com/36204941/66351092-db765200-e954-11e9-8c1f-f05090b04a17.png) ![image](https://user-images.githubusercontent.com/36204941/66351102-e630e700-e954-11e9-9917-1680401fe61b.png) ![image](https://user-images.githubusercontent.com/36204941/66351136-f6e15d00-e954-11e9-9423-5ff90663d2cc.png) 3. Click on the newly created database, and add a new user on it: ![image](https://user-images.githubusercontent.com/36204941/66351172-09f42d00-e955-11e9-869c-99bd07ca78ed.png) I'd suggest to user kodflix as a username, but please use a new password for security reasons 4. Prove we can connect to the newly created database from Robo 3T. 5. Create a new collection on it, called shows. 6. Copy each document from the local collection (right-click -> copy JSON), and paste it into the remote one via right-click -> Insert Document Original source: https://github.com/rmallols/kodflix/issues/29
1.0
App is broken in prod! - create a remote database - Oh, we've deployed our app to prod (git push heroku master) and...we've broken it! ![image](https://user-images.githubusercontent.com/36204941/66350918-630f9100-e954-11e9-8b03-2776f6cd6b24.png) the DevOps department has inspected the heroku logs (you can do it by typing heroku logs on a terminal: ![image](https://user-images.githubusercontent.com/36204941/66350939-77538e00-e954-11e9-86ba-3d81293e9c7f.png) Please do whatever necessary to fix the issue, as the app is completely broken at the moment so our customers will get angry soon. **Technical guidance:** The problem is quite obvious: we're trying to connect to a local database (localhost), which is fine at dev time, but it won't work in prod, as then app runs remotely on heroku, and hence doesn't have access to our local system (we'd have a big security issue otherwise!). To fix the problem, we have to create a remote database. We'll trust mLab, as it's widely used in the industry, and it comes with a free plan. 1. Sign up on mLab 2. Create a new database: ![image](https://user-images.githubusercontent.com/36204941/66351077-d44f4400-e954-11e9-9ef3-bd4f93368468.png) ![image](https://user-images.githubusercontent.com/36204941/66351092-db765200-e954-11e9-8c1f-f05090b04a17.png) ![image](https://user-images.githubusercontent.com/36204941/66351102-e630e700-e954-11e9-9917-1680401fe61b.png) ![image](https://user-images.githubusercontent.com/36204941/66351136-f6e15d00-e954-11e9-9423-5ff90663d2cc.png) 3. Click on the newly created database, and add a new user on it: ![image](https://user-images.githubusercontent.com/36204941/66351172-09f42d00-e955-11e9-869c-99bd07ca78ed.png) I'd suggest to user kodflix as a username, but please use a new password for security reasons 4. Prove we can connect to the newly created database from Robo 3T. 5. Create a new collection on it, called shows. 6. Copy each document from the local collection (right-click -> copy JSON), and paste it into the remote one via right-click -> Insert Document Original source: https://github.com/rmallols/kodflix/issues/29
infrastructure
app is broken in prod create a remote database oh we ve deployed our app to prod git push heroku master and we ve broken it the devops department has inspected the heroku logs you can do it by typing heroku logs on a terminal please do whatever necessary to fix the issue as the app is completely broken at the moment so our customers will get angry soon technical guidance the problem is quite obvious we re trying to connect to a local database localhost which is fine at dev time but it won t work in prod as then app runs remotely on heroku and hence doesn t have access to our local system we d have a big security issue otherwise to fix the problem we have to create a remote database we ll trust mlab as it s widely used in the industry and it comes with a free plan sign up on mlab create a new database click on the newly created database and add a new user on it i d suggest to user kodflix as a username but please use a new password for security reasons prove we can connect to the newly created database from robo create a new collection on it called shows copy each document from the local collection right click copy json and paste it into the remote one via right click insert document original source
1