Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 2 665 | labels stringlengths 4 554 | body stringlengths 3 235k | index stringclasses 6 values | text_combine stringlengths 96 235k | label stringclasses 2 values | text stringlengths 96 196k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
34,335 | 29,497,777,293 | IssuesEvent | 2023-06-02 18:34:25 | dart-lang/site-www | https://api.github.com/repos/dart-lang/site-www | closed | Out dated Markdown files with broken links | infrastructure needs-info links | ### Describe the problem
Lots of Markdown files under [src](https://github.com/dart-lang/site-www/tree/main/src) are outdated and links are broken and images are also used there are outdated. lots of files need small reviews and checking if they are working correctly.
ps: I am a beginner issuer so if there is something I don't know please tell me
### Expected fix
according to me as far as i know we should check all the markdown files one by one and check all the links on other pages and images. as well as we should check if the images used there are updated or not.
### Additional context
[index.md](https://github.com/dart-lang/site-www/blob/main/src/web/index.md)
this an example of one of the broken markdown files | 1.0 | Out dated Markdown files with broken links - ### Describe the problem
Lots of Markdown files under [src](https://github.com/dart-lang/site-www/tree/main/src) are outdated and links are broken and images are also used there are outdated. lots of files need small reviews and checking if they are working correctly.
ps: I am a beginner issuer so if there is something I don't know please tell me
### Expected fix
according to me as far as i know we should check all the markdown files one by one and check all the links on other pages and images. as well as we should check if the images used there are updated or not.
### Additional context
[index.md](https://github.com/dart-lang/site-www/blob/main/src/web/index.md)
this an example of one of the broken markdown files | infrastructure | out dated markdown files with broken links describe the problem lots of markdown files under are outdated and links are broken and images are also used there are outdated lots of files need small reviews and checking if they are working correctly ps i am a beginner issuer so if there is something i don t know please tell me expected fix according to me as far as i know we should check all the markdown files one by one and check all the links on other pages and images as well as we should check if the images used there are updated or not additional context this an example of one of the broken markdown files | 1 |
97,154 | 28,112,999,548 | IssuesEvent | 2023-03-31 08:40:26 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | opened | Remove examples in favour of our demo | T: Enhancement C: Documentation C: Build P: Medium E: All Editions | The various examples don't really show how jOOQ works. They do show how jOOQ can be integrated with some third parties, like flyway or javafx, but that doesn't make for really interesting examples.
The demo on the other hand shows tons of actual jOOQ API examples and is much more interesting to work with.
I have always wanted to move the examples into a separate repository, such that we can have them work with the latest jOOQ version, rather than with the current snapshot:
https://github.com/jOOQ/jOOQ/issues/3846
But if we just remove them, that problem will be solved more simply.
Some examples are worth retaining somewhere, including:
- [ ] https://github.com/jOOQ/jOOQ/tree/main/jOOQ-examples/jOOQ-checker-framework-example
- [ ] https://github.com/jOOQ/jOOQ/tree/main/jOOQ-examples/jOOQ-oracle-example
- [ ] https://github.com/jOOQ/jOOQ/tree/main/jOOQ-examples/jOOQ-spring-boot-example
But maybe, this can be done later.
----
See also:
- https://github.com/jOOQ/jOOQ/tree/main/jOOQ-examples
- https://github.com/jOOQ/demo | 1.0 | Remove examples in favour of our demo - The various examples don't really show how jOOQ works. They do show how jOOQ can be integrated with some third parties, like flyway or javafx, but that doesn't make for really interesting examples.
The demo on the other hand shows tons of actual jOOQ API examples and is much more interesting to work with.
I have always wanted to move the examples into a separate repository, such that we can have them work with the latest jOOQ version, rather than with the current snapshot:
https://github.com/jOOQ/jOOQ/issues/3846
But if we just remove them, that problem will be solved more simply.
Some examples are worth retaining somewhere, including:
- [ ] https://github.com/jOOQ/jOOQ/tree/main/jOOQ-examples/jOOQ-checker-framework-example
- [ ] https://github.com/jOOQ/jOOQ/tree/main/jOOQ-examples/jOOQ-oracle-example
- [ ] https://github.com/jOOQ/jOOQ/tree/main/jOOQ-examples/jOOQ-spring-boot-example
But maybe, this can be done later.
----
See also:
- https://github.com/jOOQ/jOOQ/tree/main/jOOQ-examples
- https://github.com/jOOQ/demo | non_infrastructure | remove examples in favour of our demo the various examples don t really show how jooq works they do show how jooq can be integrated with some third parties like flyway or javafx but that doesn t make for really interesting examples the demo on the other hand shows tons of actual jooq api examples and is much more interesting to work with i have always wanted to move the examples into a separate repository such that we can have them work with the latest jooq version rather than with the current snapshot but if we just remove them that problem will be solved more simply some examples are worth retaining somewhere including but maybe this can be done later see also | 0 |
670,299 | 22,684,375,446 | IssuesEvent | 2022-07-04 12:50:40 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | old.reddit.com - site is not usable | browser-firefox priority-critical engine-gecko | <!-- @browser: Firefox 102.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/106873 -->
**URL**: https://old.reddit.com/hot/
**Browser / Version**: Firefox 102.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
102 will not load reddit or old.reddit. Switching to private does not fix the issue. The pages load normally in Chrome.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | old.reddit.com - site is not usable - <!-- @browser: Firefox 102.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/106873 -->
**URL**: https://old.reddit.com/hot/
**Browser / Version**: Firefox 102.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
102 will not load reddit or old.reddit. Switching to private does not fix the issue. The pages load normally in Chrome.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_infrastructure | old reddit com site is not usable url browser version firefox operating system windows tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce will not load reddit or old reddit switching to private does not fix the issue the pages load normally in chrome browser configuration none from with ❤️ | 0 |
89,007 | 17,772,048,868 | IssuesEvent | 2021-08-30 14:41:49 | googleapis/python-video-transcoder | https://api.github.com/repos/googleapis/python-video-transcoder | closed | Migrate from master to main branch | type: process api: transcoder | As part of the umbrella issue googleapis/google-cloud-python#10579, we need to switch the default branch from `master` to `main`. Also, all occurrences of `master` should be renamed to `main` (except in cases where URLs could be broken, because the migration has not happened yet). | 1.0 | Migrate from master to main branch - As part of the umbrella issue googleapis/google-cloud-python#10579, we need to switch the default branch from `master` to `main`. Also, all occurrences of `master` should be renamed to `main` (except in cases where URLs could be broken, because the migration has not happened yet). | non_infrastructure | migrate from master to main branch as part of the umbrella issue googleapis google cloud python we need to switch the default branch from master to main also all occurrences of master should be renamed to main except in cases where urls could be broken because the migration has not happened yet | 0 |
2,839 | 3,908,944,065 | IssuesEvent | 2016-04-19 17:32:40 | openSAIL/spec_plots | https://api.github.com/repos/openSAIL/spec_plots | closed | Run code through PEP8 style checker. | fixed infrastructure | Might as well get this into the normal standard, also gain experience using pyFlakes (recommended by python instructor at a week-long course I took). | 1.0 | Run code through PEP8 style checker. - Might as well get this into the normal standard, also gain experience using pyFlakes (recommended by python instructor at a week-long course I took). | infrastructure | run code through style checker might as well get this into the normal standard also gain experience using pyflakes recommended by python instructor at a week long course i took | 1 |
37,292 | 18,264,700,907 | IssuesEvent | 2021-10-04 06:57:31 | rakudo/rakudo | https://api.github.com/repos/rakudo/rakudo | closed | `where` in single `multi` vs. `sub` is 10x slower | performance dispatching | Notice that simply changing `sub` to `multi` makes this code 10x slower. At first, I thought auto-genned proto were the cause, but even if you add one of your own the same issue exists.
I'm guessing this might be due to candidate selection process invoking the `where` just to figure out
if types match, but is there any improvements we can make here?
```
<Zoffix_> m: sub foo ($ where rand.so) {}; my $n = 42e0; { for ^100_000 { foo $n }; say now - ENTER now; }
<camelia> rakudo-moar 64bdb3dd7: OUTPUT: «0.11642038»
<Zoffix_> m: multi foo ($ where rand.so) {}; my $n = 42e0; { for ^100_000 { foo $n }; say now - ENTER now; }
<camelia> rakudo-moar 64bdb3dd7: OUTPUT: «1.23743391»
<Zoffix_> m: proto foo(|) {*}; multi foo ($ where rand.so) {}; my $n = 42e0; { for ^100_000 { foo $n }; say now - ENTER now; }
<camelia> rakudo-moar 64bdb3dd7: OUTPUT: «1.25060981»
<Zoffix_> m: say 1.23743391 / 0.11642038
<camelia> rakudo-moar 64bdb3dd7: OUTPUT: «10.629014525»
``` | True | `where` in single `multi` vs. `sub` is 10x slower - Notice that simply changing `sub` to `multi` makes this code 10x slower. At first, I thought auto-genned proto were the cause, but even if you add one of your own the same issue exists.
I'm guessing this might be due to candidate selection process invoking the `where` just to figure out
if types match, but is there any improvements we can make here?
```
<Zoffix_> m: sub foo ($ where rand.so) {}; my $n = 42e0; { for ^100_000 { foo $n }; say now - ENTER now; }
<camelia> rakudo-moar 64bdb3dd7: OUTPUT: «0.11642038»
<Zoffix_> m: multi foo ($ where rand.so) {}; my $n = 42e0; { for ^100_000 { foo $n }; say now - ENTER now; }
<camelia> rakudo-moar 64bdb3dd7: OUTPUT: «1.23743391»
<Zoffix_> m: proto foo(|) {*}; multi foo ($ where rand.so) {}; my $n = 42e0; { for ^100_000 { foo $n }; say now - ENTER now; }
<camelia> rakudo-moar 64bdb3dd7: OUTPUT: «1.25060981»
<Zoffix_> m: say 1.23743391 / 0.11642038
<camelia> rakudo-moar 64bdb3dd7: OUTPUT: «10.629014525»
``` | non_infrastructure | where in single multi vs sub is slower notice that simply changing sub to multi makes this code slower at first i thought auto genned proto were the cause but even if you add one of your own the same issue exists i m guessing this might be due to candidate selection process invoking the where just to figure out if types match but is there any improvements we can make here m sub foo where rand so my n for foo n say now enter now rakudo moar output « » m multi foo where rand so my n for foo n say now enter now rakudo moar output « » m proto foo multi foo where rand so my n for foo n say now enter now rakudo moar output « » m say rakudo moar output « » | 0 |
5,803 | 5,964,189,324 | IssuesEvent | 2017-05-30 08:10:58 | AdguardTeam/AdguardFilters | https://api.github.com/repos/AdguardTeam/AdguardFilters | opened | Extended css rule is not included into the filter | Bug Infrastructure | Here is the rule:
https://github.com/AdguardTeam/AdguardFilters/blob/master/GermanFilter/sections/css_extended.txt#L14
Rule text:
`berlin.de##.teaser[-ext-has='>.inner>h3>a.trakkking,>.inner>h3:not(:has(>*))']`
But it is not included into the filter:
https://filters.adtidy.org/windows/filters/6.txt | 1.0 | Extended css rule is not included into the filter - Here is the rule:
https://github.com/AdguardTeam/AdguardFilters/blob/master/GermanFilter/sections/css_extended.txt#L14
Rule text:
`berlin.de##.teaser[-ext-has='>.inner>h3>a.trakkking,>.inner>h3:not(:has(>*))']`
But it is not included into the filter:
https://filters.adtidy.org/windows/filters/6.txt | infrastructure | extended css rule is not included into the filter here is the rule rule text berlin de teaser but it is not included into the filter | 1 |
194,810 | 6,899,404,852 | IssuesEvent | 2017-11-24 13:40:11 | SelfKeyFoundation/selfkey-token | https://api.github.com/repos/SelfKeyFoundation/selfkey-token | closed | Quarterly vesting for founders' tokens | feature low priority | Vesting for founders' tokens should be done on a quarterly basis (each 3 months a batch is released) for a total of 4 releases. | 1.0 | Quarterly vesting for founders' tokens - Vesting for founders' tokens should be done on a quarterly basis (each 3 months a batch is released) for a total of 4 releases. | non_infrastructure | quarterly vesting for founders tokens vesting for founders tokens should be done on a quarterly basis each months a batch is released for a total of releases | 0 |
28,737 | 23,471,266,942 | IssuesEvent | 2022-08-16 22:11:38 | flutter/website | https://api.github.com/repos/flutter/website | closed | [PAGE ISSUE]: 'Flutter architectural overview' | duplicate infrastructure p2-medium | ### Page URL
https://docs.flutter.dev/resources/architectural-overview/
### Page source
https://github.com/flutter/website/tree/main/src/resources/architectural-overview.md
### Describe the problem
Flutter official documentation is missing dark theme. Would be awesome if you added a functionality to switch to the dark theme.
### Expected fix
_No response_
### Additional context
_No response_ | 1.0 | [PAGE ISSUE]: 'Flutter architectural overview' - ### Page URL
https://docs.flutter.dev/resources/architectural-overview/
### Page source
https://github.com/flutter/website/tree/main/src/resources/architectural-overview.md
### Describe the problem
Flutter official documentation is missing dark theme. Would be awesome if you added a functionality to switch to the dark theme.
### Expected fix
_No response_
### Additional context
_No response_ | infrastructure | flutter architectural overview page url page source describe the problem flutter official documentation is missing dark theme would be awesome if you added a functionality to switch to the dark theme expected fix no response additional context no response | 1 |
326,510 | 27,996,016,685 | IssuesEvent | 2023-03-27 08:31:38 | microsoft/AzureStorageExplorer | https://api.github.com/repos/microsoft/AzureStorageExplorer | opened | Succeed to rename/move one file in a SAS attached file share with permissions 'Read, List, Delete' | 🧪 testing :gear: sas :gear: files | **Storage Explorer Version:** 1.29.0-dev
**Build Number:** 20230327.1
**Branch:** main
**Platform/OS:** Windows 10/Linux Ubuntu 22.04/MacOS Ventura 13.2.1 (Apple M1 Pro)
**Architecture:** ia32/x64
**How Found:** From running test cases
**Regression From:** Not a regression
## Steps to Reproduce ##
1. Expand one storage account -> File Shares.
2. Create a file share -> Upload a file.
3. SAS attach the file share with permissions 'Read, List, Delete'.
4. Switch to the attached file share -> Right click the file.
5. Click 'Rename...' -> Input a valid name -> Click 'Rename'.
6. Check whether fails to rename the file.
## Expected Experience ##
Fail to rename the file.
## Actual Experience ##
Succeed to rename the file.
## Additional Context ##
1. This issue also reproduces in one SAS attached file share with permissions 'Read, List, Write'.
2. This issue also reproduces when the SAS URL is created by access policy.
3. This issue does not reproduce for one ADLS Gen2 blob container. | 1.0 | Succeed to rename/move one file in a SAS attached file share with permissions 'Read, List, Delete' - **Storage Explorer Version:** 1.29.0-dev
**Build Number:** 20230327.1
**Branch:** main
**Platform/OS:** Windows 10/Linux Ubuntu 22.04/MacOS Ventura 13.2.1 (Apple M1 Pro)
**Architecture:** ia32/x64
**How Found:** From running test cases
**Regression From:** Not a regression
## Steps to Reproduce ##
1. Expand one storage account -> File Shares.
2. Create a file share -> Upload a file.
3. SAS attach the file share with permissions 'Read, List, Delete'.
4. Switch to the attached file share -> Right click the file.
5. Click 'Rename...' -> Input a valid name -> Click 'Rename'.
6. Check whether fails to rename the file.
## Expected Experience ##
Fail to rename the file.
## Actual Experience ##
Succeed to rename the file.
## Additional Context ##
1. This issue also reproduces in one SAS attached file share with permissions 'Read, List, Write'.
2. This issue also reproduces when the SAS URL is created by access policy.
3. This issue does not reproduce for one ADLS Gen2 blob container. | non_infrastructure | succeed to rename move one file in a sas attached file share with permissions read list delete storage explorer version dev build number branch main platform os windows linux ubuntu macos ventura apple pro architecture how found from running test cases regression from not a regression steps to reproduce expand one storage account file shares create a file share upload a file sas attach the file share with permissions read list delete switch to the attached file share right click the file click rename input a valid name click rename check whether fails to rename the file expected experience fail to rename the file actual experience succeed to rename the file additional context this issue also reproduces in one sas attached file share with permissions read list write this issue also reproduces when the sas url is created by access policy this issue does not reproduce for one adls blob container | 0 |
34,344 | 29,504,742,105 | IssuesEvent | 2023-06-03 06:44:19 | oven-sh/bun | https://api.github.com/repos/oven-sh/bun | closed | Zig stage1 compiler needs 11 GB of ram to compile a debug build of Bun | infrastructure | A lot of perf stuff for Bun happens at compile time, which costs memory
This will be fixed eventually by Zig's stage2 compiler. However, need to think about some ways to work around this in the meantime. It is okay for me on my machine with 64 GB of ram, but this will be a barrier for contributors. | 1.0 | Zig stage1 compiler needs 11 GB of ram to compile a debug build of Bun - A lot of perf stuff for Bun happens at compile time, which costs memory
This will be fixed eventually by Zig's stage2 compiler. However, need to think about some ways to work around this in the meantime. It is okay for me on my machine with 64 GB of ram, but this will be a barrier for contributors. | infrastructure | zig compiler needs gb of ram to compile a debug build of bun a lot of perf stuff for bun happens at compile time which costs memory this will be fixed eventually by zig s compiler however need to think about some ways to work around this in the meantime it is okay for me on my machine with gb of ram but this will be a barrier for contributors | 1 |
828,037 | 31,808,404,184 | IssuesEvent | 2023-09-13 15:14:16 | sciris/sciris | https://api.github.com/repos/sciris/sciris | opened | Consider sshcluster | enhancement lowpriority big | There are lots of ultra-powerful distributed compute methods, but not a lot of just-powerful-enough ones that do exactly what you need and nothing more (or less). Consider the situation where:
- You have a list of known VMs, accessible via SSH, with conda installed
- Environments change rarely, and are always handled by conda
- Scripts change frequently, but are accessible via GitHub
- Everything is pure Python
Then it should be "pretty easy" to create a seamless user experience via the following tools:
- [ ] Remote Jupyter servers / ipykernels
- [ ] Port forward over SSH
- [ ] Automatic syncing via Git
Example:
```py
import numpy as np
import sshcluster
user = 'testuser'
hosts = ['129.4.12.3', '72.58.2.8'] # the hosts to connect to
env = 'myenv' # conda environment on each host
repo = 'http://github.com/myorg/myproj' # the repo to sync from
wd = '/home/testuser/myproj' # the local folder for the repo
def job(x):
    out = 0
    for i in range(int(1e9)):  # range() needs an int, not the float 1e9
        out += i*x
    return out
clust = sshcluster.make(user=user, hosts=hosts, env=env, repo=repo, wd=wd)
clust.sync() # syncs GitHub and checks versions on each environment
clust.submit(job, x=np.arange(100)) # starts the jobs; can disconnect the client now
...
result = clust.get() # gets the results
```
Alternative: [ray](https://ray-robert.readthedocs.io/en/latest/using-ray-on-a-large-cluster.html) is probably the closest, but requires a lot of configuration.
NB: if implemented, this would be a separate library, not part of Sciris. | 1.0 | Consider sshcluster - There are lots of ultra-powerful distributed compute methods, but not a lot of just-powerful-enough ones that do exactly what you need and nothing more (or less). Consider the situation where:
- You have a list of known VMs, accessible via SSH, with conda installed
- Environments change rarely, and are always handled by conda
- Scripts change frequently, but are accessible via GitHub
- Everything is pure Python
Then it should be "pretty easy" to create a seamless user experience via the following tools:
- [ ] Remote Jupyter servers / ipykernels
- [ ] Port forward over SSH
- [ ] Automatic syncing via Git
Example:
```py
import numpy as np
import sshcluster
user = 'testuser'
hosts = ['129.4.12.3', '72.58.2.8'] # the hosts to connect to
env = 'myenv' # conda environment on each host
repo = 'http://github.com/myorg/myproj' # the repo to sync from
wd = '/home/testuser/myproj' # the local folder for the repo
def job(x):
    out = 0
    for i in range(int(1e9)):  # range() needs an int, not the float 1e9
        out += i*x
    return out
clust = sshcluster.make(user=user, hosts=hosts, env=env, repo=repo, wd=wd)
clust.sync() # syncs GitHub and checks versions on each environment
clust.submit(job, x=np.arange(100)) # starts the jobs; can disconnect the client now
...
result = clust.get() # gets the results
```
Alternative: [ray](https://ray-robert.readthedocs.io/en/latest/using-ray-on-a-large-cluster.html) is probably the closest, but requires a lot of configuration.
NB: if implemented, this would be a separate library, not part of Sciris. | non_infrastructure | consider sshcluster there are lots of ultra powerful distributed compute methods but not a lot of just powerful enough ones that do exactly what you need and nothing more or less consider the situation where you have a list of known vms accessible via ssh with conda installed environments change rarely and are always handled by conda scripts change frequently but are accessible via github everything is pure python then it should be pretty easy to create a seamless user experience via the following tools remote jupyter servers ipykernels port forward over ssh automatic syncing via git example py import sshcluster user testuser hosts the hosts to connect to env myenv conda environment on each host repo the repo to sync from wd home testuser myproj the local folder for the repo def job x out for i in range out i x return out clust sshcluster make user user hosts hosts env env repo repo wd wd clust sync syncs github and checks versions on each environment clust submit job x np arange starts the jobs can disconnect the client now result clust get gets the results alternative is probably the closest but requires a lot of configuration nb if implemented this would be a separate library not part of sciris | 0 |
74,109 | 7,375,101,663 | IssuesEvent | 2018-03-13 22:43:56 | Esri/crowdsource-manager | https://api.github.com/repos/Esri/crowdsource-manager | closed | BUG-000110591 The search function in Crowdsource Manager returns all results if the web map has a search function defined and contains a hosted feature layer that has set of filters applied using the expression 'Any'. | Bug Delivered Medium Triage test case | <div><div><ol><li>Add a hosted feature layer to a new Map Viewer. </li><li>Under the content, hover over the layer and click on the Filter icon.</li><li>In the Filter window, click on '+Add another expression', and select the expression 'Any' in the line that states "Display features in the layer that match ....". </li><li>Select filters of your choosing for both expressions and click Apply Filter.</li><li>Save the map as New map.</li><li>Navigate to Content, and open the newly create web map to view the Overview page.</li><li>Click on the Settings tab, and scroll down to 'Find Locations', under Application Settings.</li><li>Check the option 'By Layer', and click on 'Add Layer'.</li><li>Select a layer of your choosing and click Save.</li><li>Share the item with a group (this is to create a Crowdsource Manager app). If prompted to update the sharing settings of the hosted feature layer in the web map, click Ok.</li><li>Navigate back to the Content page.</li><li>Click on 'Create', and select 'Using a Template' under App.
<ul><li>Sample of the app: <a href="http://ess.maps.arcgis.com/home/item.html?id=18bad0dda684457fac2d8176539ba68b" target="_blank">http://ess.maps.arcgis.com/home/item.html?id=18bad0dda684457fac2d8176539ba68b</a><br> (The search in the sample is based on Reference).</li></ul>
</li><li>Select the Scrowsource Manager and proceed to create web app.</li><li>In the web app configuration page, select the group you shared the map with, and then select the web map saved earlier.</li><li>Save the app, and click on 'Launch'.</li><li>Click on the search icon at the top right, and search for an item based on the layer set earlier in step 8. Notice the search doesn't work, instead it returns all results.</li></ol>
Notes:
<ul><li>This doesn't happen if the set of filters had the expression 'All' in step 3. </li><li>Reproduced on Portal for ArcGIS 10.5.1 as well.</li><li>Attached are screenshots comparing both search results of filter sets using 'Any' and 'All' expressions.</li></ul></div><div>
<span><b>Salesforce ID:</b> BUG-000110591</span><br/>
<span><b>Salesforce Submitter:</b> Ahmed Abdulwahab</span><br/>
<span><b>Salesforce Submit Date:</b> 1/05/2018 12:53 PM</span><br/>
<span><b>Salesforce Bug Type:</b> Failure/Error</span><br/>
<span><b>Salesforce Severity:</b> Medium</span><br/>
<span><b>Repro Data:</b> \\esri.com\sf_filestore\PRD\Attachments\Defects\BUG-000110591</span><br/>
<span><b>Work Around:</b> (n/a)</span><br/>
<span><b>Product:</b> (n/a)</span><br/>
<span><b>Functional Category:</b> ArcGIS Online Application Templates</span><br/>
<span><b>Client Platform:</b> (n/a)</span><br/>
<span><b>Version Found:</b> N/A</span><br/>
<span><b>Planned Version Fixed:</b> (n/a)</span><br/>
<span><b>Comment:</b> (n/a)</span><br/>
</div></div>
| 1.0 | BUG-000110591 The search function in Crowdsource Manager returns all results if the web map has a search function defined and contains a hosted feature layer that has set of filters applied using the expression 'Any'. - <div><div><ol><li>Add a hosted feature layer to a new Map Viewer. </li><li>Under the content, hover over the layer and click on the Filter icon.</li><li>In the Filter window, click on '+Add another expression', and select the expression 'Any' in the line that states "Display features in the layer that match ....". </li><li>Select filters of your choosing for both expressions and click Apply Filter.</li><li>Save the map as New map.</li><li>Navigate to Content, and open the newly create web map to view the Overview page.</li><li>Click on the Settings tab, and scroll down to 'Find Locations', under Application Settings.</li><li>Check the option 'By Layer', and click on 'Add Layer'.</li><li>Select a layer of your choosing and click Save.</li><li>Share the item with a group (this is to create a Crowdsource Manager app). If prompted to update the sharing settings of the hosted feature layer in the web map, click Ok.</li><li>Navigate back to the Content page.</li><li>Click on 'Create', and select 'Using a Template' under App.
<ul><li>Sample of the app: <a href="http://ess.maps.arcgis.com/home/item.html?id=18bad0dda684457fac2d8176539ba68b" target="_blank">http://ess.maps.arcgis.com/home/item.html?id=18bad0dda684457fac2d8176539ba68b</a><br> (The search in the sample is based on Reference).</li></ul>
</li><li>Select the Scrowsource Manager and proceed to create web app.</li><li>In the web app configuration page, select the group you shared the map with, and then select the web map saved earlier.</li><li>Save the app, and click on 'Launch'.</li><li>Click on the search icon at the top right, and search for an item based on the layer set earlier in step 8. Notice the search doesn't work, instead it returns all results.</li></ol>
Notes:
<ul><li>This doesn't happen if the set of filters had the expression 'All' in step 3. </li><li>Reproduced on Portal for ArcGIS 10.5.1 as well.</li><li>Attached are screenshots comparing both search results of filter sets using 'Any' and 'All' expressions.</li></ul></div><div>
<span><b>Salesforce ID:</b> BUG-000110591</span><br/>
<span><b>Salesforce Submitter:</b> Ahmed Abdulwahab</span><br/>
<span><b>Salesforce Submit Date:</b> 1/05/2018 12:53 PM</span><br/>
<span><b>Salesforce Bug Type:</b> Failure/Error</span><br/>
<span><b>Salesforce Severity:</b> Medium</span><br/>
<span><b>Repro Data:</b> \\esri.com\sf_filestore\PRD\Attachments\Defects\BUG-000110591</span><br/>
<span><b>Work Around:</b> (n/a)</span><br/>
<span><b>Product:</b> (n/a)</span><br/>
<span><b>Functional Category:</b> ArcGIS Online Application Templates</span><br/>
<span><b>Client Platform:</b> (n/a)</span><br/>
<span><b>Version Found:</b> N/A</span><br/>
<span><b>Planned Version Fixed:</b> (n/a)</span><br/>
<span><b>Comment:</b> (n/a)</span><br/>
</div></div>
 | non_infrastructure | bug the search function in crowdsource manager returns all results if the web map has a search function defined and contains a hosted feature layer that has set of filters applied using the expression any add a hosted feature layer to a new map viewer under the content hover over the layer and click on the filter icon in the filter window click on add another expression and select the expression any in the line that states quot display features in the layer that match quot select filters of your choosing for both expressions and click apply filter save the map as new map navigate to content and open the newly create web map to view the overview page click on the settings tab and scroll down to find locations under application settings check the option by layer and click on add layer select a layer of your choosing and click save share the item with a group this is to create a crowdsource manager app if prompted to update the sharing settings of the hosted feature layer in the web map click ok navigate back to the content page click on create and select using a template under app sample of the app the search in the sample is based on reference select the scrowsource manager and proceed to create web app in the web app configuration page select the group you shared the map with and then select the web map saved earlier save the app and click on launch click on the search icon at the top right and search for an item based on the layer set earlier in step notice the search doesn t work instead it returns all results notes this doesn t happen if the set of filters had the expression all in step reproduced on portal for arcgis as well attached are screenshots comparing both search results of filter sets using any and all expressions salesforce id bug salesforce submitter ahmed abdulwahab salesforce submit date pm salesforce bug type failure error salesforce severity medium repro data esri com sf filestore prd attachments defects bug work around n a product n a functional category arcgis online application templates client platform n a version found n a planned version fixed n a comment n a | 0 |
271,477 | 23,606,527,948 | IssuesEvent | 2022-08-24 08:44:40 | wazuh/wazuh-qa | https://api.github.com/repos/wazuh/wazuh-qa | closed | E2E tests: Refactor timeouts and tasks names | team/qa type/enhancement subteam/qa-hurricane test/e2e | ## Description
The E2E tests have been developed in #2872 with the goal of being run manually. However, after that development, we have identified a number of necessary changes that are common to, or impact, all E2E tests.
Right now, the default timeout for finding an event is 20 seconds, to allow for events that take a long time to appear. However, since our alerts are already obtained, we don't need to allow a time frame for the alert to appear, so we can reduce the timeout; in case of error, the test will then finish faster.
In addition, we need to rename some tasks, especially in the configuration playbook, to more descriptive names.
## Tasks
- [x] (**T1**) Decrease timeouts
- [x] (**T2**) Rename tasks
- [x] (**T3**) Rename hosts
- [x] (**T4**) Upload external files to S3
- [x] (**T5**) Fix timestamps regex | 1.0 | E2E tests: Refactor timeouts and tasks names - ## Description
The E2E tests have been developed in #2872 with the goal of being run manually. However, after that development, we have identified a number of necessary changes that are common to, or impact, all E2E tests.
Right now, the default timeout for finding an event is 20 seconds, to allow for events that take a long time to appear. However, since our alerts are already obtained, we don't need to allow a time frame for the alert to appear, so we can reduce the timeout; in case of error, the test will then finish faster.
In addition, we need to rename some tasks, especially in the configuration playbook, to more descriptive names.
## Tasks
- [x] (**T1**) Decrease timeouts
- [x] (**T2**) Rename tasks
- [x] (**T3**) Rename hosts
- [x] (**T4**) Upload external files to S3
- [x] (**T5**) Fix timestamps regex | non_infrastructure | tests refactor timeouts and tasks names description the tests have been developed in with the goal of being run manually however after such development we have picked up a number of necessary changes that are common or impact all tests right now the default timeout to find an event is seconds in case the event takes a long time to appear but since our alerts are already obtained we don t have to give a time frame for the alert to appear so we could reduce the timeout and in case of error the test will finish faster in addition we need to rename some tasks especially in the configuration playbook to more descriptive names tasks decrease timeouts rename tasks rename hosts upload external files to fix timestamps regex | 0 |
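The timeout change discussed in the E2E issue above follows a common polling pattern: wait for a matching event up to a configurable deadline, where a shorter deadline makes failing tests finish faster when the data is already available. A minimal sketch of that pattern (function and parameter names are hypothetical, not Wazuh's actual framework API):

```python
import time


def wait_for_event(fetch_events, predicate, timeout=2.0, poll_interval=0.1):
    """Poll fetch_events() until predicate matches an event or the timeout expires.

    A short timeout is appropriate when the data is already collected
    (e.g. alerts already obtained), so a miss fails fast instead of
    waiting the full window.
    """
    deadline = time.monotonic() + timeout
    while True:
        for event in fetch_events():
            if predicate(event):
                return event
        if time.monotonic() >= deadline:
            raise TimeoutError("no matching event within %.1fs" % timeout)
        time.sleep(poll_interval)
```

Dropping `timeout` from 20 seconds to a couple of seconds in such a helper changes nothing for passing tests (they return on the first match) and only shortens the failure path.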
29,948 | 24,422,453,057 | IssuesEvent | 2022-10-05 21:45:03 | opendatahub-io/odh-dashboard | https://api.github.com/repos/opendatahub-io/odh-dashboard | closed | [Bug]: ISV QuickStarts show up on Quick start when application is not installed or feature is disabled | kind/bug infrastructure priority/high | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The Quickstart and documentation for the ISV will still be visible for the following conditions:
- ISV CR is removed
- Feature flag is set and the ISV CR is removed.
### Expected Behavior
The Quickstart and documentation should not show up when the ISV CR is removed, even if the feature flag is set.
### Steps To Reproduce
1. Remove an ISV CR instance under the ODHApplication CRD that has a Quickstart or Documentation feature flag (e.g. Anaconda).
2. Enable the feature flag in features.json (e.g. "gpu-computing": true)
3. Verify the documentation and quick starts are not visible in the Resource page.
### Workaround (if any)
None
### OpenShift Infrastructure Version
N/A
### Openshift Version
N/A
### What browsers are you seeing the problem on?
Firefox, Chrome, Safari, Microsoft Edge
### Open Data Hub Version
N/A
### Relevant log output
_No response_ | 1.0 | [Bug]: ISV QuickStarts show up on Quick start when application is not installed or feature is disabled - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The Quickstart and documentation for the ISV will still be visible for the following conditions:
- ISV CR is removed
- Feature flag is set and the ISV CR is removed.
### Expected Behavior
The Quickstart and documentation should not show up when the ISV CR is removed, even if the feature flag is set.
### Steps To Reproduce
1. Remove an ISV CR instance under the ODHApplication CRD that has a Quickstart or Documentation feature flag (e.g. Anaconda).
2. Enable the feature flag in features.json (e.g. "gpu-computing": true)
3. Verify the documentation and quick starts are not visible in the Resource page.
### Workaround (if any)
None
### OpenShift Infrastructure Version
N/A
### Openshift Version
N/A
### What browsers are you seeing the problem on?
Firefox, Chrome, Safari, Microsoft Edge
### Open Data Hub Version
N/A
### Relevant log output
_No response_ | infrastructure | isv quickstarts show up on quick start when application is not installed or feature is disabled is there an existing issue for this i have searched the existing issues current behavior the quickstart and documentation for the isv will still be visible for the following conditions isv cr is removed feature flag is set and the isv cr is removed expected behavior the quickstart and documentation should not show up when the isv cr is removed even if the feature flag is set steps to reproduce remove and isv cr instance under odhapplication crd that has a quickstart or documentation feature flag e g anaconda enable the feature flag in features json e g gpu computing true verify the documentation and quick starts are not visible in the resource page workaround if any none openshift infrastructure version n a openshift version n a what browsers are you seeing the problem on firefox chrome safari microsoft edge open data hub version n a relevant log output no response | 1 |
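The expected behavior in the dashboard issue above amounts to a simple visibility rule: a set feature flag must never surface an application's quick starts or docs when the ISV CR itself is absent. A hedged sketch of that rule (the dict structure and field names are illustrative, not the actual odh-dashboard data model):

```python
def visible_resources(apps, enabled_flags):
    """Return docs/quick-start resources only for apps whose CR exists.

    `apps` maps app name -> {"cr_installed": bool,
                             "feature_flag": str or None,
                             "resources": [...]}.
    A feature flag in `enabled_flags` must not override a missing CR.
    """
    out = []
    for name, app in apps.items():
        flag = app.get("feature_flag")
        flag_ok = flag is None or flag in enabled_flags
        # Both conditions must hold: CR present AND (no flag, or flag enabled).
        if app.get("cr_installed") and flag_ok:
            out.extend(app.get("resources", []))
    return out
```

With this rule, an app like the Anaconda example (flag `gpu-computing` enabled, CR removed) contributes no resources, which is the expected behavior the report describes.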
22,322 | 15,102,729,109 | IssuesEvent | 2021-02-08 09:23:36 | unitaryfund/mitiq | https://api.github.com/repos/unitaryfund/mitiq | closed | Infrastructure for hosting literature results reproduced with Mitiq | infrastructure | As discussed in #365, I think notebooks are the best way to show how literature results can be reproduced with Mitiq. If there is a consensus on this, we need to decide where to host them. The only current notebook is in [mitiq-internal](https://github.com/unitaryfund/mitiq-internal/tree/master/lit/kandala) which is UF exclusive. | 1.0 | Infrastructure for hosting literature results reproduced with Mitiq - As discussed in #365, I think notebooks are the best way to show how literature results can be reproduced with Mitiq. If there is a consensus on this, we need to decide where to host them. The only current notebook is in [mitiq-internal](https://github.com/unitaryfund/mitiq-internal/tree/master/lit/kandala) which is UF exclusive. | infrastructure | infrastructure for hosting literature results reproduced with mitiq as discussed in i think notebooks are the best way to show how literature results can be reproduced with mitiq if there is a consensus on this we need to decide where to host them the only current notebook is in which is uf exclusive | 1 |
32,468 | 26,717,355,214 | IssuesEvent | 2023-01-28 17:47:31 | cookiecutter/cookiecutter-django | https://api.github.com/repos/cookiecutter/cookiecutter-django | closed | Build on master branch is broken | project infrastructure | ## What happened?
Just merged a PR, and the build failed with an installation error. The error happens when pre-commit tries to install isort using Poetry's build backend, which had a [new version](https://github.com/python-poetry/poetry-core/releases/tag/1.5.0) released a few hours ago:
- https://github.com/PyCQA/isort/issues/2077
- https://github.com/PyCQA/isort/pull/2078
Not an issue with this repo, but I'm opening this for visibility. | 1.0 | Build on master branch is broken - ## What happened?
Just merged a PR, and the build failed with an installation error. The error happens when pre-commit tries to install isort using Poetry's build backend, which had a [new version](https://github.com/python-poetry/poetry-core/releases/tag/1.5.0) released a few hours ago:
- https://github.com/PyCQA/isort/issues/2077
- https://github.com/PyCQA/isort/pull/2078
Not an issue with this repo, but I'm opening this for visibility. | infrastructure | build on master branch is broken what happened just merged a pr and the build failed with an installtion error the error happens when pre commit tries to install isort using poetry s build backend which had released a few hours ago not an issue with this repo but i m opening this for visibility | 1 |
26,498 | 20,160,571,230 | IssuesEvent | 2022-02-09 21:02:58 | cal-itp/data-infra | https://api.github.com/repos/cal-itp/data-infra | closed | Research: Are joins failing due to whitespace? | infrastructure | Spun out of #914. We identified a feed that had corrupt data (`service_id` had leading whitespace in one file but not another), leading to a failed join and missing data in final views. We are wondering whether other joins are running into this same issue, so we would want to run some checks in BigQuery to see whether there are other joins where agencies are being dropped due to whitespace mismatches.
cc @holly-g for prioritization. We basically aren't sure of impacts right now because we aren't sure whether other feeds/joins are affected, so this would be a research task to figure that out. @Nkdiaz had some ideas about how to approach this. | 1.0 | Research: Are joins failing due to whitespace? - Spun out of #914. We identified a feed that had corrupt data (`service_id` had leading whitespace in one file but not another), leading to a failed join and missing data in final views. We are wondering whether other joins are running into this same issue, so we would want to run some checks in BigQuery to see whether there are other joins where agencies are being dropped due to whitespace mismatches.
cc @holly-g for prioritization. We basically aren't sure of impacts right now because we aren't sure whether other feeds/joins are affected, so this would be a research task to figure that out. @Nkdiaz had some ideas about how to approach this. | infrastructure | research are joins failing due to whitespace spun out of we identified a feed that had corrupt data service id had leading whitespace in one file but not another leading to a failed join and missing data in final views we are wondering whether other joins are running into this same issue so we would want to run some checks in bigquery to see whether there are other joins where agencies are being dropped due to whitespace mismatches cc holly g for prioritization we basically aren t sure of impacts right now because we aren t sure whether other feeds joins are affected so this would be a research task to figure that out nkdiaz had some ideas about how to approach this | 1 |
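The check described in the whitespace issue above — finding join keys that fail an exact match but would match after trimming — can be sketched in plain Python (an illustrative helper with hypothetical names; the real check would run in BigQuery, e.g. by comparing raw joins against `TRIM()`-ed joins):

```python
def whitespace_mismatches(left_keys, right_keys):
    """Find key pairs that miss an exact join but match after strip().

    Returns (left, right) pairs whose stripped values are equal while the
    raw values differ -- the `service_id` leading-whitespace situation.
    """
    right_exact = set(right_keys)
    right_by_stripped = {}
    for k in right_keys:
        right_by_stripped.setdefault(k.strip(), []).append(k)
    pairs = []
    for k in left_keys:
        if k in right_exact:
            continue  # exact join succeeds; nothing to flag
        for cand in right_by_stripped.get(k.strip(), []):
            if cand != k:
                pairs.append((k, cand))
    return pairs
```

Running this over each join's key columns would surface any other feeds where agencies are silently dropped by whitespace mismatches.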
4,858 | 5,302,747,108 | IssuesEvent | 2017-02-10 13:58:47 | camptocamp/ngeo | https://api.github.com/repos/camptocamp/ngeo | closed | Removes the unused dist | Infrastructure Ready | The files:
* dist/ngeo.js
* dist/ngeo-debug.js
* dist/ngeo.js.map
* dist/ngeo.css
* dist/gmf.js
* dist/gmf.js.map
Are never used; I suspect they will not be usable => remove them. | 1.0 | Removes the unused dist - The files:
* dist/ngeo.js
* dist/ngeo-debug.js
* dist/ngeo.js.map
* dist/ngeo.css
* dist/gmf.js
* dist/gmf.js.map
Are never used; I suspect they will not be usable => remove them. | infrastructure | removes the unused dist the files dist ngeo js dist ngeo debug js dist ngeo js map dist ngeo css dist gmf js dist gmf js map are never used that i suspect that they will not be usable remove them | 1
17,264 | 12,267,974,512 | IssuesEvent | 2020-05-07 11:41:50 | libero/reviewer | https://api.github.com/repos/libero/reviewer | opened | Containerise browsertests | Infrastructure testing | Result of #841
Once browsertests have been established in reviewer-client:
- [ ] containerise the testtool and tests
- [ ] use container to replace browser tests in umbrella
- [ ] hook up renovate to keep tests in sync with client | 1.0 | Containerise browsertests - Result of #841
Once browsertests have been established in reviewer-client:
- [ ] containerise the testtool and tests
- [ ] use container to replace browser tests in umbrella
- [ ] hook up renovate to keep tests in sync with client | infrastructure | containerise browsertests result of once browsertests have been established in reviewer client containerise the testtool and tests use container to replace browser tests in umbrella hook up renovate to keep tests in sync with client | 1 |
447,651 | 31,718,392,726 | IssuesEvent | 2023-09-10 05:34:32 | vuejs-translations/docs-bn | https://api.github.com/repos/vuejs-translations/docs-bn | closed | api/options-lifecycle.md | documentation | @abuansarpatowary check this page
push to branch: `api/options-lifecycle.md` | 1.0 | api/options-lifecycle.md - @abuansarpatowary check this page
push to branch: `api/options-lifecycle.md` | non_infrastructure | api options lifecycle md abuansarpatowary check this page push to branch api options lifecycle md | 0 |
27,140 | 21,194,159,692 | IssuesEvent | 2022-04-08 21:18:13 | firebase/firebase-ios-sdk | https://api.github.com/repos/firebase/firebase-ios-sdk | closed | Update spectesting.yml for Firebase 9 | Infrastructure | Review and update included and excluded pods for `pod spec lint` testing.
PRs like #9311 may have disabled it, since `pod spec lint` testing only works on master, as it depends on the SpecsTesting repo. | 1.0 | Update spectesting.yml for Firebase 9 - Review and update included and excluded pods for `pod spec lint` testing.
PRs like #9311 may have disabled it, since `pod spec lint` testing only works on master, as it depends on the SpecsTesting repo. | infrastructure | update spectesting yml for firebase review and update included and excluded pods for pod spec lint testing prs like may have disabled it since pod spec lint testing only works on master since it depends on the specstesting repo | 1
369,173 | 25,830,399,149 | IssuesEvent | 2022-12-12 15:43:19 | jart/blink | https://api.github.com/repos/jart/blink | closed | Incorrect instructions in README.md ("o///blink/tui") | bug documentation | "build/bootstrap/make.com -j8 o///blink/tui"
At least for me, this 3-slash version doesn't actually build tui.
"o//blink/tui" works just fine. | 1.0 | Incorrect instructions in README.md ("o///blink/tui") - "build/bootstrap/make.com -j8 o///blink/tui"
At least for me, this 3-slash version doesn't actually build tui.
"o//blink/tui" works just fine. | non_infrastructure | incorrect instructions in readme md o blink tui build bootstrap make com o blink tui at least for me this slash version doesn t actually build tui o blink tui works just fine | 0 |
138,186 | 18,771,466,854 | IssuesEvent | 2021-11-06 22:50:29 | samqws-marketing/box_mojito | https://api.github.com/repos/samqws-marketing/box_mojito | opened | CVE-2020-25638 (High) detected in hibernate-core-5.4.21.Final.jar | security vulnerability | ## CVE-2020-25638 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hibernate-core-5.4.21.Final.jar</b></p></summary>
<p>Hibernate's core ORM functionality</p>
<p>Library home page: <a href="http://hibernate.org/orm">http://hibernate.org/orm</a></p>
<p>Path to dependency file: box_mojito/webapp/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/hibernate/hibernate-core/5.4.21.Final/hibernate-core-5.4.21.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-jpa-2.3.4.RELEASE.jar (Root Library)
- :x: **hibernate-core-5.4.21.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/box_mojito/commit/65290aeb818102fa2443a637efdccebebfed1eb9">65290aeb818102fa2443a637efdccebebfed1eb9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in hibernate-core in versions prior to and including 5.4.23.Final. A SQL injection in the implementation of the JPA Criteria API can permit unsanitized literals when a literal is used in the SQL comments of the query. This flaw could allow an attacker to access unauthorized information or possibly conduct further attacks. The highest threat from this vulnerability is to data confidentiality and integrity.
<p>Publish Date: 2020-12-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25638>CVE-2020-25638</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://in.relation.to/2020/11/19/hibernate-orm-5424-final-release/">https://in.relation.to/2020/11/19/hibernate-orm-5424-final-release/</a></p>
<p>Release Date: 2020-12-02</p>
<p>Fix Resolution: org.hibernate:hibernate-core:5.3.20.Final,5.4.24.Final</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.hibernate","packageName":"hibernate-core","packageVersion":"5.4.21.Final","packageFilePaths":["/webapp/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-data-jpa:2.3.4.RELEASE;org.hibernate:hibernate-core:5.4.21.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.hibernate:hibernate-core:5.3.20.Final,5.4.24.Final"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-25638","vulnerabilityDetails":"A flaw was found in hibernate-core in versions prior to and including 5.4.23.Final. A SQL injection in the implementation of the JPA Criteria API can permit unsanitized literals when a literal is used in the SQL comments of the query. This flaw could allow an attacker to access unauthorized information or possibly conduct further attacks. The highest threat from this vulnerability is to data confidentiality and integrity.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25638","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-25638 (High) detected in hibernate-core-5.4.21.Final.jar - ## CVE-2020-25638 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hibernate-core-5.4.21.Final.jar</b></p></summary>
<p>Hibernate's core ORM functionality</p>
<p>Library home page: <a href="http://hibernate.org/orm">http://hibernate.org/orm</a></p>
<p>Path to dependency file: box_mojito/webapp/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/hibernate/hibernate-core/5.4.21.Final/hibernate-core-5.4.21.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-jpa-2.3.4.RELEASE.jar (Root Library)
- :x: **hibernate-core-5.4.21.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/box_mojito/commit/65290aeb818102fa2443a637efdccebebfed1eb9">65290aeb818102fa2443a637efdccebebfed1eb9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in hibernate-core in versions prior to and including 5.4.23.Final. A SQL injection in the implementation of the JPA Criteria API can permit unsanitized literals when a literal is used in the SQL comments of the query. This flaw could allow an attacker to access unauthorized information or possibly conduct further attacks. The highest threat from this vulnerability is to data confidentiality and integrity.
<p>Publish Date: 2020-12-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25638>CVE-2020-25638</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://in.relation.to/2020/11/19/hibernate-orm-5424-final-release/">https://in.relation.to/2020/11/19/hibernate-orm-5424-final-release/</a></p>
<p>Release Date: 2020-12-02</p>
<p>Fix Resolution: org.hibernate:hibernate-core:5.3.20.Final,5.4.24.Final</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.hibernate","packageName":"hibernate-core","packageVersion":"5.4.21.Final","packageFilePaths":["/webapp/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-data-jpa:2.3.4.RELEASE;org.hibernate:hibernate-core:5.4.21.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.hibernate:hibernate-core:5.3.20.Final,5.4.24.Final"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-25638","vulnerabilityDetails":"A flaw was found in hibernate-core in versions prior to and including 5.4.23.Final. A SQL injection in the implementation of the JPA Criteria API can permit unsanitized literals when a literal is used in the SQL comments of the query. This flaw could allow an attacker to access unauthorized information or possibly conduct further attacks. The highest threat from this vulnerability is to data confidentiality and integrity.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25638","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_infrastructure | cve high detected in hibernate core final jar cve high severity vulnerability vulnerable library hibernate core final jar hibernate s core orm functionality library home page a href path to dependency file box mojito webapp pom xml path to vulnerable library home wss scanner repository org hibernate hibernate core final hibernate core final jar dependency hierarchy spring boot starter data jpa release jar root library x hibernate core final jar vulnerable library found in head commit a href found in base branch master vulnerability details a flaw was found in hibernate core in versions prior to and including final a sql injection in the 
implementation of the jpa criteria api can permit unsanitized literals when a literal is used in the sql comments of the query this flaw could allow an attacker to access unauthorized information or possibly conduct further attacks the highest threat from this vulnerability is to data confidentiality and integrity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org hibernate hibernate core final final isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework boot spring boot starter data jpa release org hibernate hibernate core final isminimumfixversionavailable true minimumfixversion org hibernate hibernate core final final basebranches vulnerabilityidentifier cve vulnerabilitydetails a flaw was found in hibernate core in versions prior to and including final a sql injection in the implementation of the jpa criteria api can permit unsanitized literals when a literal is used in the sql comments of the query this flaw could allow an attacker to access unauthorized information or possibly conduct further attacks the highest threat from this vulnerability is to data confidentiality and integrity vulnerabilityurl | 0 |
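The CVE entry above concerns unsanitized literals reaching generated SQL. The general class of flaw can be illustrated without Hibernate: splicing a literal into SQL text lets crafted input change the query's meaning, while parameter binding keeps input as data. A generic `sqlite3` sketch of the contrast (not the JPA Criteria API code itself):

```python
import sqlite3


def find_user_unsafe(conn, name):
    # Vulnerable pattern: the literal is spliced into the SQL text, so input
    # like "x' OR '1'='1" rewrites the WHERE clause.
    return conn.execute(
        "SELECT id FROM users WHERE name = '%s'" % name).fetchall()


def find_user_safe(conn, name):
    # Parameter binding: the driver treats the input strictly as a value.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```

For the Hibernate flaw itself, the suggested fix above (upgrading to 5.3.20.Final / 5.4.24.Final) is the remediation; the sketch only shows why unsanitized literals are dangerous.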
13,603 | 10,341,043,699 | IssuesEvent | 2019-09-04 00:26:24 | aspnet/AspNetCore | https://api.github.com/repos/aspnet/AspNetCore | closed | Compat testing of popular libraries for 3.0 | Test area-infrastructure certified good 👍 fowler 👻 Spooky 👻 | As part of 3.0, there have been a [number of changes](https://github.com/aspnet/announcements/issues?q=is%3Aopen+is%3Aissue+label%3A%22Breaking+change%22+label%3A3.0.0) that may impact existing libraries that customers are using today. There of course also could be unintentional changes that end up breaking some of these libraries. It would be good to know about these sooner rather than later in order to help address the issues (e.g. fix the unintended break, send a PR to the library to make it work in 3.0, log an issue with the library with details, etc.).
To that end, we've compiled a list of popular libraries used by ASP.NET Core projects in (somewhat) descending popularity order. The list was constructed using a number of data sources including [NuGet package download counts](https://www.nuget.org/stats/packages), VS telemetry, [Twitter interactions](https://twitter.com/DamianEdwards/status/1133886079843495936), and team assumptions/knowledge. Any library that was determined to have usage by less than approx. 1% of the ASP.NET Core user base was excluded. *Note that the library names used here are actually the assembly names which in some cases do not match the NuGet package name.*
## Work description
For each library in this list, the idea is to complete a basic "Hello World" scenario applicable for that library using the latest available version, and record one of the following outcomes:
1. No issue found, library works in an ASP.NET Core 3.0 project
1. Library needs to react to 3.0 change, issue logged on library project
1. Library needs to react to 3.0 change, PR sent to library project
1. Unintended ASP.NET Core change causes library issues, issue logged/PR sent to ASP.NET Core to fix the issue and restore compatibility with library
## Library list
1. [ ] Swashbuckle.AspNetCore
1. [ ] AutoMapper
1. [ ] Serilog
1. [ ] Dapper
1. [ ] MessagePack
1. [ ] IdentityModel
1. [ ] NLog
1. [ ] Autofac
1. [ ] IdentityServer4
1. [ ] log4net
1. [ ] Polly
1. [ ] FluentValidation.AspNetCore
1. [ ] Npgsql
1. [ ] StackExchange.Redis
1. [ ] Microsoft.AspNetCore.Mvc.Versioning
1. [ ] RabbitMQ.Client
1. [ ] MongoDB.Driver
1. [ ] RestSharp
1. [ ] MySQL.Data
1. [ ] AWSSDK.Core
1. [ ] Hangfire.Core
1. [ ] MySQLConnector
1. [ ] MediatR
1. [ ] Npgsql.EntityFrameworkCore.PostgreSQL
1. [ ] Pomelo.EntityFrameworkCore.MySQL
1. [ ] Elasticsearch.Net
1. [ ] CsvHelper
1. [ ] Microsoft.Azure.ServiceBus
1. [ ] Oracle.ManagedDataAccess
1. [ ] NSwag.AspNetCore
1. [ ] protobuf-net
1. [ ] Microsoft.OData.Core
1. [ ] nodatime
1. [ ] MiniProfiler.AspNetCore
1. [ ] Ocelot
1. [ ] MySQL.Data.EntityFrameworkCore
1. [ ] Microsoft.AspNetCore.OData
1. [ ] Consul
1. [ ] ServiceStack.Text
1. [ ] Microsoft.Azure.Storage.Common
1. [ ] Microsoft.Azure.EventHubs
1. [ ] NWebsec.AspNetCore.Middleware
1. [ ] Confluent.Kakfa
1. [ ] SimpleInjector
1. [ ] MassTransit
| 1.0 | Compat testing of popular libraries for 3.0 - As part of 3.0, there have been a [number of changes](https://github.com/aspnet/announcements/issues?q=is%3Aopen+is%3Aissue+label%3A%22Breaking+change%22+label%3A3.0.0) that may impact existing libraries that customers are using today. There of course also could be unintentional changes that end up breaking some of these libraries. It would be good to know about these sooner rather than later in order to help address the issues (e.g. fix the unintended break, send a PR to the library to make it work in 3.0, log an issue with the library with details, etc.).
To that end, we've compiled a list of popular libraries used by ASP.NET Core projects in (somewhat) descending popularity order. The list was constructed using a number of data sources including [NuGet package download counts](https://www.nuget.org/stats/packages), VS telemetry, [Twitter interactions](https://twitter.com/DamianEdwards/status/1133886079843495936), and team assumptions/knowledge. Any library that was determined to have usage by less than approx. 1% of the ASP.NET Core user base was excluded. *Note that the library names used here are actually the assembly names which in some cases do not match the NuGet package name.*
## Work description
For each library in this list, the idea is to complete a basic "Hello World" scenario applicable for that library using the latest available version, and record one of the following outcomes:
1. No issue found, library works in an ASP.NET Core 3.0 project
1. Library needs to react to 3.0 change, issue logged on library project
1. Library needs to react to 3.0 change, PR sent to library project
1. Unintended ASP.NET Core change causes library issues, issue logged/PR sent to ASP.NET Core to fix the issue and restore compatibility with library
## Library list
1. [ ] Swashbuckle.AspNetCore
1. [ ] AutoMapper
1. [ ] Serilog
1. [ ] Dapper
1. [ ] MessagePack
1. [ ] IdentityModel
1. [ ] NLog
1. [ ] Autofac
1. [ ] IdentityServer4
1. [ ] log4net
1. [ ] Polly
1. [ ] FluentValidation.AspNetCore
1. [ ] Npgsql
1. [ ] StackExchange.Redis
1. [ ] Microsoft.AspNetCore.Mvc.Versioning
1. [ ] RabbitMQ.Client
1. [ ] MongoDB.Driver
1. [ ] RestSharp
1. [ ] MySQL.Data
1. [ ] AWSSDK.Core
1. [ ] Hangfire.Core
1. [ ] MySQLConnector
1. [ ] MediatR
1. [ ] Npgsql.EntityFrameworkCore.PostgreSQL
1. [ ] Pomelo.EntityFrameworkCore.MySQL
1. [ ] Elasticsearch.Net
1. [ ] CsvHelper
1. [ ] Microsoft.Azure.ServiceBus
1. [ ] Oracle.ManagedDataAccess
1. [ ] NSwag.AspNetCore
1. [ ] protobuf-net
1. [ ] Microsoft.OData.Core
1. [ ] nodatime
1. [ ] MiniProfiler.AspNetCore
1. [ ] Ocelot
1. [ ] MySQL.Data.EntityFrameworkCore
1. [ ] Microsoft.AspNetCore.OData
1. [ ] Consul
1. [ ] ServiceStack.Text
1. [ ] Microsoft.Azure.Storage.Common
1. [ ] Microsoft.Azure.EventHubs
1. [ ] NWebsec.AspNetCore.Middleware
1. [ ] Confluent.Kakfa
1. [ ] SimpleInjector
1. [ ] MassTransit
| infrastructure | compat testing of popular libraries for as part of there have been a that may impact existing libraries that customers are using today there of course also could be unintentional changes that end up breaking some of these libraries it would be good to know about these sooner rather than later in order to help address the issues e g fix the unintended break send a pr to the library to make it work in log an issue with the library with details etc to that end we ve compiled a list of popular libraries used by asp net core projects in somewhat descending popularity order the list was constructed using a number of data sources including vs telemetry and team assumptions knowledge any library that was determined to have usage by less than approx of the asp net core user base was excluded note that the library names used here are actually the assembly names which in some cases do not match the nuget package name work description for each library in this list the idea is to complete a basic hello world scenario applicable for that library using the latest available version and record one of the following outcomes no issue found library works in an asp net core project library needs to react to change issue logged on library project library needs to react to change pr sent to library project unintended asp net core change causes library issues issue logged pr sent to asp net core to fix the issue and restore compatibility with library library list swashbuckle aspnetcore automapper serilog dapper messagepack identitymodel nlog autofac polly fluentvalidation aspnetcore npgsql stackexchange redis microsoft aspnetcore mvc versioning rabbitmq client mongodb driver restsharp mysql data awssdk core hangfire core mysqlconnector mediatr npgsql entityframeworkcore postgresql pomelo entityframeworkcore mysql elasticsearch net csvhelper microsoft azure servicebus oracle manageddataaccess nswag aspnetcore protobuf net microsoft odata core nodatime miniprofiler 
aspnetcore ocelot mysql data entityframeworkcore microsoft aspnetcore odata consul servicestack text microsoft azure storage common microsoft azure eventhubs nwebsec aspnetcore middleware confluent kakfa simpleinjector masstransit | 1 |
789,656 | 27,800,297,524 | IssuesEvent | 2023-03-17 15:22:25 | CDCgov/prime-reportstream | https://api.github.com/repos/CDCgov/prime-reportstream | closed | UP Translation step generates a filtered FHIR bundle with only the resources needed by a receiver | pipeline High Priority platform | ### User Story:
As a receiver, I want to be able to receive only the data that I need, so I don't have to filter the data when I get it.
### Description/Use Case
Receivers are going to want to determine which reportable conditions they want to receive from ReportStream as we start ingesting all reportable conditions. We need a mechanism to filter out the reportable conditions they do not want.
### Risks/Impacts/Considerations
### Dev Notes:
* See design in #7588
* This is to be done in the translation step. Use the data placed in the FHIR Bundle from #8095 to know which Observation resources to keep and generate a new FHIR Bundle with the rest removed.
* Use the code generated from #8096 to remove the resources.
* This new FHIR bundle is then translated as needed and sent to the receiver.
* The original FHIR Bundle is not changed.
### Acceptance Criteria
* The UP translate step can remove unneeded Observations from a FHIR Bundle before sending to a receiver.
* Any translated report (e.g. FHIR to HL7 v2) contains only the needed Observation data.
| 1.0 | UP Translation step generates a filtered FHIR bundle with only the resources needed by a receiver - ### User Story:
As a receiver, I want to be able to receive only the data that I need, so I don't have to filter the data when I get it.
### Description/Use Case
Receivers are going to want to determine which reportable conditions they want to receive from ReportStream as we start ingesting all reportable conditions. We need a mechanism to filter out the reportable conditions they do not want.
### Risks/Impacts/Considerations
### Dev Notes:
* See design in #7588
* This is to be done in the translation step. Use the data placed in the FHIR Bundle from #8095 to know which Observation resources to keep and generate a new FHIR Bundle with the rest removed.
* Use the code generated from #8096 to remove the resources.
* This new FHIR bundle is then translated as needed and sent to the receiver.
* The original FHIR Bundle is not changed.
### Acceptance Criteria
* The UP translate step can remove unneeded Observations from a FHIR Bundle before sending to a receiver.
* Any translated report (e.g. FHIR to HL7 v2) contains only the needed Observation data.
| non_infrastructure | up translation step generates a filtered fhir bundle with only the resources needed by a receiver user story as an receiver i want to be able to receive only the data that i need so i don t have to filter the data when i get it description use case receivers are going to want to determine which reportable conditions they want to receive from reportstream as we start ingesting all reportable conditions we need a mechanism to filter out reportable conditions they want risks impacts considerations dev notes see design in this is to be done in the translation step use the data placed in the fhir bundle from to know which observation resources to keep and generate a new fhir bundle with the rest removed use the code generated from to remove the resources this new fhir bundle is then translated as needed and sent to the receiver the original fhir bundle is not changed acceptance criteria the up translate step can remove unneeded observations from a fhir bundle before sending to a receiver any translated report e g fhir to contains only the needed observation data | 0 |
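The translate-step filtering this record describes can be sketched in a few lines. This is an illustrative Python sketch, not ReportStream's actual implementation: it assumes a FHIR R4 JSON layout (`Bundle.entry[*].resource`, `Observation.code.coding[*].code`) and a receiver-specific set of condition codes to keep; the function name and the code-matching rule are assumptions.

```python
def filter_observations(bundle, keep_codes):
    """Drop Observation entries whose coding is not in keep_codes.

    Hypothetical sketch assuming FHIR R4 JSON structure. Returns a new
    bundle and leaves the original unchanged, matching the acceptance
    criteria above."""
    def wanted(resource):
        if resource.get("resourceType") != "Observation":
            return True  # keep Patient, Specimen, and other resources
        codes = {c.get("code")
                 for c in resource.get("code", {}).get("coding", [])}
        return bool(codes & keep_codes)

    filtered = dict(bundle)  # shallow copy; the source bundle is not mutated
    filtered["entry"] = [e for e in bundle.get("entry", [])
                         if wanted(e.get("resource", {}))]
    return filtered
```

A receiver configured for a single condition would then get a bundle whose Observations all match that condition, while the untouched original continues through the rest of the pipeline.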
3,166 | 12,226,516,301 | IssuesEvent | 2020-05-03 11:17:59 | gfleetwood/asteres | https://api.github.com/repos/gfleetwood/asteres | opened | nocomplexity/SecurityPrivacyReferenceArchitecture (44663811) | Python maintain | https://github.com/nocomplexity/SecurityPrivacyReferenceArchitecture
Open Repository for the Open Security and Privacy Reference Architecture | True | nocomplexity/SecurityPrivacyReferenceArchitecture (44663811) - https://github.com/nocomplexity/SecurityPrivacyReferenceArchitecture
Open Repository for the Open Security and Privacy Reference Architecture | non_infrastructure | nocomplexity securityprivacyreferencearchitecture open repository for the open security and privacy reference architecture | 0 |
35,157 | 30,799,125,474 | IssuesEvent | 2023-07-31 22:52:19 | dotnet/docker-tools | https://api.github.com/repos/dotnet/docker-tools | closed | Align ImageBuilder image platform validation with containerd | bug area-infrastructure | There is a bug in the Docker legacy builder that results in `arm64` images being produced with the `arm64/v8` variant. When switching to BuildKit, the `arm64/v8` platform gets normalized to `arm64` with no variant due to [this code](https://github.com/containerd/containerd/blob/96de54db4385279a5f8474b5de6658f6336b6fd8/platforms/database.go#L87-L92) in containerd.
Currently, ImageBuilder checks the variant here, but doesn't take into account that `arm64` and `arm64/v8` are supposedly compatible:
https://github.com/dotnet/docker-tools/blob/1bc4ea539f067c09d19627ca16ec9fcc38b240b5/src/Microsoft.DotNet.ImageBuilder/src/Commands/BuildCommand.cs#L412-L437
See https://github.com/moby/buildkit/issues/4039 for more details.
This is blocking https://github.com/dotnet/docker-tools/issues/1159 | 1.0 | Align ImageBuilder image platform validation with containerd - There is a bug in the Docker legacy builder that results in `arm64` images being produced with the `arm64/v8` variant. When switching to BuildKit, the `arm64/v8` platform gets normalized to `arm64` with no variant due to [this code](https://github.com/containerd/containerd/blob/96de54db4385279a5f8474b5de6658f6336b6fd8/platforms/database.go#L87-L92) in containerd.
Currently, ImageBuilder checks the variant here, but doesn't take into account that `arm64` and `arm64/v8` are supposedly compatible:
https://github.com/dotnet/docker-tools/blob/1bc4ea539f067c09d19627ca16ec9fcc38b240b5/src/Microsoft.DotNet.ImageBuilder/src/Commands/BuildCommand.cs#L412-L437
See https://github.com/moby/buildkit/issues/4039 for more details.
This is blocking https://github.com/dotnet/docker-tools/issues/1159 | infrastructure | align imagebuilder image platform validation with containerd there is a bug in the docker legacy builder that results in images being produced with the variant when switching to buildkit the platform gets normalized to with no variant due to containerd currently imagebuilder checks the variant here but doesn t take into account that and are supposedly compatible see for more details this is blocking | 1 |
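The containerd normalization this record links to can be mimicked in a short sketch. This is a hypothetical Python rendering of the rule (the real implementation is Go in `platforms/database.go`): `v8` is the default variant for `arm64` and gets dropped, so `arm64/v8` and bare `arm64` should compare equal during validation.

```python
def normalize_platform(os_name, arch, variant=""):
    """Rough sketch of containerd's platform normalization: aarch64 is
    an alias for arm64, and arm64's "v8"/"8" variant is the default, so
    it is normalized away. Not the real Go code."""
    if arch in ("aarch64", "arm64"):
        arch = "arm64"
        if variant in ("8", "v8"):
            variant = ""
    return (os_name, arch, variant)

def platforms_match(a, b):
    """Compare two (os, arch, variant) triples after normalization."""
    return normalize_platform(*a) == normalize_platform(*b)
```

Under this rule, ImageBuilder's variant check would treat `arm64` and `arm64/v8` as the same platform instead of flagging a mismatch.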
4,624 | 5,202,768,331 | IssuesEvent | 2017-01-24 10:35:29 | dashtinejad/tredestone | https://api.github.com/repos/dashtinejad/tredestone | opened | Finalize the Publish button behaviour | infrastructure | Decide what we should do when the user clicks the publish button.
Fast suggestion: redirect to the publish controller of the main project. | 1.0 | Finalize the Publish button behaviour - Decide what we should do when the user clicks the publish button.
Fast suggestion: redirect to the publish controller of the main project. | infrastructure | finalize the publish button behaviour decide what should we do when the user clicks the publish button fast suggestion redirecting it publish controller of main project | 1
23,808 | 16,600,536,932 | IssuesEvent | 2021-06-01 18:47:20 | cooperative-computing-lab/cctools | https://api.github.com/repos/cooperative-computing-lab/cctools | closed | Split Off Work Queue Examples | Work Queue enhancement infrastructure | @nhazekam @btovar please split off the work queue examples into a separate repo. I think that includes apps/*, allpairs, wavefront, sand, and perhaps others. Please complete this week.
It looks like Ben already created it: https://github.com/cooperative-computing-lab/work-queue-examples
 | 1.0 | Split Off Work Queue Examples - @nhazekam @btovar please split off the work queue examples into a separate repo. I think that includes apps/*, allpairs, wavefront, sand, and perhaps others. Please complete this week.
It looks like Ben already created it: https://github.com/cooperative-computing-lab/work-queue-examples
| infrastructure | split off work queue examples nhazekam btovar please split off the work queue examples into a separate repo i think that includes apps allpairs wavefront sand and perhaps others please complete this weekl it looks like ben already created it | 1 |
5,260 | 5,542,082,452 | IssuesEvent | 2017-03-22 14:20:38 | gahansen/Albany | https://api.github.com/repos/gahansen/Albany | opened | Don't create broken symlinks in the build directory | Infrastructure | Certain test `CMakeLists.txt` files are creating symlinks to programs that don't exist. This was getting in the way of some tools I tried to develop, which try to traverse the build tree (and failed to follow the broken symlinks). I'll try to change things so that symlinks are only created if the programs they point to exist. | 1.0 | Don't create broken symlinks in the build directory - Certain test `CMakeLists.txt` files are creating symlinks to programs that don't exist. This was getting in the way of some tools I tried to develop, which try to traverse the build tree (and failed to follow the broken symlinks). I'll try to change things so that symlinks are only created if the programs they point to exist. | infrastructure | don t create broken symlinks in the build directory certain test cmakelists txt files are creating symlinks to programs that don t exist this was getting in the way of some tools i tried to develop which try to traverse the build tree and failed to follow the broken symlinks i ll try to change things so that symlinks are only created if the programs they point to exist | 1 |
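The symlink guard described in the Albany record above can be sketched as follows. The real fix would live in CMake (e.g. guarding the link creation with `if(EXISTS ...)`); this Python version is only illustrative, and the helper name is made up.

```python
import os

def symlink_if_target_exists(target, link_name):
    """Create link_name -> target only when the target exists, so tools
    that traverse the build tree never hit a dangling symlink.
    Hypothetical helper, not Albany's actual CMake code."""
    if not os.path.exists(target):
        return False               # skip: the link would be broken
    if os.path.lexists(link_name):
        os.remove(link_name)       # replace a stale link
    os.symlink(target, link_name)
    return True
```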
24,785 | 17,775,544,470 | IssuesEvent | 2021-08-30 18:42:55 | 18F/tts-tech-portfolio | https://api.github.com/repos/18F/tts-tech-portfolio | closed | plan for better SaaS user management | t: weeks g: accepted i: infrastructure | ## Background information
Related to #92, we have a need to remove user accounts from various SaaS when offboarding people. Would be great to handle this in one place. A [System for Cross-domain Identity Management (SCIM)](https://www.okta.com/blog/2017/01/what-is-scim/) is something that could help here.
## Implementation steps
- [x] Research what SCIM SaaS is available
- [x] Determine which of the SaaS we use supports SCIM
- [ ] Create issue(s) for any follow-up work
## Acceptance criteria
- [x] Roadmap for how to handle user management with third-party services TTS uses | 1.0 | plan for better SaaS user management - ## Background information
Related to #92, we have a need to remove user accounts from various SaaS when offboarding people. Would be great to handle this in one place. A [System for Cross-domain Identity Management (SCIM)](https://www.okta.com/blog/2017/01/what-is-scim/) is something that could help here.
## Implementation steps
- [x] Research what SCIM SaaS is available
- [x] Determine which of the SaaS we use supports SCIM
- [ ] Create issue(s) for any follow-up work
## Acceptance criteria
- [x] Roadmap for how to handle user management with third-party services TTS uses | infrastructure | plan for better saas user management background information related to we have a need to remove user accounts from various saas when offboarding people would be great to handle this in one place a is something that could help here implementation steps research what scim saas is available determine which of the saas we use supports scim create issue s for any follow up work acceptance criteria roadmap for how to handle user management with third party services tts uses | 1 |
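As a concrete illustration of what SCIM buys for the offboarding roadmap above: deprovisioning becomes one standard call per SaaS. The payload below follows the SCIM 2.0 PatchOp format from RFC 7644; the helper name is an assumption, and per-vendor endpoints and auth are omitted.

```python
def scim_deactivate_payload():
    """Standard SCIM 2.0 PatchOp body (RFC 7644) that sets a user's
    `active` attribute to false, i.e. deprovisions the account. A
    central offboarding job would PATCH this to /scim/v2/Users/{id} on
    each SaaS that supports SCIM."""
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [
            {"op": "replace", "path": "active", "value": False},
        ],
    }
```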
23,505 | 16,356,770,242 | IssuesEvent | 2021-05-14 00:13:08 | matthaeusheer/pokino | https://api.github.com/repos/matthaeusheer/pokino | closed | Setup Test Coverage | infrastructure | - [x] connect test coverage
- [x] how to do this with containers? how to scan every container individually etc.
- [x] add badge | 1.0 | Setup Test Coverage - - [x] connect test coverage
- [x] how to do this with containers? how to scan every container individually etc.
- [x] add badge | infrastructure | setup test coverage connect test coverage how to do this with containers how to scan every container individually etc add badge | 1 |
510,413 | 14,790,274,339 | IssuesEvent | 2021-01-12 11:47:34 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.2 staging-1897] Add support for non-latin chars for error. | Category: Tech Priority: Medium | Step to reproduce:
- have Windows 10 with a non-latin language (I have Russian)
- change IPAddress to something wrong:

- restart server. Server will crash and I have this dump:
```
--BEGIN DUMP--
Dump Time
01/12/2021 14:22:31
Exception
Exception: ENetError
Message:Требуемый адрес для своего контекста неверен.
Source:StrangeLoopGames.ENet
ENet.ENetError: Требуемый адрес для своего контекста неверен.
at ENet.Library.ThrowLastError()
at ENet.Host.Create(Address address, Int32 peerLimit, Int32 channelLimit, UInt32 incomingBandwidth, UInt32 outgoingBandwidth, Int32 bufferSize)
at ENet.Host.Create(Address address, Int32 peerLimit, Int32 channelLimit)
at ENet.Host.Create(Address address, Int32 peerLimit)
at Eco.Networking.ENet.ENetUdpPeer.Start()
at Eco.Plugins.Networking.NetworkServer.Initialize(IUdpLibrary udpLib)
at Eco.Plugins.Networking.NetworkManager.Run()
at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location where exception was thrown ---
at System.Threading.ThreadHelper.ThreadStart()
--END DUMP--
```
[ServerCrash ENetError 01122231.txt](https://github.com/StrangeLoopGames/EcoIssues/files/5801799/ServerCrash.ENetError.01122231.txt)
I have strange Требуемый адрес для своего контекста неверен instead of words.

| 1.0 | [0.9.2 staging-1897] Add support for non-latin chars for error. - Step to reproduce:
- have Windows 10 with a non-latin language (I have Russian)
- change IPAddress to something wrong:

- restart server. Server will crash and I have this dump:
```
--BEGIN DUMP--
Dump Time
01/12/2021 14:22:31
Exception
Exception: ENetError
Message:Требуемый адрес для своего контекста неверен.
Source:StrangeLoopGames.ENet
ENet.ENetError: Требуемый адрес для своего контекста неверен.
at ENet.Library.ThrowLastError()
at ENet.Host.Create(Address address, Int32 peerLimit, Int32 channelLimit, UInt32 incomingBandwidth, UInt32 outgoingBandwidth, Int32 bufferSize)
at ENet.Host.Create(Address address, Int32 peerLimit, Int32 channelLimit)
at ENet.Host.Create(Address address, Int32 peerLimit)
at Eco.Networking.ENet.ENetUdpPeer.Start()
at Eco.Plugins.Networking.NetworkServer.Initialize(IUdpLibrary udpLib)
at Eco.Plugins.Networking.NetworkManager.Run()
at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location where exception was thrown ---
at System.Threading.ThreadHelper.ThreadStart()
--END DUMP--
```
[ServerCrash ENetError 01122231.txt](https://github.com/StrangeLoopGames/EcoIssues/files/5801799/ServerCrash.ENetError.01122231.txt)
I have strange Требуемый адрес для своего контекста неверен instead of words.

| non_infrastructure | add support for non latin chars for error step to reproduce have windows with non latin language i have russian change ipaddress to something wrong restart server server will crash and i have this dump begin dump dump time exception exception eneterror message рўсђрµр±сѓрµрјс‹р№ р°рґсђрµсѓ рґр»сџ сѓрірѕрµрірѕ рєрѕрѕс‚рµрєсѓс‚р° рѕрµрірµсђрµрѕ source strangeloopgames enet enet eneterror рўсђрµр±сѓрµрјс‹р№ р°рґсђрµсѓ рґр»сџ сѓрірѕрµрірѕ рєрѕрѕс‚рµрєсѓс‚р° рѕрµрірµсђрµрѕ at enet library throwlasterror at enet host create address address peerlimit channellimit incomingbandwidth outgoingbandwidth buffersize at enet host create address address peerlimit channellimit at enet host create address address peerlimit at eco networking enet enetudppeer start at eco plugins networking networkserver initialize iudplibrary udplib at eco plugins networking networkmanager run at system threading threadhelper threadstart context object state at system threading executioncontext runinternal executioncontext executioncontext contextcallback callback object state end of stack trace from previous location where exception was thrown at system threading threadhelper threadstart end dump i have strange рўсђрµр±сѓрµрјс‹р№ р°рґсђрµсѓ рґр»сџ сѓрірѕрµрірѕ рєрѕрѕс‚рµрєсѓс‚р° рѕрµрірµсђрµрѕ instead of words | 0 |
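The garbled "рўсђрµр±сѓрµрјс‹р№ …" text in this record's normalized copy is classic mojibake: the UTF-8 bytes of the Russian message re-decoded under the Windows-1251 ANSI code page. A small sketch reproduces the effect (illustrative only; the issue does not show Eco's or ENet's actual decoding path):

```python
# The localized error stays readable only if every layer agrees on the
# encoding; decoding the UTF-8 bytes as cp1251 yields the "РўСЂ..." junk.
msg = "Требуемый адрес для своего контекста неверен."
garbled = msg.encode("utf-8").decode("cp1251")
assert garbled.startswith("Рў")                          # mojibake form
assert garbled.encode("cp1251").decode("utf-8") == msg   # reversible
```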
24,813 | 2,673,474,501 | IssuesEvent | 2015-03-24 19:24:33 | Ramouch0/GSExtended | https://api.github.com/repos/Ramouch0/GSExtended | closed | Player in -1024px screen hiding chat input box | low priority | this is an issue with grooveshark but it can be temporarily patched by gsx.
the chatbox is partially covered by the player when the screen has less than 1024px width, thus the input can't be focused:

it can be fixed by putting the chat over the player:
```css
@media only screen and (max-width: 529px), only screen and (max-width: 1024px) and (min-width: 530px) {
  .in-broadcast #chat-sidebar {
    z-index: 30000;
  }
}
```
i already reported this to support. | 1.0 | Player in -1024px screen hiding chat input box - this is an issue with grooveshark but it can be temporarily patched by gsx.
the chatbox is partially covered by the player when the screen has less than 1024px width, thus the input can't be focused:

it can be fixed by putting the chat over the player:
```css
@media only screen and (max-width: 529px), only screen and (max-width: 1024px) and (min-width: 530px) {
  .in-broadcast #chat-sidebar {
    z-index: 30000;
  }
}
```
i already reported this to support. | non_infrastructure | player in screen hiding chat input box this is an issue with grooveshark but it can be temporary patched by gsx the chatbox is partially covered by the player when the screen has less than width thus the input can t be focused it can be fixed by putting the chat over the player media only screen and max width only screen and max width and min width in broadcast chat sidebar z index i already reported this to support | 0 |
561,520 | 16,618,649,434 | IssuesEvent | 2021-06-02 20:22:48 | misterveiga/cds | https://api.github.com/repos/misterveiga/cds | closed | Allowing the quick_mute reactions to be used in mod-alerts. | enhancement feature request high priority | When a Moderator reacts to a message in mod-alerts with either the qm_30 or qm_60, it will take action against the user and their message.
 | 1.0 | Allowing the quick_mute reactions to be used in mod-alerts. - When a Moderator reacts to a message in mod-alerts with either the qm_30 or qm_60, it will take action against the user and their message.
| non_infrastructure | allowing the quick mute reactions to be used in mod alerts when a moderator reacts to a message in mod alerts with either the qm or qm it will take action to the user and their message | 0 |
3,057 | 4,031,168,887 | IssuesEvent | 2016-05-18 16:17:48 | ubiquits/ubiquits | https://api.github.com/repos/ubiquits/ubiquits | closed | Implement full stack angular infrastructure POC | infrastructure | Proof of concept of full stack angular with hapi
Tasks
- [x] angular compiling with webpack
- [ ] angular watcher for browser/* changes
- [ ] nodemon watcher for api/* changes
- [x] compilation to es5 for browser on build
- [ ] compilation to es2015 for api on build
- [ ] test runner watching browser files
- [ ] test runner watching api files
- [ ] integrated watchers
- [ ] source mapping for api console & debugger
- [ ] composer json running postgres db
- [ ] connection to db from localhost with sequelize
- [ ] travis ci automated testing
- [ ] coverage results pushed to codeclimate
- [ ] infrastructure tidied up and coverage up to 100% | 1.0 | Implement full stack angular infrastructure POC - Proof of concept of full stack angular with hapi
Tasks
- [x] angular compiling with webpack
- [ ] angular watcher for browser/* changes
- [ ] nodemon watcher for api/* changes
- [x] compilation to es5 for browser on build
- [ ] compilation to es2015 for api on build
- [ ] test runner watching browser files
- [ ] test runner watching api files
- [ ] integrated watchers
- [ ] source mapping for api console & debugger
- [ ] composer json running postgres db
- [ ] connection to db from localhost with sequelize
- [ ] travis ci automated testing
- [ ] coverage results pushed to codeclimate
- [ ] infrastructure tidied up and coverage up to 100% | infrastructure | implement full stack angular infrastructure poc proof of concept of full stack angular with hapi tasks angular compiling with webpack angular watcher for browser changes nodemon watcher for api changes compilation to for browser on build compilation to for api on build test runner watching browser files test runner watching api files integrated watchers source mapping for api console debugger composer json running postgres db connection to db from localhost with sequelize travis ci automated testing coverage results pushed to codeclimate infrastructure tidied up and coverage up to | 1 |
3,376 | 4,272,920,084 | IssuesEvent | 2016-07-13 15:47:44 | dinyar/uGMTfirmware | https://api.github.com/repos/dinyar/uGMTfirmware | opened | Release v3.0.0 | infrastructure | Following the switch to mp7fw v2.2.0 a new uGMT version should be released. As this is backwards incompatible and requires updated online software the version number will be 3.0.0 | 1.0 | Release v3.0.0 - Following the switch to mp7fw v2.2.0 a new uGMT version should be released. As this is backwards incompatible and requires updated online software the version number will be 3.0.0 | infrastructure | release following the switch to a new ugmt version should be released as this is backwards incompatible and requires updated online software the version number will be | 1 |
164,061 | 25,916,910,604 | IssuesEvent | 2022-12-15 18:08:49 | fleawig/jessefarberdotcom | https://api.github.com/repos/fleawig/jessefarberdotcom | opened | Add player control to image lightboxes | enhancement high priority JJ design updates | Currently, pages with image grids allow you to click on an image card to get to a lightbox. This can be clicked through using keyboard or left-side/right-side arrows.
Add a "player" type element somewhere obvious to house the navigation controls. It would also be a good chance to add swiping for mobile views.
AC:
- [ ] Page shows player that lets you navigate forward/back/exit etc.
- [ ] Lightboxes can be swiped through (fwd/back/exit) on mobile devices | 1.0 | Add player control to image lightboxes - Currently, pages with image grids allow you to click on an image card to get to a lightbox. This can be clicked through using keyboard or left-side/right-side arrows.
Add a "player" type element somewhere obvious to house the navigation controls. It would also be a good chance to add swiping for mobile views.
AC:
- [ ] Page shows player that lets you navigate forward/back/exit etc.
- [ ] Lightboxes can be swiped through (fwd/back/exit) on mobile devices | non_infrastructure | add player control to image lightboxes currently pages with image grids allow you to click on an image card to get to a lightbox this can be clicked through using keyboard or left side right side arrows add a player type element somewhere obvious to house the navigation controls it would also be a good chance to add swiping for mobile views ac page shows player that lets you navigate forward back exit etc lightboxes can be swiped through fwd back exit on mobile devices | 0 |
96,195 | 10,925,174,345 | IssuesEvent | 2019-11-22 11:52:12 | benoitc/gunicorn | https://api.github.com/repos/benoitc/gunicorn | closed | How gunicorn selects worker for each task? | Documentation | I have a question when using gunicorn gthread worker type.
the basic configs I have are as follows:
```
worker_class = gthread
workers = 20
threads = 1
max_requests = 50000
max_requests_jitter = 3
timeout = 70
graceful_timeout = 30
limit_request_line = 0
limit_request_fields = 200
limit_request_fields_size = 0
preload_app = True
```
As in the config, I set 20 workers, which can be seen when I start the test

each worker only has one thread.
Then I sent 3 groups of tests; each group had 4 concurrent POST requests. From the document, I understand that 20 workers should be able to handle 20 concurrent tasks at the same time. However, I found that one worker was used twice (for two tasks) in the test, which caused the second task to not start until the first task had finished. I'm wondering why gunicorn didn't choose a free worker to handle the second task? (there should be some free workers in this situation)
The following figures show some details of my test:
the response time obtained from the post side.

the actual processing time in each worker (from the gunicorn log):

From these two figures, it is clear that the response time of the 11th request from the post side (0.41s) differs from the actual processing time in the worker (0.25s). To figure out the reason, I found that the 9th and 11th requests used the same worker with pid 56596. This also happened for the 5th, 7th, and 8th requests. Why did gunicorn ignore the other free workers?
Anything wrong in my config settings? What's the logic in gunicorn to select workers for each task?
Thanks for any help in advance! | 1.0 | How gunicorn selects worker for each task? - I have a question when using gunicorn gthread worker type.
the basic configs I have are as follows:
```
worker_class = gthread
workers = 20
threads = 1
max_requests = 50000
max_requests_jitter = 3
timeout = 70
graceful_timeout = 30
limit_request_line = 0
limit_request_fields = 200
limit_request_fields_size = 0
preload_app = True
```
As in the config, I set 20 workers, which can be seen when I start the test

each worker only has one thread.
Then I sent 3 groups of tests; each group had 4 concurrent POST requests. From the document, I understand that 20 workers should be able to handle 20 concurrent tasks at the same time. However, I found that one worker was used twice (for two tasks) in the test, which caused the second task to not start until the first task had finished. I'm wondering why gunicorn didn't choose a free worker to handle the second task? (there should be some free workers in this situation)
The following figures show some details of my test:
the response time obtained from the post side.

the actual processing time in each worker (from the gunicorn log):

From these two figures, it is clear that the response time of the 11th request from the post side (0.41s) differs from the actual processing time in the worker (0.25s). To figure out the reason, I found that the 9th and 11th requests used the same worker with pid 56596. This also happened for the 5th, 7th, and 8th requests. Why did gunicorn ignore the other free workers?
Anything wrong in my config settings? What's the logic in gunicorn to select workers for each task?
Thanks for any help in advance! | non_infrastructure | how gunicorn selects worker for each task i have a question when using gunicorn gthread worker type the basic configs i have are as follow worker class gthread workers threads max requests max requests jitter timeout graceful timeout limit request line limit request fields limit request fields size preload app true as in the config i set workers which can be seen when i start the test each worker only has one thread then i sent groups of test each group has concurrent post requests from the document i understand that workers should be able to handle concurrent tasks at the same time however i found that one worker is used twice for two tasks in the test which causedthe second task did not start until the first task had finished i m wondering why didn t gunicorn choose free worker to handle the second task there should be some free workers in this situation following figures shows some details of my test the response time obtained from the post side the actual processing time in each workers from gunicorn log from these two figures it is clear that the response time of the requests from post side is different with the actual processing time in the worker to figure out the reason i found that the and requests used the same woker with pid this also happened for and requests why did gunicorn ignore other free wokers anying wrong in my config settings what s the logic in gunicorn to select wokers for each task thanks for any help in advance | 0 |
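One way to see why a just-freed worker can win the next connection while the other 19 sit idle: gunicorn itself never selects a worker. Every idle worker blocks in accept() on the shared listening socket and the kernel picks which one to wake, which in practice often favors the most recently idle process. The toy model below is a pure assumption, not gunicorn's code, and covers the case where the next connection is dispatched only after the previous one finishes:

```python
def dispatch(requests, n_workers):
    """Toy LIFO model of accept() wake-ups: idle workers sit on a stack
    and the most recently idle one is woken first. Gunicorn does no
    load balancing of its own; this only illustrates the kernel-side
    behavior, so the numbers and ordering are assumptions."""
    idle = [f"worker-{i}" for i in range(n_workers)]  # top of stack wakes first
    assignments = []
    for req in requests:
        worker = idle.pop()        # most recently idle worker wins the socket
        assignments.append((req, worker))
        idle.append(worker)        # request done; back into accept()
    return assignments
```

In this model two back-to-back requests land on the same worker (as with pid 56596 in the log) even though the other workers never run, so the answer to "what's the logic to select workers" is: there isn't one inside gunicorn; the kernel's wake-up order decides.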
14,377 | 10,773,946,497 | IssuesEvent | 2019-11-03 00:24:16 | Treasure-Hunting/App | https://api.github.com/repos/Treasure-Hunting/App | closed | Inject the fixture to use | infrastructure | $FIXTURE = 'second_data' is already configured on Heroku, so please update the Dockerfile and entrypoint.sh so that it can be injected. | 1.0 | Inject the fixture to use - $FIXTURE = 'second_data' is already configured on Heroku, so please update the Dockerfile and entrypoint.sh so that it can be injected. | infrastructure | inject the fixture to use fixture second data is already configured on heroku so please update the dockerfile and entrypoint sh so that it can be injected | 1
169,674 | 14,227,411,091 | IssuesEvent | 2020-11-18 01:13:47 | dotnet/diagnostics | https://api.github.com/repos/dotnet/diagnostics | closed | Create a documentation page for runtime events specific to CoreCLR | documentation p1 | We have https://docs.microsoft.com/en-us/dotnet/framework/performance/clr-etw-events that describe events emitted by CLR (from .NET Framework days) but these contain events that are no longer emitted by CoreCLR and some event payloads have changed.
We should create a page in docs.microsoft.com describing events that are specific to CoreCLR. | 1.0 | Create a documentation page for runtime events specific to CoreCLR - We have https://docs.microsoft.com/en-us/dotnet/framework/performance/clr-etw-events that describes events emitted by CLR (from .NET Framework days) but these contain events that are no longer emitted by CoreCLR and some event payloads have changed.
We should create a page in docs.microsoft.com describing events that are specific to CoreCLR. | non_infrastructure | create a documentation page for runtime events specific to coreclr we have that describe events emitted by clr from net framework days but these contain events that are no longer emitted by coreclr and some event payloads have changed we should create a page in docs microsoft com describing events that s specific to coreclr | 0
243,565 | 18,717,871,951 | IssuesEvent | 2021-11-03 08:18:43 | lab-antwerp-1/group1-HackYourFuture | https://api.github.com/repos/lab-antwerp-1/group1-HackYourFuture | closed | volunteer page wireframe | documentation | # create volunteer page wireframe
>this will be done on `volunteer-wireframe` branch
- [ ] copy .jpg file to /assets
- [ ] create volunteer-wireframe.md
- [ ] use md to link .jpg file
| 1.0 | volunteer page wireframe - # create volunteer page wireframe
> this will be done on `volunteer-wireframe` branch
- [ ] copy .jpg file to /assets
- [ ] create volunteer-wireframe.md
- [ ] use md to link .jpg file
| non_infrastructure | volunteer page wireframe create volunteer page wireframe this will be done on volunteer wireframe branch copy jpg file to assets create volunteer wireframe md use md to link jpg file | 0 |
123,269 | 12,196,043,679 | IssuesEvent | 2020-04-29 18:23:58 | kwk/test-llvm-bz-import-5 | https://api.github.com/repos/kwk/test-llvm-bz-import-5 | closed | Incorrect function calls in the tutorials for building the fadd, fmul, and fsub operations | BZ-BUG-STATUS: RESOLVED BZ-RESOLUTION: FIXED Documentation/General docs dummy import from bugzilla | This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=10316. | 1.0 | Incorrect function calls in the tutorials for building the fadd, fmul, and fsub operations - This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=10316. | non_infrastructure | incorrect function calls in the tutorials for building the fadd fmul and fsub operations this issue was imported from bugzilla | 0 |
18,577 | 13,055,912,017 | IssuesEvent | 2020-07-30 03:05:55 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | [fill-ratio] sphinx documentation is missing (Trac #1198) | Incomplete Migration Migrated from Trac infrastructure task | Migrated from https://code.icecube.wisc.edu/ticket/1198
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "There is only a rather short doxygen documentation. Either the documentation should be linked to the Sphinx documentation or the index.dox file should be transformed into an index.rst file. The project's maintainer should also be added at the top of the Sphinx documentation.",
"reporter": "kkrings",
"cc": "",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "infrastructure",
"summary": "[fill-ratio] sphinx documentation is missing",
"priority": "blocker",
"keywords": "",
"time": "2015-08-19T18:18:33",
"milestone": "",
"owner": "mjl5147",
"type": "task"
}
```
| 1.0 | [fill-ratio] sphinx documentation is missing (Trac #1198) - Migrated from https://code.icecube.wisc.edu/ticket/1198
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "There is only a rather short doxygen documentation. Either the documentation should be linked to the Sphinx documentation or the index.dox file should be transformed into an index.rst file. The project's maintainer should also be added at the top of the Sphinx documentation.",
"reporter": "kkrings",
"cc": "",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "infrastructure",
"summary": "[fill-ratio] sphinx documentation is missing",
"priority": "blocker",
"keywords": "",
"time": "2015-08-19T18:18:33",
"milestone": "",
"owner": "mjl5147",
"type": "task"
}
```
| infrastructure | sphinx documentation is missing trac migrated from json status closed changetime description there is only a rather short doxygen documentation either the documentation should be linked to the sphinx documentation or the index dox file should be transformed into an index rst file the project s maintainer should also be added at the top of the sphinx documentation reporter kkrings cc resolution fixed ts component infrastructure summary sphinx documentation is missing priority blocker keywords time milestone owner type task | 1 |
349,127 | 10,459,108,153 | IssuesEvent | 2019-09-20 10:06:08 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | [Config] Template CacheConfig CacheManager to import dynamic configs in identity.xml.j2 | Affected/5.9.0-Alpha Priority/High Severity/Minor Type/Improvement config | Template CacheConfig CacheManager to import dynamic configs in identity.xml.j2 as some components such as "Private Key JWT Client Authentication for OIDC" is not packed by default but when using will need cache configuration.
Ref: Step 4 of
https://is.docs.wso2.com/en/5.9.0/using-wso2-identity-server/private-key-jwt-client-authentication-for-oidc | 1.0 | [Config] Template CacheConfig CacheManager to import dynamic configs in identity.xml.j2 - Template CacheConfig CacheManager to import dynamic configs in identity.xml.j2 as some components such as "Private Key JWT Client Authentication for OIDC" is not packed by default but when using will need cache configuration.
Ref: Step 4 of
https://is.docs.wso2.com/en/5.9.0/using-wso2-identity-server/private-key-jwt-client-authentication-for-oidc | non_infrastructure | template cacheconfig cachemanager to import dynamic configs in identity xml template cacheconfig cachemanager to import dynamic configs in identity xml as some components such as private key jwt client authentication for oidc is not packed by default but when using will need cache configuration ref step of | 0 |
62,156 | 8,578,669,462 | IssuesEvent | 2018-11-13 06:18:10 | github/orchestrator | https://api.github.com/repos/github/orchestrator | closed | Installation documentation uses incorrect config sample filenames | documentation | The two files
```
/usr/local/orchestrator/orchestrator-sample-sqlite.conf.json
/usr/local/orchestrator/orchestrator-sample.conf.json
```
are created after installing with yum. The filenames do not match `orchestrator.conf.json.sample` mentioned in [installation documentation](https://github.com/github/orchestrator/blob/master/docs/install.md) and so are an impediment to anyone wishing to install orchestrator for the first time as you can't find the configuration files without having a good search around. | 1.0 | Installation documentation uses incorrect config sample filenames - The two files
```
/usr/local/orchestrator/orchestrator-sample-sqlite.conf.json
/usr/local/orchestrator/orchestrator-sample.conf.json
```
are created after installing with yum. The filenames do not match `orchestrator.conf.json.sample` mentioned in [installation documentation](https://github.com/github/orchestrator/blob/master/docs/install.md) and so are an impediment to anyone wishing to install orchestrator for the first time as you can't find the configuration files without having a good search around. | non_infrastructure | installation documentation uses incorrect config sample filenames the two files usr local orchestrator orchestrator sample sqlite conf json usr local orchestrator orchestrator sample conf json are created after installing with yum the filenames do not match orchestrator conf json sample mentioned in and so are an impediment to anyone wishing to install orchestrator for the first time as you can t find the configuration files without having a good search around | 0 |
61,683 | 7,494,890,511 | IssuesEvent | 2018-04-07 15:03:39 | wp-media/wp-rocket | https://api.github.com/repos/wp-media/wp-rocket | opened | Missing CDN data in admin UI | design minor | When you add a new Cname in the settings, the `reserved for` value is not displayed next to the Cname URL. As we can't expect our users to remember which Cnames is set for which type of files, this data should also be displayed.
Tagging you @edenpulse on it so that you can think about the design of it.

| 1.0 | Missing CDN data in admin UI - When you add a new Cname in the settings, the `reserved for` value is not displayed next to the Cname URL. As we can't expect our users to remember which Cnames is set for which type of files, this data should also be displayed.
Tagging you @edenpulse on it so that you can think about the design of it.

| non_infrastructure | missing cdn data in admin ui when you add a new cname in the settings the reserved for value is not displayed next to the cname url as we can t expect our users to remember which cnames is set for which type of files this data should also be displayed tagging you edenpulse on it so that you can think about the design of it | 0 |
15,530 | 11,574,773,402 | IssuesEvent | 2020-02-21 08:16:14 | SonarSource/sonarlint-visualstudio | https://api.github.com/repos/SonarSource/sonarlint-visualstudio | closed | Provide a Dogfood channel | Infrastructure Type: Task | For internal dogfood in SonarSource, would be great to have latest master build deployed at a location that could be configured as an additional extension gallery.
See [Create a private gallery for self-hosted Visual Studio extensions](https://devblogs.microsoft.com/visualstudio/create-a-private-gallery-for-self-hosted-visual-studio-extensions/) and the [Private Gallery Creator](https://github.com/madskristensen/PrivateGalleryCreator) extension.
If we wanted to publish dogfood builds publicly we could use http://vsixgallery.com/.
Other options:
* Azure blog storage: see http://www.cazzulino.com/azure-functions-vs-gallery.html
* Generic repos in JFrog Artifactory | 1.0 | Provide a Dogfood channel - For internal dogfood in SonarSource, would be great to have latest master build deployed at a location that could be configured as an additional extension gallery.
See [Create a private gallery for self-hosted Visual Studio extensions](https://devblogs.microsoft.com/visualstudio/create-a-private-gallery-for-self-hosted-visual-studio-extensions/) and the [Private Gallery Creator](https://github.com/madskristensen/PrivateGalleryCreator) extension.
If we wanted to publish dogfood builds publicly we could use http://vsixgallery.com/.
Other options:
* Azure blog storage: see http://www.cazzulino.com/azure-functions-vs-gallery.html
* Generic repos in JFrog Artifactory | infrastructure | provide a dogfood channel for internal dogfood in sonarsource would be great to have latest master build deployed at a location that could be configured as an additional extension gallery see and the extension if we wanted to publish dogfood builds publicly we could use other options azure blog storage see generic repos in jfrog artifactory | 1 |
534,764 | 15,648,467,695 | IssuesEvent | 2021-03-23 05:47:56 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | opened | [AsyncAPI] Deploy Sample API | Priority/High Type/Improvement | ### Describe your problem(s)
For a user who tries out AsyncAPIs, it is better to have the "Deploy Sample API" feature as there for REST APIs. | 1.0 | [AsyncAPI] Deploy Sample API - ### Describe your problem(s)
For a user who tries out AsyncAPIs, it is better to have the "Deploy Sample API" feature as there for REST APIs. | non_infrastructure | deploy sample api describe your problem s for a user who tries out asyncapis it is better to have the deploy sample api feature as there for rest apis | 0 |
6,642 | 6,541,797,432 | IssuesEvent | 2017-09-01 21:56:22 | dart-lang/site-webdev | https://api.github.com/repos/dart-lang/site-webdev | opened | API: Add entries to selected core dart libraries | enhancement Infrastructure | Namely for async, core, convert, html.
@kwalrath - let me know if this is still something that you would want. I believe that the main issue is that you wanted an indication that the links were external. | 1.0 | API: Add entries to selected core dart libraries - Namely for async, core, convert, html.
@kwalrath - let me know if this is still something that you would want. I believe that the main issue is that you wanted an indication that the links were external. | infrastructure | api add entries to selected core dart libraries namely for async core convert html kwalrath let me know if this is still something that you would want i believe that the main issue is that you wanted an indication that the links were external | 1 |
1,823 | 2,574,962,966 | IssuesEvent | 2015-02-11 20:01:51 | dotCMS/core | https://api.github.com/repos/dotCMS/core | closed | Fix Unit tests | Merged Type : Unit Testing | com.dotmarketing.business.IdentifierAPITest.testing404
com.dotmarketing.portlets.linkchecker.business.LinkCheckerAPITest.findInvalidLinks
com.dotmarketing.business.RoleAPITest.loadRolesForUser
http://cb.dotcms.com/view/master-3.1/job/Git_Master_JUnit_Runner_MSSQL_3.1/71/testReport/ | 1.0 | Fix Unit tests - com.dotmarketing.business.IdentifierAPITest.testing404
com.dotmarketing.portlets.linkchecker.business.LinkCheckerAPITest.findInvalidLinks
com.dotmarketing.business.RoleAPITest.loadRolesForUser
http://cb.dotcms.com/view/master-3.1/job/Git_Master_JUnit_Runner_MSSQL_3.1/71/testReport/ | non_infrastructure | fix unit tests com dotmarketing business identifierapitest com dotmarketing portlets linkchecker business linkcheckerapitest findinvalidlinks com dotmarketing business roleapitest loadrolesforuser | 0 |
16,317 | 11,911,636,779 | IssuesEvent | 2020-03-31 08:57:29 | raiden-network/raiden-services | https://api.github.com/repos/raiden-network/raiden-services | closed | Register test deployment services with test registry | Infrastructure :office: PFS :rocket: | We need to register the testnet deployments of the PFS with the service registry. | 1.0 | Register test deployment services with test registry - We need to register the testnet deployments of the PFS with the service registry. | infrastructure | register test deployment services with test registry we need to register the testnet deployments of the pfs with the service registry | 1 |
20,989 | 14,270,721,287 | IssuesEvent | 2020-11-21 08:40:08 | airyhq/airy | https://api.github.com/repos/airyhq/airy | closed | Linting cleanup | infrastructure | - [x] add prettier to docs
- [x] cleanup //:check target
- [x] move the lint.sh script to scripts | 1.0 | Linting cleanup - - [x] add prettier to docs
- [x] cleanup //:check target
- [x] move the lint.sh script to scripts | infrastructure | linting cleanup add prettier to docs cleanup check target move the lint sh script to scripts | 1 |
11,101 | 8,925,754,184 | IssuesEvent | 2019-01-22 00:31:02 | cxong/cdogs-sdl | https://api.github.com/repos/cxong/cdogs-sdl | closed | Create deb package | infrastructure | Also:
- Create separate cdogs-sdl and cdogs-sdl-data packages
- Submit to debian
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/4010647-create-deb-package?utm_campaign=plugin&utm_content=tracker%2F347422&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F347422&utm_medium=issues&utm_source=github).
</bountysource-plugin>
| 1.0 | Create deb package - Also:
- Create separate cdogs-sdl and cdogs-sdl-data packages
- Submit to debian
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/4010647-create-deb-package?utm_campaign=plugin&utm_content=tracker%2F347422&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F347422&utm_medium=issues&utm_source=github).
</bountysource-plugin>
| infrastructure | create deb package also create separate cdogs sdl and cdogs sdl data packages submit to debian want to back this issue we accept bounties via | 1 |
18,731 | 13,087,479,921 | IssuesEvent | 2020-08-02 12:30:18 | eslint/eslint | https://api.github.com/repos/eslint/eslint | closed | devdeps: leche has been deprecated | enhancement evaluating help wanted infrastructure | <!--
ESLint adheres to the [JS Foundation Code of Conduct](https://js.foundation/community/code-of-conduct).
This template is for requesting a change that is not a bug fix, rule change, or new rule. If you are here for another reason, please see below:
1. To report a bug: https://eslint.org/docs/developer-guide/contributing/reporting-bugs
2. To request a rule change: https://eslint.org/docs/developer-guide/contributing/rule-changes
3. To propose a new rule: https://eslint.org/docs/developer-guide/contributing/new-rules
4. If you have any questions, please stop by our chatroom: https://gitter.im/eslint/eslint
Note that leaving sections blank will make it difficult for us to troubleshoot and we may have to close the issue.
-->
**The version of ESLint you are using.**
v7.0.0(latest)
**The problem you want to solve.**
https://github.com/box/leche#deprecated
> Box has migrated to using Jest and no longer uses mocha or leche. We no longer support changes, pull requests, or upgrades to this package.
**Your take on the correct solution to problem.**
do we want to switch to jest, or just find an alternative?
**Are you willing to submit a pull request to implement this change?**
y | 1.0 | devdeps: leche has been deprecated - <!--
ESLint adheres to the [JS Foundation Code of Conduct](https://js.foundation/community/code-of-conduct).
This template is for requesting a change that is not a bug fix, rule change, or new rule. If you are here for another reason, please see below:
1. To report a bug: https://eslint.org/docs/developer-guide/contributing/reporting-bugs
2. To request a rule change: https://eslint.org/docs/developer-guide/contributing/rule-changes
3. To propose a new rule: https://eslint.org/docs/developer-guide/contributing/new-rules
4. If you have any questions, please stop by our chatroom: https://gitter.im/eslint/eslint
Note that leaving sections blank will make it difficult for us to troubleshoot and we may have to close the issue.
-->
**The version of ESLint you are using.**
v7.0.0(latest)
**The problem you want to solve.**
https://github.com/box/leche#deprecated
> Box has migrated to using Jest and no longer uses mocha or leche. We no longer support changes, pull requests, or upgrades to this package.
**Your take on the correct solution to problem.**
do we want to switch to jest, or just find an alternative?
**Are you willing to submit a pull request to implement this change?**
y | infrastructure | devdeps leche has been deprecated eslint adheres to the this template is for requesting a change that is not a bug fix rule change or new rule if you are here for another reason please see below to report a bug to request a rule change to propose a new rule if you have any questions please stop by our chatroom note that leaving sections blank will make it difficult for us to troubleshoot and we may have to close the issue the version of eslint you are using latest the problem you want to solve box has migrated to using jest and no longer uses mocha or leche we no longer support changes pull requests or upgrades to this package your take on the correct solution to problem do we want to switch to jest or just find an alternative are you willing to submit a pull request to implement this change y | 1 |
123,216 | 16,456,419,527 | IssuesEvent | 2021-05-21 13:10:45 | cagov/ui-claim-tracker | https://api.github.com/repos/cagov/ui-claim-tracker | closed | Continue refining the content to prep for launch | Design Size: S | ### Description
Prep content variations and align with product before content testing.
### Acceptance Criteria
- [x] Refine content drafts for MVP scenarios
- [x] Refine content drafts for the base state (aka what most claimants will see)
- [x] Refine content drafts for expired claims (aka no active claim)
<!--
_Note_ When you create this issue, remember to add:
- an assignee
- the project, so that it will show up in our kanban view
- a label for story points estimate (or comment at the assignee to request that they add an estimate)
- a label for priority
-->
| 1.0 | Continue refining the content to prep for launch - ### Description
Prep content variations and align with product before content testing.
### Acceptance Criteria
- [x] Refine content drafts for MVP scenarios
- [x] Refine content drafts for the base state (aka what most claimants will see)
- [x] Refine content drafts for expired claims (aka no active claim)
<!--
_Note_ When you create this issue, remember to add:
- an assignee
- the project, so that it will show up in our kanban view
- a label for story points estimate (or comment at the assignee to request that they add an estimate)
- a label for priority
-->
| non_infrastructure | continue refining the content to prep for launch description prep content variations and align with product before content testing acceptance criteria refine content drafts for mvp scenarios refine content drafts for the base state aka what most claimants will see refine content drafts for expired claims aka no active claim note when you create this issue remember to add an assignee the project so that it will show up in our kanban view a label for story points estimate or comment at the assignee to request that they add an estimate a label for priority | 0 |
21,431 | 14,565,447,232 | IssuesEvent | 2020-12-17 07:16:31 | grpc/grpc.io | https://api.github.com/repos/grpc/grpc.io | closed | Move site pages into content/en | cleanup/refactoring e0-minutes e1-hours infrastructure | We should consider moving the site pages into `content/en`.
IMHO, it makes sense to do this either a bit before, or in the context of, the **docsy migration** (#479).
As a result, it will make it easier to adopt other natural language translations of the site -- e.g., #408 was an initial PR for Korean. | 1.0 | Move site pages into content/en - We should consider moving the site pages into `content/en`.
IMHO, it makes sense to do this either a bit before, or in the context of, the **docsy migration** (#479).
As a result, it will make it easier to adopt other natural language translations of the site -- e.g., #408 was an initial PR for Korean. | infrastructure | move site pages into content en we should consider moving the site pages into content en imho it makes sense to do this either a bit before or in the context of the docsy migration as a result it will make it easier to adopt other natural language translations of the site e g was an initial pr for korean | 1 |
2,618 | 3,789,030,598 | IssuesEvent | 2016-03-21 16:32:33 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Tool that generates RoslynLanguageServices.wxs need to include KeyPath="Yes" for pkgdef files | 0 - Backlog Area-Infrastructure Bug | see this.
https://microsoft.sharepoint.com/teams/DD_Tenets/Acquisition/_layouts/15/WopiFrame2.aspx?sourcedoc={7d98ff50-19aa-4406-b9ca-000e7ea56073}&action=edit&wd=target%28Authoring%2Eone%7C11C4B80B%2D57B1%2D4DC0%2D982F%2DDBDF7A7231C1%2FWhat%20is%20KeyPath%3F%7C8D490882%2DBD45%2D4C98%2DA6CA%2D51E38F2860B6%2F%29
and comment from Raja from this PR.
https://devdiv.visualstudio.com/DefaultCollection/DevDiv/_git/VS/pullrequest/16460?view=discussion
we need to update our tool to include KeyPath="Yes" | 1.0 | Tool that generates RoslynLanguageServices.wxs need to include KeyPath="Yes" for pkgdef files - see this.
https://microsoft.sharepoint.com/teams/DD_Tenets/Acquisition/_layouts/15/WopiFrame2.aspx?sourcedoc={7d98ff50-19aa-4406-b9ca-000e7ea56073}&action=edit&wd=target%28Authoring%2Eone%7C11C4B80B%2D57B1%2D4DC0%2D982F%2DDBDF7A7231C1%2FWhat%20is%20KeyPath%3F%7C8D490882%2DBD45%2D4C98%2DA6CA%2D51E38F2860B6%2F%29
and comment from Raja from this PR.
https://devdiv.visualstudio.com/DefaultCollection/DevDiv/_git/VS/pullrequest/16460?view=discussion
we need to update our tool to include KeyPath="Yes" | infrastructure | tool that generates roslynlanguageservices wxs need to include keypath yes for pkgdef files see this and comment from raja from this pr we need to update our tool to include keypath yes | 1 |
8,680 | 11,811,427,508 | IssuesEvent | 2020-03-19 18:11:07 | pacificclimate/climate-explorer-data-prep | https://api.github.com/repos/pacificclimate/climate-explorer-data-prep | closed | Calculate baseline data for p2a summary table | process new data | The plan2adapt summary table shows a spread of how much possible variables change over time. In order to do this, we need to have "baseline" values, which are defined as the 1961-1990 period. The baseline dataset is the `anusplin` dataset.
The following values are missing:
anusplin hdd
-------------
- [x] generate hdd climatologies
- [x] index hdd climatological data
anusplin prsn
-------------
- [x] generate prsn data
- [x] generate prsn climatologies
- [x] index prsn climatological data
anusplin fdETCCDI
-------------------
- [x] generate fdETCCDI data
- [x] generate fdETCCDI climatologies
- [x] index fdETCCDI climatological data | 1.0 | Calculate baseline data for p2a summary table - The plan2adapt summary table shows a spread of how much possible variables change over time. In order to do this, we need to have "baseline" values, which are defined as the 1961-1990 period. The baseline dataset is the `anusplin` dataset.
The following values are missing:
anusplin hdd
-------------
- [x] generate hdd climatologies
- [x] index hdd climatological data
anusplin prsn
-------------
- [x] generate prsn data
- [x] generate prsn climatologies
- [x] index prsn climatological data
anusplin fdETCCDI
-------------------
- [x] generate fdETCCDI data
- [x] generate fdETCCDI climatologies
- [x] index fdETCCDI climatological data | non_infrastructure | calculate baseline data for summary table the summary table shows a spread of how much possible variables change over time in order to do this we need to have baseline values which are defined as the period the baseline dataset is the anusplin dataset the following values are missing anusplin hdd generate hdd climatologies index hdd climatological data anusplin prsn generate prsn data generate prsn climatologies index prsn climatological data anusplin fdetccdi generate fdetccdi data generate fdetccdi climatologies index fdetccdi climatological data | 0 |
98,747 | 30,105,751,193 | IssuesEvent | 2023-06-30 00:58:06 | facebookincubator/velox | https://api.github.com/repos/facebookincubator/velox | closed | macos-build-macos-intel fails intermittently when git clone homebrew-core | build | ### Problem description
The macos-build-macos-intel job in CircleCI fails intermittently with the following failure. An example is https://app.circleci.com/pipelines/github/facebookincubator/velox/27990/workflows/c0eb07e8-0e26-4cdc-87e7-d75440c95e12/jobs/176570.
Cloning into '/Users/distiller/deps/Library/Taps/homebrew/homebrew-core'...
remote: Enumerating objects: 12258, done.
remote: Counting objects: 100% (12251/12251), done.
remote: fatal: object 3d33edf328ac7e52d9c1c025df61e29c001a006c cannot be read
remote: aborting due to possible repository corruption on the remote side.
fatal: early EOF
fatal: fetch-pack: invalid index-pack output
Error: Failure while executing; `git clone https://github.com/Homebrew/homebrew-core /Users/distiller/deps/Library/Taps/homebrew/homebrew-core --origin=origin --template=` exited with 128.
### System information
CircleCI
### CMake log
```bash
#!/bin/bash --login -eo pipefail
export PATH=~/deps/bin:~/deps/opt/bison/bin:~/deps/opt/flex/bin:${PATH}
mkdir -p .ccache
export CCACHE_DIR=$(pwd)/.ccache
ccache -sz -M 5Gi
brew install openssl@1.1
brew link --overwrite --force openssl@1.1
export PATH="/Users/distiller/deps/opt/openssl@1.1/bin:$PATH"
export OPENSSL_ROOT_DIR=$(brew --prefix openssl@1.1)
cmake -B _build/debug -GNinja -DTREAT_WARNINGS_AS_ERRORS=1 -DENABLE_ALL_WARNINGS=1 -DCMAKE_BUILD_TYPE=Debug -DCMAKE_PREFIX_PATH=~/deps -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DFLEX_INCLUDE_DIR=~/deps/opt/flex/include
ninja -C _build/debug
ccache -s
Cacheable calls: 1417 / 1417 (100.0%)
Hits: 34 / 1417 ( 2.40%)
Direct: 33 / 34 (97.06%)
Preprocessed: 1 / 34 ( 2.94%)
Misses: 1383 / 1417 (97.60%)
Local storage:
Cache size (GiB): 1.1 / 5.0 (21.85%)
Hits: 34 / 1417 ( 2.40%)
Misses: 1383 / 1417 (97.60%)
Statistics zeroed
Set cache size limit to 5.0 GiB
Initialized empty Git repository in /Users/distiller/deps/.git/
remote: Enumerating objects: 243157, done.
remote: Counting objects: 100% (841/841), done.
remote: Compressing objects: 100% (495/495), done.
remote: Total 243157 (delta 381), reused 747 (delta 309), pack-reused 242316
Receiving objects: 100% (243157/243157), 71.20 MiB | 53.41 MiB/s, done.
Running `brew update --auto-update`...
Resolving deltas: 100% (177871/177871), done.
From ssh://github.com/Homebrew/brew
* [new branch] master -> origin/master
* [new tag] 0.1 -> 0.1
* [new tag] 0.2 -> 0.2
* [new tag] 0.3 -> 0.3
* [new tag] 0.4 -> 0.4
* [new tag] 0.5 -> 0.5
* [new tag] 0.6 -> 0.6
* [new tag] 0.7 -> 0.7
* [new tag] 0.7.1 -> 0.7.1
* [new tag] 0.8 -> 0.8
* [new tag] 0.8.1 -> 0.8.1
* [new tag] 0.9 -> 0.9
* [new tag] 0.9.1 -> 0.9.1
* [new tag] 0.9.2 -> 0.9.2
* [new tag] 0.9.3 -> 0.9.3
* [new tag] 0.9.4 -> 0.9.4
* [new tag] 0.9.5 -> 0.9.5
* [new tag] 0.9.8 -> 0.9.8
* [new tag] 0.9.9 -> 0.9.9
* [new tag] 1.0.0 -> 1.0.0
* [new tag] 1.0.1 -> 1.0.1
* [new tag] 1.0.2 -> 1.0.2
* [new tag] 1.0.3 -> 1.0.3
* [new tag] 1.0.4 -> 1.0.4
* [new tag] 1.0.5 -> 1.0.5
* [new tag] 1.0.6 -> 1.0.6
* [new tag] 1.0.7 -> 1.0.7
* [new tag] 1.0.8 -> 1.0.8
* [new tag] 1.0.9 -> 1.0.9
* [new tag] 1.1.0 -> 1.1.0
* [new tag] 1.1.1 -> 1.1.1
* [new tag] 1.1.10 -> 1.1.10
* [new tag] 1.1.11 -> 1.1.11
* [new tag] 1.1.12 -> 1.1.12
* [new tag] 1.1.13 -> 1.1.13
* [new tag] 1.1.2 -> 1.1.2
* [new tag] 1.1.3 -> 1.1.3
* [new tag] 1.1.4 -> 1.1.4
* [new tag] 1.1.5 -> 1.1.5
* [new tag] 1.1.6 -> 1.1.6
* [new tag] 1.1.7 -> 1.1.7
* [new tag] 1.1.8 -> 1.1.8
* [new tag] 1.1.9 -> 1.1.9
* [new tag] 1.2.0 -> 1.2.0
* [new tag] 1.2.1 -> 1.2.1
* [new tag] 1.2.2 -> 1.2.2
* [new tag] 1.2.3 -> 1.2.3
* [new tag] 1.2.4 -> 1.2.4
* [new tag] 1.2.5 -> 1.2.5
* [new tag] 1.2.6 -> 1.2.6
* [new tag] 1.3.0 -> 1.3.0
* [new tag] 1.3.1 -> 1.3.1
* [new tag] 1.3.2 -> 1.3.2
* [new tag] 1.3.3 -> 1.3.3
* [new tag] 1.3.4 -> 1.3.4
* [new tag] 1.3.5 -> 1.3.5
* [new tag] 1.3.6 -> 1.3.6
* [new tag] 1.3.7 -> 1.3.7
* [new tag] 1.3.8 -> 1.3.8
* [new tag] 1.3.9 -> 1.3.9
* [new tag] 1.4.0 -> 1.4.0
* [new tag] 1.4.1 -> 1.4.1
* [new tag] 1.4.2 -> 1.4.2
* [new tag] 1.4.3 -> 1.4.3
* [new tag] 1.5.0 -> 1.5.0
* [new tag] 1.5.1 -> 1.5.1
* [new tag] 1.5.10 -> 1.5.10
* [new tag] 1.5.11 -> 1.5.11
* [new tag] 1.5.12 -> 1.5.12
* [new tag] 1.5.13 -> 1.5.13
* [new tag] 1.5.14 -> 1.5.14
* [new tag] 1.5.2 -> 1.5.2
* [new tag] 1.5.3 -> 1.5.3
* [new tag] 1.5.4 -> 1.5.4
* [new tag] 1.5.5 -> 1.5.5
* [new tag] 1.5.6 -> 1.5.6
* [new tag] 1.5.7 -> 1.5.7
* [new tag] 1.5.8 -> 1.5.8
* [new tag] 1.5.9 -> 1.5.9
* [new tag] 1.6.0 -> 1.6.0
* [new tag] 1.6.1 -> 1.6.1
* [new tag] 1.6.10 -> 1.6.10
* [new tag] 1.6.11 -> 1.6.11
* [new tag] 1.6.12 -> 1.6.12
* [new tag] 1.6.13 -> 1.6.13
* [new tag] 1.6.14 -> 1.6.14
* [new tag] 1.6.15 -> 1.6.15
* [new tag] 1.6.16 -> 1.6.16
* [new tag] 1.6.17 -> 1.6.17
* [new tag] 1.6.2 -> 1.6.2
* [new tag] 1.6.3 -> 1.6.3
* [new tag] 1.6.4 -> 1.6.4
* [new tag] 1.6.5 -> 1.6.5
* [new tag] 1.6.6 -> 1.6.6
* [new tag] 1.6.7 -> 1.6.7
* [new tag] 1.6.8 -> 1.6.8
* [new tag] 1.6.9 -> 1.6.9
* [new tag] 1.7.0 -> 1.7.0
* [new tag] 1.7.1 -> 1.7.1
* [new tag] 1.7.2 -> 1.7.2
* [new tag] 1.7.3 -> 1.7.3
* [new tag] 1.7.4 -> 1.7.4
* [new tag] 1.7.5 -> 1.7.5
* [new tag] 1.7.6 -> 1.7.6
* [new tag] 1.7.7 -> 1.7.7
* [new tag] 1.8.0 -> 1.8.0
* [new tag] 1.8.1 -> 1.8.1
* [new tag] 1.8.2 -> 1.8.2
* [new tag] 1.8.3 -> 1.8.3
* [new tag] 1.8.4 -> 1.8.4
* [new tag] 1.8.5 -> 1.8.5
* [new tag] 1.8.6 -> 1.8.6
* [new tag] 1.9.0 -> 1.9.0
* [new tag] 1.9.1 -> 1.9.1
* [new tag] 1.9.2 -> 1.9.2
* [new tag] 1.9.3 -> 1.9.3
* [new tag] 2.0.0 -> 2.0.0
* [new tag] 2.0.1 -> 2.0.1
* [new tag] 2.0.2 -> 2.0.2
* [new tag] 2.0.3 -> 2.0.3
* [new tag] 2.0.4 -> 2.0.4
* [new tag] 2.0.5 -> 2.0.5
* [new tag] 2.0.6 -> 2.0.6
* [new tag] 2.1.0 -> 2.1.0
* [new tag] 2.1.1 -> 2.1.1
* [new tag] 2.1.10 -> 2.1.10
* [new tag] 2.1.11 -> 2.1.11
* [new tag] 2.1.12 -> 2.1.12
* [new tag] 2.1.13 -> 2.1.13
* [new tag] 2.1.14 -> 2.1.14
* [new tag] 2.1.15 -> 2.1.15
* [new tag] 2.1.16 -> 2.1.16
* [new tag] 2.1.2 -> 2.1.2
* [new tag] 2.1.3 -> 2.1.3
* [new tag] 2.1.4 -> 2.1.4
* [new tag] 2.1.5 -> 2.1.5
* [new tag] 2.1.6 -> 2.1.6
* [new tag] 2.1.7 -> 2.1.7
* [new tag] 2.1.8 -> 2.1.8
* [new tag] 2.1.9 -> 2.1.9
* [new tag] 2.2.0 -> 2.2.0
* [new tag] 2.2.1 -> 2.2.1
* [new tag] 2.2.10 -> 2.2.10
* [new tag] 2.2.11 -> 2.2.11
* [new tag] 2.2.12 -> 2.2.12
* [new tag] 2.2.13 -> 2.2.13
* [new tag] 2.2.14 -> 2.2.14
* [new tag] 2.2.15 -> 2.2.15
* [new tag] 2.2.16 -> 2.2.16
* [new tag] 2.2.17 -> 2.2.17
* [new tag] 2.2.2 -> 2.2.2
* [new tag] 2.2.3 -> 2.2.3
* [new tag] 2.2.4 -> 2.2.4
* [new tag] 2.2.5 -> 2.2.5
* [new tag] 2.2.6 -> 2.2.6
* [new tag] 2.2.7 -> 2.2.7
* [new tag] 2.2.8 -> 2.2.8
* [new tag] 2.2.9 -> 2.2.9
* [new tag] 2.3.0 -> 2.3.0
* [new tag] 2.4.0 -> 2.4.0
* [new tag] 2.4.1 -> 2.4.1
* [new tag] 2.4.10 -> 2.4.10
* [new tag] 2.4.11 -> 2.4.11
* [new tag] 2.4.12 -> 2.4.12
* [new tag] 2.4.13 -> 2.4.13
* [new tag] 2.4.14 -> 2.4.14
* [new tag] 2.4.15 -> 2.4.15
* [new tag] 2.4.16 -> 2.4.16
* [new tag] 2.4.2 -> 2.4.2
* [new tag] 2.4.3 -> 2.4.3
* [new tag] 2.4.4 -> 2.4.4
* [new tag] 2.4.5 -> 2.4.5
* [new tag] 2.4.6 -> 2.4.6
* [new tag] 2.4.7 -> 2.4.7
* [new tag] 2.4.8 -> 2.4.8
* [new tag] 2.4.9 -> 2.4.9
* [new tag] 2.5.0 -> 2.5.0
* [new tag] 2.5.1 -> 2.5.1
* [new tag] 2.5.10 -> 2.5.10
* [new tag] 2.5.11 -> 2.5.11
* [new tag] 2.5.12 -> 2.5.12
* [new tag] 2.5.2 -> 2.5.2
* [new tag] 2.5.3 -> 2.5.3
* [new tag] 2.5.4 -> 2.5.4
* [new tag] 2.5.5 -> 2.5.5
* [new tag] 2.5.6 -> 2.5.6
* [new tag] 2.5.7 -> 2.5.7
* [new tag] 2.5.8 -> 2.5.8
* [new tag] 2.5.9 -> 2.5.9
* [new tag] 2.6.0 -> 2.6.0
* [new tag] 2.6.1 -> 2.6.1
* [new tag] 2.6.2 -> 2.6.2
* [new tag] 2.7.0 -> 2.7.0
* [new tag] 2.7.1 -> 2.7.1
* [new tag] 2.7.2 -> 2.7.2
* [new tag] 2.7.3 -> 2.7.3
* [new tag] 2.7.4 -> 2.7.4
* [new tag] 2.7.5 -> 2.7.5
* [new tag] 2.7.6 -> 2.7.6
* [new tag] 2.7.7 -> 2.7.7
* [new tag] 3.0.0 -> 3.0.0
* [new tag] 3.0.1 -> 3.0.1
* [new tag] 3.0.10 -> 3.0.10
* [new tag] 3.0.11 -> 3.0.11
* [new tag] 3.0.2 -> 3.0.2
* [new tag] 3.0.3 -> 3.0.3
* [new tag] 3.0.4 -> 3.0.4
* [new tag] 3.0.5 -> 3.0.5
* [new tag] 3.0.6 -> 3.0.6
* [new tag] 3.0.7 -> 3.0.7
* [new tag] 3.0.8 -> 3.0.8
* [new tag] 3.0.9 -> 3.0.9
* [new tag] 3.1.0 -> 3.1.0
* [new tag] 3.1.1 -> 3.1.1
* [new tag] 3.1.10 -> 3.1.10
* [new tag] 3.1.11 -> 3.1.11
* [new tag] 3.1.12 -> 3.1.12
* [new tag] 3.1.2 -> 3.1.2
* [new tag] 3.1.3 -> 3.1.3
* [new tag] 3.1.4 -> 3.1.4
* [new tag] 3.1.5 -> 3.1.5
* [new tag] 3.1.6 -> 3.1.6
* [new tag] 3.1.7 -> 3.1.7
* [new tag] 3.1.8 -> 3.1.8
* [new tag] 3.1.9 -> 3.1.9
* [new tag] 3.2.0 -> 3.2.0
* [new tag] 3.2.1 -> 3.2.1
* [new tag] 3.2.10 -> 3.2.10
* [new tag] 3.2.11 -> 3.2.11
* [new tag] 3.2.12 -> 3.2.12
* [new tag] 3.2.13 -> 3.2.13
* [new tag] 3.2.14 -> 3.2.14
* [new tag] 3.2.15 -> 3.2.15
* [new tag] 3.2.16 -> 3.2.16
* [new tag] 3.2.17 -> 3.2.17
* [new tag] 3.2.2 -> 3.2.2
* [new tag] 3.2.3 -> 3.2.3
* [new tag] 3.2.4 -> 3.2.4
* [new tag] 3.2.5 -> 3.2.5
* [new tag] 3.2.6 -> 3.2.6
* [new tag] 3.2.7 -> 3.2.7
* [new tag] 3.2.8 -> 3.2.8
* [new tag] 3.2.9 -> 3.2.9
* [new tag] 3.3.0 -> 3.3.0
* [new tag] 3.3.1 -> 3.3.1
* [new tag] 3.3.10 -> 3.3.10
* [new tag] 3.3.11 -> 3.3.11
* [new tag] 3.3.12 -> 3.3.12
* [new tag] 3.3.13 -> 3.3.13
* [new tag] 3.3.14 -> 3.3.14
* [new tag] 3.3.15 -> 3.3.15
* [new tag] 3.3.16 -> 3.3.16
* [new tag] 3.3.2 -> 3.3.2
* [new tag] 3.3.3 -> 3.3.3
* [new tag] 3.3.4 -> 3.3.4
* [new tag] 3.3.5 -> 3.3.5
* [new tag] 3.3.6 -> 3.3.6
* [new tag] 3.3.7 -> 3.3.7
* [new tag] 3.3.8 -> 3.3.8
* [new tag] 3.3.9 -> 3.3.9
* [new tag] 3.4.0 -> 3.4.0
* [new tag] 3.4.1 -> 3.4.1
* [new tag] 3.4.10 -> 3.4.10
* [new tag] 3.4.11 -> 3.4.11
* [new tag] 3.4.2 -> 3.4.2
* [new tag] 3.4.3 -> 3.4.3
* [new tag] 3.4.4 -> 3.4.4
* [new tag] 3.4.5 -> 3.4.5
* [new tag] 3.4.6 -> 3.4.6
* [new tag] 3.4.7 -> 3.4.7
* [new tag] 3.4.8 -> 3.4.8
* [new tag] 3.4.9 -> 3.4.9
* [new tag] 3.5.0 -> 3.5.0
* [new tag] 3.5.1 -> 3.5.1
* [new tag] 3.5.10 -> 3.5.10
* [new tag] 3.5.2 -> 3.5.2
* [new tag] 3.5.3 -> 3.5.3
* [new tag] 3.5.4 -> 3.5.4
* [new tag] 3.5.5 -> 3.5.5
* [new tag] 3.5.6 -> 3.5.6
* [new tag] 3.5.7 -> 3.5.7
* [new tag] 3.5.8 -> 3.5.8
* [new tag] 3.5.9 -> 3.5.9
* [new tag] 3.6.0 -> 3.6.0
* [new tag] 3.6.1 -> 3.6.1
* [new tag] 3.6.10 -> 3.6.10
* [new tag] 3.6.11 -> 3.6.11
* [new tag] 3.6.12 -> 3.6.12
* [new tag] 3.6.13 -> 3.6.13
* [new tag] 3.6.14 -> 3.6.14
* [new tag] 3.6.15 -> 3.6.15
* [new tag] 3.6.16 -> 3.6.16
* [new tag] 3.6.17 -> 3.6.17
* [new tag] 3.6.18 -> 3.6.18
* [new tag] 3.6.19 -> 3.6.19
* [new tag] 3.6.2 -> 3.6.2
* [new tag] 3.6.20 -> 3.6.20
* [new tag] 3.6.21 -> 3.6.21
* [new tag] 3.6.3 -> 3.6.3
* [new tag] 3.6.4 -> 3.6.4
* [new tag] 3.6.5 -> 3.6.5
* [new tag] 3.6.6 -> 3.6.6
* [new tag] 3.6.7 -> 3.6.7
* [new tag] 3.6.8 -> 3.6.8
* [new tag] 3.6.9 -> 3.6.9
* [new tag] 4.0.0 -> 4.0.0
* [new tag] 4.0.1 -> 4.0.1
* [new tag] 4.0.10 -> 4.0.10
* [new tag] 4.0.11 -> 4.0.11
* [new tag] 4.0.12 -> 4.0.12
* [new tag] 4.0.13 -> 4.0.13
* [new tag] 4.0.14 -> 4.0.14
* [new tag] 4.0.15 -> 4.0.15
* [new tag] 4.0.16 -> 4.0.16
* [new tag] 4.0.17 -> 4.0.17
* [new tag] 4.0.18 -> 4.0.18
* [new tag] 4.0.19 -> 4.0.19
* [new tag] 4.0.2 -> 4.0.2
* [new tag] 4.0.20 -> 4.0.20
* [new tag] 4.0.21 -> 4.0.21
* [new tag] 4.0.22 -> 4.0.22
* [new tag] 4.0.23 -> 4.0.23
* [new tag] 4.0.24 -> 4.0.24
* [new tag] 4.0.25 -> 4.0.25
* [new tag] 4.0.26 -> 4.0.26
* [new tag] 4.0.3 -> 4.0.3
* [new tag] 4.0.4 -> 4.0.4
* [new tag] 4.0.5 -> 4.0.5
* [new tag] 4.0.6 -> 4.0.6
* [new tag] 4.0.7 -> 4.0.7
* [new tag] 4.0.8 -> 4.0.8
* [new tag] 4.0.9 -> 4.0.9
HEAD is now at 36fc91b7c Merge pull request #15607 from MikeMcQuaid/eval_all_api
==> Homebrew has enabled anonymous aggregate formula and cask analytics.
Read the analytics documentation (and how to opt-out) here:
https://docs.brew.sh/Analytics
No analytics have been recorded yet (nor will be during this `brew` run).
==> Homebrew is run entirely by unpaid volunteers. Please consider donating:
https://github.com/Homebrew/brew#donations
==> Tapping homebrew/core
Cloning into '/Users/distiller/deps/Library/Taps/homebrew/homebrew-core'...
remote: Enumerating objects: 12258, done.
remote: Counting objects: 100% (12251/12251), done.
remote: fatal: object 3d33edf328ac7e52d9c1c025df61e29c001a006c cannot be read
remote: aborting due to possible repository corruption on the remote side.
fatal: early EOF
fatal: fetch-pack: invalid index-pack output
Error: Failure while executing; `git clone https://github.com/Homebrew/homebrew-core /Users/distiller/deps/Library/Taps/homebrew/homebrew-core --origin=origin --template=` exited with 128.
==> Tapping homebrew/core
Cloning into '/Users/distiller/deps/Library/Taps/homebrew/homebrew-core'...
remote: Enumerating objects: 12258, done.
remote: Counting objects: 100% (12251/12251), done.
remote: fatal: object 3d33edf328ac7e52d9c1c025df61e29c001a006c cannot be read
remote: aborting due to possible repository corruption on the remote side.
fatal: early EOF
fatal: fetch-pack: invalid index-pack output
Error: Failure while executing; `git clone https://github.com/Homebrew/homebrew-core /Users/distiller/deps/Library/Taps/homebrew/homebrew-core --origin=origin --template=` exited with 128.
Exited with code exit status 1
CircleCI received exit code 1
```
| 1.0 | macos-build-macos-intel fails intermittently when git clone homebrew-core - ### Problem description
The macos-build-macos-intel job in CircleCI fails intermittently with the following failure. An example is https://app.circleci.com/pipelines/github/facebookincubator/velox/27990/workflows/c0eb07e8-0e26-4cdc-87e7-d75440c95e12/jobs/176570.
Cloning into '/Users/distiller/deps/Library/Taps/homebrew/homebrew-core'...
remote: Enumerating objects: 12258, done.
remote: Counting objects: 100% (12251/12251), done.
remote: fatal: object 3d33edf328ac7e52d9c1c025df61e29c001a006c cannot be read
remote: aborting due to possible repository corruption on the remote side.
fatal: early EOF
fatal: fetch-pack: invalid index-pack output
Error: Failure while executing; `git clone https://github.com/Homebrew/homebrew-core /Users/distiller/deps/Library/Taps/homebrew/homebrew-core --origin=origin --template=` exited with 128.
### System information
CircleCI
### CMake log
```bash
#!/bin/bash --login -eo pipefail
export PATH=~/deps/bin:~/deps/opt/bison/bin:~/deps/opt/flex/bin:${PATH}
mkdir -p .ccache
export CCACHE_DIR=$(pwd)/.ccache
ccache -sz -M 5Gi
brew install openssl@1.1
brew link --overwrite --force openssl@1.1
export PATH="/Users/distiller/deps/opt/openssl@1.1/bin:$PATH"
export OPENSSL_ROOT_DIR=$(brew --prefix openssl@1.1)
cmake -B _build/debug -GNinja -DTREAT_WARNINGS_AS_ERRORS=1 -DENABLE_ALL_WARNINGS=1 -DCMAKE_BUILD_TYPE=Debug -DCMAKE_PREFIX_PATH=~/deps -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DFLEX_INCLUDE_DIR=~/deps/opt/flex/include
ninja -C _build/debug
ccache -s
Cacheable calls: 1417 / 1417 (100.0%)
Hits: 34 / 1417 ( 2.40%)
Direct: 33 / 34 (97.06%)
Preprocessed: 1 / 34 ( 2.94%)
Misses: 1383 / 1417 (97.60%)
Local storage:
Cache size (GiB): 1.1 / 5.0 (21.85%)
Hits: 34 / 1417 ( 2.40%)
Misses: 1383 / 1417 (97.60%)
Statistics zeroed
Set cache size limit to 5.0 GiB
Initialized empty Git repository in /Users/distiller/deps/.git/
remote: Enumerating objects: 243157, done.
remote: Counting objects: 100% (841/841), done.
remote: Compressing objects: 100% (495/495), done.
remote: Total 243157 (delta 381), reused 747 (delta 309), pack-reused 242316
Receiving objects: 100% (243157/243157), 71.20 MiB | 53.41 MiB/s, done.
Running `brew update --auto-update`...
Resolving deltas: 100% (177871/177871), done.
From ssh://github.com/Homebrew/brew
* [new branch] master -> origin/master
* [new tag] 0.1 -> 0.1
* [new tag] 0.2 -> 0.2
* [new tag] 0.3 -> 0.3
* [new tag] 0.4 -> 0.4
* [new tag] 0.5 -> 0.5
* [new tag] 0.6 -> 0.6
* [new tag] 0.7 -> 0.7
* [new tag] 0.7.1 -> 0.7.1
* [new tag] 0.8 -> 0.8
* [new tag] 0.8.1 -> 0.8.1
* [new tag] 0.9 -> 0.9
* [new tag] 0.9.1 -> 0.9.1
* [new tag] 0.9.2 -> 0.9.2
* [new tag] 0.9.3 -> 0.9.3
* [new tag] 0.9.4 -> 0.9.4
* [new tag] 0.9.5 -> 0.9.5
* [new tag] 0.9.8 -> 0.9.8
* [new tag] 0.9.9 -> 0.9.9
* [new tag] 1.0.0 -> 1.0.0
* [new tag] 1.0.1 -> 1.0.1
* [new tag] 1.0.2 -> 1.0.2
* [new tag] 1.0.3 -> 1.0.3
* [new tag] 1.0.4 -> 1.0.4
* [new tag] 1.0.5 -> 1.0.5
* [new tag] 1.0.6 -> 1.0.6
* [new tag] 1.0.7 -> 1.0.7
* [new tag] 1.0.8 -> 1.0.8
* [new tag] 1.0.9 -> 1.0.9
* [new tag] 1.1.0 -> 1.1.0
* [new tag] 1.1.1 -> 1.1.1
* [new tag] 1.1.10 -> 1.1.10
* [new tag] 1.1.11 -> 1.1.11
* [new tag] 1.1.12 -> 1.1.12
* [new tag] 1.1.13 -> 1.1.13
* [new tag] 1.1.2 -> 1.1.2
* [new tag] 1.1.3 -> 1.1.3
* [new tag] 1.1.4 -> 1.1.4
* [new tag] 1.1.5 -> 1.1.5
* [new tag] 1.1.6 -> 1.1.6
* [new tag] 1.1.7 -> 1.1.7
* [new tag] 1.1.8 -> 1.1.8
* [new tag] 1.1.9 -> 1.1.9
* [new tag] 1.2.0 -> 1.2.0
* [new tag] 1.2.1 -> 1.2.1
* [new tag] 1.2.2 -> 1.2.2
* [new tag] 1.2.3 -> 1.2.3
* [new tag] 1.2.4 -> 1.2.4
* [new tag] 1.2.5 -> 1.2.5
* [new tag] 1.2.6 -> 1.2.6
* [new tag] 1.3.0 -> 1.3.0
* [new tag] 1.3.1 -> 1.3.1
* [new tag] 1.3.2 -> 1.3.2
* [new tag] 1.3.3 -> 1.3.3
* [new tag] 1.3.4 -> 1.3.4
* [new tag] 1.3.5 -> 1.3.5
* [new tag] 1.3.6 -> 1.3.6
* [new tag] 1.3.7 -> 1.3.7
* [new tag] 1.3.8 -> 1.3.8
* [new tag] 1.3.9 -> 1.3.9
* [new tag] 1.4.0 -> 1.4.0
* [new tag] 1.4.1 -> 1.4.1
* [new tag] 1.4.2 -> 1.4.2
* [new tag] 1.4.3 -> 1.4.3
* [new tag] 1.5.0 -> 1.5.0
* [new tag] 1.5.1 -> 1.5.1
* [new tag] 1.5.10 -> 1.5.10
* [new tag] 1.5.11 -> 1.5.11
* [new tag] 1.5.12 -> 1.5.12
* [new tag] 1.5.13 -> 1.5.13
* [new tag] 1.5.14 -> 1.5.14
* [new tag] 1.5.2 -> 1.5.2
* [new tag] 1.5.3 -> 1.5.3
* [new tag] 1.5.4 -> 1.5.4
* [new tag] 1.5.5 -> 1.5.5
* [new tag] 1.5.6 -> 1.5.6
* [new tag] 1.5.7 -> 1.5.7
* [new tag] 1.5.8 -> 1.5.8
* [new tag] 1.5.9 -> 1.5.9
* [new tag] 1.6.0 -> 1.6.0
* [new tag] 1.6.1 -> 1.6.1
* [new tag] 1.6.10 -> 1.6.10
* [new tag] 1.6.11 -> 1.6.11
* [new tag] 1.6.12 -> 1.6.12
* [new tag] 1.6.13 -> 1.6.13
* [new tag] 1.6.14 -> 1.6.14
* [new tag] 1.6.15 -> 1.6.15
* [new tag] 1.6.16 -> 1.6.16
* [new tag] 1.6.17 -> 1.6.17
* [new tag] 1.6.2 -> 1.6.2
* [new tag] 1.6.3 -> 1.6.3
* [new tag] 1.6.4 -> 1.6.4
* [new tag] 1.6.5 -> 1.6.5
* [new tag] 1.6.6 -> 1.6.6
* [new tag] 1.6.7 -> 1.6.7
* [new tag] 1.6.8 -> 1.6.8
* [new tag] 1.6.9 -> 1.6.9
* [new tag] 1.7.0 -> 1.7.0
* [new tag] 1.7.1 -> 1.7.1
* [new tag] 1.7.2 -> 1.7.2
* [new tag] 1.7.3 -> 1.7.3
* [new tag] 1.7.4 -> 1.7.4
* [new tag] 1.7.5 -> 1.7.5
* [new tag] 1.7.6 -> 1.7.6
* [new tag] 1.7.7 -> 1.7.7
* [new tag] 1.8.0 -> 1.8.0
* [new tag] 1.8.1 -> 1.8.1
* [new tag] 1.8.2 -> 1.8.2
* [new tag] 1.8.3 -> 1.8.3
* [new tag] 1.8.4 -> 1.8.4
* [new tag] 1.8.5 -> 1.8.5
* [new tag] 1.8.6 -> 1.8.6
* [new tag] 1.9.0 -> 1.9.0
* [new tag] 1.9.1 -> 1.9.1
* [new tag] 1.9.2 -> 1.9.2
* [new tag] 1.9.3 -> 1.9.3
* [new tag] 2.0.0 -> 2.0.0
* [new tag] 2.0.1 -> 2.0.1
* [new tag] 2.0.2 -> 2.0.2
* [new tag] 2.0.3 -> 2.0.3
* [new tag] 2.0.4 -> 2.0.4
* [new tag] 2.0.5 -> 2.0.5
* [new tag] 2.0.6 -> 2.0.6
* [new tag] 2.1.0 -> 2.1.0
* [new tag] 2.1.1 -> 2.1.1
* [new tag] 2.1.10 -> 2.1.10
* [new tag] 2.1.11 -> 2.1.11
* [new tag] 2.1.12 -> 2.1.12
* [new tag] 2.1.13 -> 2.1.13
* [new tag] 2.1.14 -> 2.1.14
* [new tag] 2.1.15 -> 2.1.15
* [new tag] 2.1.16 -> 2.1.16
* [new tag] 2.1.2 -> 2.1.2
* [new tag] 2.1.3 -> 2.1.3
* [new tag] 2.1.4 -> 2.1.4
* [new tag] 2.1.5 -> 2.1.5
* [new tag] 2.1.6 -> 2.1.6
* [new tag] 2.1.7 -> 2.1.7
* [new tag] 2.1.8 -> 2.1.8
* [new tag] 2.1.9 -> 2.1.9
* [new tag] 2.2.0 -> 2.2.0
* [new tag] 2.2.1 -> 2.2.1
* [new tag] 2.2.10 -> 2.2.10
* [new tag] 2.2.11 -> 2.2.11
* [new tag] 2.2.12 -> 2.2.12
* [new tag] 2.2.13 -> 2.2.13
* [new tag] 2.2.14 -> 2.2.14
* [new tag] 2.2.15 -> 2.2.15
* [new tag] 2.2.16 -> 2.2.16
* [new tag] 2.2.17 -> 2.2.17
* [new tag] 2.2.2 -> 2.2.2
* [new tag] 2.2.3 -> 2.2.3
* [new tag] 2.2.4 -> 2.2.4
* [new tag] 2.2.5 -> 2.2.5
* [new tag] 2.2.6 -> 2.2.6
* [new tag] 2.2.7 -> 2.2.7
* [new tag] 2.2.8 -> 2.2.8
* [new tag] 2.2.9 -> 2.2.9
* [new tag] 2.3.0 -> 2.3.0
* [new tag] 2.4.0 -> 2.4.0
* [new tag] 2.4.1 -> 2.4.1
* [new tag] 2.4.10 -> 2.4.10
* [new tag] 2.4.11 -> 2.4.11
* [new tag] 2.4.12 -> 2.4.12
* [new tag] 2.4.13 -> 2.4.13
* [new tag] 2.4.14 -> 2.4.14
* [new tag] 2.4.15 -> 2.4.15
* [new tag] 2.4.16 -> 2.4.16
* [new tag] 2.4.2 -> 2.4.2
* [new tag] 2.4.3 -> 2.4.3
* [new tag] 2.4.4 -> 2.4.4
* [new tag] 2.4.5 -> 2.4.5
* [new tag] 2.4.6 -> 2.4.6
* [new tag] 2.4.7 -> 2.4.7
* [new tag] 2.4.8 -> 2.4.8
* [new tag] 2.4.9 -> 2.4.9
* [new tag] 2.5.0 -> 2.5.0
* [new tag] 2.5.1 -> 2.5.1
* [new tag] 2.5.10 -> 2.5.10
* [new tag] 2.5.11 -> 2.5.11
* [new tag] 2.5.12 -> 2.5.12
* [new tag] 2.5.2 -> 2.5.2
* [new tag] 2.5.3 -> 2.5.3
* [new tag] 2.5.4 -> 2.5.4
* [new tag] 2.5.5 -> 2.5.5
* [new tag] 2.5.6 -> 2.5.6
* [new tag] 2.5.7 -> 2.5.7
* [new tag] 2.5.8 -> 2.5.8
* [new tag] 2.5.9 -> 2.5.9
* [new tag] 2.6.0 -> 2.6.0
* [new tag] 2.6.1 -> 2.6.1
* [new tag] 2.6.2 -> 2.6.2
* [new tag] 2.7.0 -> 2.7.0
* [new tag] 2.7.1 -> 2.7.1
* [new tag] 2.7.2 -> 2.7.2
* [new tag] 2.7.3 -> 2.7.3
* [new tag] 2.7.4 -> 2.7.4
* [new tag] 2.7.5 -> 2.7.5
* [new tag] 2.7.6 -> 2.7.6
* [new tag] 2.7.7 -> 2.7.7
* [new tag] 3.0.0 -> 3.0.0
* [new tag] 3.0.1 -> 3.0.1
* [new tag] 3.0.10 -> 3.0.10
* [new tag] 3.0.11 -> 3.0.11
* [new tag] 3.0.2 -> 3.0.2
* [new tag] 3.0.3 -> 3.0.3
* [new tag] 3.0.4 -> 3.0.4
* [new tag] 3.0.5 -> 3.0.5
* [new tag] 3.0.6 -> 3.0.6
* [new tag] 3.0.7 -> 3.0.7
* [new tag] 3.0.8 -> 3.0.8
* [new tag] 3.0.9 -> 3.0.9
* [new tag] 3.1.0 -> 3.1.0
* [new tag] 3.1.1 -> 3.1.1
* [new tag] 3.1.10 -> 3.1.10
* [new tag] 3.1.11 -> 3.1.11
* [new tag] 3.1.12 -> 3.1.12
* [new tag] 3.1.2 -> 3.1.2
* [new tag] 3.1.3 -> 3.1.3
* [new tag] 3.1.4 -> 3.1.4
* [new tag] 3.1.5 -> 3.1.5
* [new tag] 3.1.6 -> 3.1.6
* [new tag] 3.1.7 -> 3.1.7
* [new tag] 3.1.8 -> 3.1.8
* [new tag] 3.1.9 -> 3.1.9
* [new tag] 3.2.0 -> 3.2.0
* [new tag] 3.2.1 -> 3.2.1
* [new tag] 3.2.10 -> 3.2.10
* [new tag] 3.2.11 -> 3.2.11
* [new tag] 3.2.12 -> 3.2.12
* [new tag] 3.2.13 -> 3.2.13
* [new tag] 3.2.14 -> 3.2.14
* [new tag] 3.2.15 -> 3.2.15
* [new tag] 3.2.16 -> 3.2.16
* [new tag] 3.2.17 -> 3.2.17
* [new tag] 3.2.2 -> 3.2.2
* [new tag] 3.2.3 -> 3.2.3
* [new tag] 3.2.4 -> 3.2.4
* [new tag] 3.2.5 -> 3.2.5
* [new tag] 3.2.6 -> 3.2.6
* [new tag] 3.2.7 -> 3.2.7
* [new tag] 3.2.8 -> 3.2.8
* [new tag] 3.2.9 -> 3.2.9
* [new tag] 3.3.0 -> 3.3.0
* [new tag] 3.3.1 -> 3.3.1
* [new tag] 3.3.10 -> 3.3.10
* [new tag] 3.3.11 -> 3.3.11
* [new tag] 3.3.12 -> 3.3.12
* [new tag] 3.3.13 -> 3.3.13
* [new tag] 3.3.14 -> 3.3.14
* [new tag] 3.3.15 -> 3.3.15
* [new tag] 3.3.16 -> 3.3.16
* [new tag] 3.3.2 -> 3.3.2
* [new tag] 3.3.3 -> 3.3.3
* [new tag] 3.3.4 -> 3.3.4
* [new tag] 3.3.5 -> 3.3.5
* [new tag] 3.3.6 -> 3.3.6
* [new tag] 3.3.7 -> 3.3.7
* [new tag] 3.3.8 -> 3.3.8
* [new tag] 3.3.9 -> 3.3.9
* [new tag] 3.4.0 -> 3.4.0
* [new tag] 3.4.1 -> 3.4.1
* [new tag] 3.4.10 -> 3.4.10
* [new tag] 3.4.11 -> 3.4.11
* [new tag] 3.4.2 -> 3.4.2
* [new tag] 3.4.3 -> 3.4.3
* [new tag] 3.4.4 -> 3.4.4
* [new tag] 3.4.5 -> 3.4.5
* [new tag] 3.4.6 -> 3.4.6
* [new tag] 3.4.7 -> 3.4.7
* [new tag] 3.4.8 -> 3.4.8
* [new tag] 3.4.9 -> 3.4.9
* [new tag] 3.5.0 -> 3.5.0
* [new tag] 3.5.1 -> 3.5.1
* [new tag] 3.5.10 -> 3.5.10
* [new tag] 3.5.2 -> 3.5.2
* [new tag] 3.5.3 -> 3.5.3
* [new tag] 3.5.4 -> 3.5.4
* [new tag] 3.5.5 -> 3.5.5
* [new tag] 3.5.6 -> 3.5.6
* [new tag] 3.5.7 -> 3.5.7
* [new tag] 3.5.8 -> 3.5.8
* [new tag] 3.5.9 -> 3.5.9
* [new tag] 3.6.0 -> 3.6.0
* [new tag] 3.6.1 -> 3.6.1
* [new tag] 3.6.10 -> 3.6.10
* [new tag] 3.6.11 -> 3.6.11
* [new tag] 3.6.12 -> 3.6.12
* [new tag] 3.6.13 -> 3.6.13
* [new tag] 3.6.14 -> 3.6.14
* [new tag] 3.6.15 -> 3.6.15
* [new tag] 3.6.16 -> 3.6.16
* [new tag] 3.6.17 -> 3.6.17
* [new tag] 3.6.18 -> 3.6.18
* [new tag] 3.6.19 -> 3.6.19
* [new tag] 3.6.2 -> 3.6.2
* [new tag] 3.6.20 -> 3.6.20
* [new tag] 3.6.21 -> 3.6.21
* [new tag] 3.6.3 -> 3.6.3
* [new tag] 3.6.4 -> 3.6.4
* [new tag] 3.6.5 -> 3.6.5
* [new tag] 3.6.6 -> 3.6.6
* [new tag] 3.6.7 -> 3.6.7
* [new tag] 3.6.8 -> 3.6.8
* [new tag] 3.6.9 -> 3.6.9
* [new tag] 4.0.0 -> 4.0.0
* [new tag] 4.0.1 -> 4.0.1
* [new tag] 4.0.10 -> 4.0.10
* [new tag] 4.0.11 -> 4.0.11
* [new tag] 4.0.12 -> 4.0.12
* [new tag] 4.0.13 -> 4.0.13
* [new tag] 4.0.14 -> 4.0.14
* [new tag] 4.0.15 -> 4.0.15
* [new tag] 4.0.16 -> 4.0.16
* [new tag] 4.0.17 -> 4.0.17
* [new tag] 4.0.18 -> 4.0.18
* [new tag] 4.0.19 -> 4.0.19
* [new tag] 4.0.2 -> 4.0.2
* [new tag] 4.0.20 -> 4.0.20
* [new tag] 4.0.21 -> 4.0.21
* [new tag] 4.0.22 -> 4.0.22
* [new tag] 4.0.23 -> 4.0.23
* [new tag] 4.0.24 -> 4.0.24
* [new tag] 4.0.25 -> 4.0.25
* [new tag] 4.0.26 -> 4.0.26
* [new tag] 4.0.3 -> 4.0.3
* [new tag] 4.0.4 -> 4.0.4
* [new tag] 4.0.5 -> 4.0.5
* [new tag] 4.0.6 -> 4.0.6
* [new tag] 4.0.7 -> 4.0.7
* [new tag] 4.0.8 -> 4.0.8
* [new tag] 4.0.9 -> 4.0.9
HEAD is now at 36fc91b7c Merge pull request #15607 from MikeMcQuaid/eval_all_api
==> Homebrew has enabled anonymous aggregate formula and cask analytics.
Read the analytics documentation (and how to opt-out) here:
https://docs.brew.sh/Analytics
No analytics have been recorded yet (nor will be during this `brew` run).
==> Homebrew is run entirely by unpaid volunteers. Please consider donating:
https://github.com/Homebrew/brew#donations
==> Tapping homebrew/core
Cloning into '/Users/distiller/deps/Library/Taps/homebrew/homebrew-core'...
remote: Enumerating objects: 12258, done.
remote: Counting objects: 100% (12251/12251), done.
remote: fatal: object 3d33edf328ac7e52d9c1c025df61e29c001a006c cannot be read
remote: aborting due to possible repository corruption on the remote side.
fatal: early EOF
fatal: fetch-pack: invalid index-pack output
Error: Failure while executing; `git clone https://github.com/Homebrew/homebrew-core /Users/distiller/deps/Library/Taps/homebrew/homebrew-core --origin=origin --template=` exited with 128.
==> Tapping homebrew/core
Cloning into '/Users/distiller/deps/Library/Taps/homebrew/homebrew-core'...
remote: Enumerating objects: 12258, done.
remote: Counting objects: 100% (12251/12251), done.
remote: fatal: object 3d33edf328ac7e52d9c1c025df61e29c001a006c cannot be read
remote: aborting due to possible repository corruption on the remote side.
fatal: early EOF
fatal: fetch-pack: invalid index-pack output
Error: Failure while executing; `git clone https://github.com/Homebrew/homebrew-core /Users/distiller/deps/Library/Taps/homebrew/homebrew-core --origin=origin --template=` exited with 128.
Exited with code exit status 1
CircleCI received exit code 1
```
| non_infrastructure | macos build macos intel fails intermittently when git clone homebrew core - problem description the macos build macos intel job in circleci fails intermittently with the following failure an example is cloning into users distiller deps library taps homebrew homebrew core remote enumerating objects done remote counting objects done remote fatal object cannot be read remote aborting due to possible repository corruption on the remote side fatal early eof fatal fetch pack invalid index pack output error failure while executing git clone users distiller deps library taps homebrew homebrew core origin origin template exited with system information circleci cmake log bash bin bash login eo pipefail export path deps bin deps opt bison bin deps opt flex bin path mkdir p ccache export ccache dir pwd ccache ccache sz m brew install openssl brew link overwrite force openssl export path users distiller deps opt openssl bin path export openssl root dir brew prefix openssl cmake b build debug gninja dtreat warnings as errors denable all warnings dcmake build type debug dcmake prefix path deps dcmake cxx compiler launcher ccache dflex include dir deps opt flex include ninja c build debug ccache s cacheable calls hits direct preprocessed misses local storage cache size gib hits misses statistics zeroed set cache size limit to gib initialized empty git repository in users distiller deps git remote enumerating objects done remote counting objects done remote compressing objects done remote total delta reused delta pack reused receiving objects mib mib s done running brew update auto update resolving deltas done from ssh github com homebrew brew master origin master head is now at merge pull request from mikemcquaid eval all api homebrew has enabled anonymous aggregate formula and cask analytics read the analytics documentation and how to opt out here no analytics have been recorded yet nor will be during this brew run homebrew is run entirely by unpaid volunteers please consider donating tapping homebrew core cloning into users distiller deps library taps homebrew homebrew core remote enumerating objects done remote counting objects done remote fatal object cannot be read remote aborting due to possible repository corruption on the remote side fatal early eof fatal fetch pack invalid index pack output error failure while executing git clone users distiller deps library taps homebrew homebrew core origin origin template exited with tapping homebrew core cloning into users distiller deps library taps homebrew homebrew core remote enumerating objects done remote counting objects done remote fatal object cannot be read remote aborting due to possible repository corruption on the remote side fatal early eof fatal fetch pack invalid index pack output error failure while executing git clone users distiller deps library taps homebrew homebrew core origin origin template exited with exited with code exit status circleci received exit code | 0 |
754,390 | 26,385,320,962 | IssuesEvent | 2023-01-12 11:49:23 | gamefreedomgit/Maelstrom | https://api.github.com/repos/gamefreedomgit/Maelstrom | closed | [NPC] Rescued Survivor | NPC Movement Priority: Low Status: Confirmed | Rescued Survivors in the Loading Room should exit the portal and likely go over to where the rest of the Rescued Survivors are, see video for reference: https://youtu.be/NiB0iAd6LVw?t=507
Currently there are a ton of NPCs all stacked on top of each other doing nothing:

This will likely cause infinite NPC spawn and bring down server load over time | 1.0 | [NPC] Rescued Survivor - Rescued Survivors in the Loading Room should exit the portal and likely go over to where the rest of the Rescued Survivors are, see video for reference: https://youtu.be/NiB0iAd6LVw?t=507
Currently there are a ton of NPCs all stacked on top of each other doing nothing:

This will likely cause infinite NPC spawn and bring down server load over time | non_infrastructure | rescued survivor rescued survivors in the loading room should exit the portal and likely go over to where the rest of the rescued survivors are see video for reference currently there are a ton of npcs all stacked on top of each other doing nothing this will likely cause infinite npc spawn and bring down server load over time | 0 |
24,954 | 17,937,361,239 | IssuesEvent | 2021-09-10 17:05:27 | hackforla/website | https://api.github.com/repos/hackforla/website | closed | Research How to Host a Replica of Our Website | Research role: front end Size: Large Collaborative Work time sensitive Status: Updated Feature: Infrastructure | ### Overview
We would like to invite designers to be more involved in reviewing pull requests but realized that it'll be a learning curve for some volunteers.
Our team came up with a possible solution to build and run a replica of our website so that everyone can easily make changes to the replica without directly making changes to our actual website/our repo.
### Action Items
- [x] Research to see if it is feasible.
- How do other companies do it?
- How can we implement it in our team?
- [x] Research what are the steps?
- [x] Clone the Hack for LA Website and deploy to the app of choice. (As of 8/17/21, we chose Netlify.)
- [x] Test how the application works with forks.
- [ ] Discuss with the design and development team to see if this mock-up/prototype is workable.
### Resources/Instructions
- [Render](https://render.com/)
- [Travis-CI](https://www.travis-ci.com/)
- [Travis-CI Tutorial](https://docs.travis-ci.com/user/tutorial/)
- [Jekyll Doc on Travis CI](https://jekyllrb.com/docs/continuous-integration/travis-ci/)
- [squash.io](https://www.squash.io/)
- [Continuous Deployment with Netlify](https://docs.netlify.com/configure-builds/get-started/#build-settings)
- [Staging Environment with Netlify](https://www.unixtutorial.org/staging-environment-with-netlify/)
- [Migrating Your Jekyll Site to Netlify](https://www.netlify.com/blog/2017/05/11/migrating-your-jekyll-site-to-netlify/)
- [Documentation - About Netlify's Split Testing](https://docs.netlify.com/site-deploys/split-testing/)
- [YouTube - About Netlify's Split Testing](https://www.youtube.com/watch?v=r0ZA0zhLjkE)
| 1.0 | Research How to Host a Replica of Our Website - ### Overview
We would like to invite designers to be more involved in reviewing pull requests but realized that it'll be a learning curve for some volunteers.
Our team came up with a possible solution to build and run a replica of our website so that everyone can easily make changes to the replica without directly making changes to our actual website/our repo.
### Action Items
- [x] Research to see if it is feasible.
- How do other companies do it?
- How can we implement it in our team?
- [x] Research what are the steps?
- [x] Clone the Hack for LA Website and deploy to the app of choice. (As of 8/17/21, we chose Netlify.)
- [x] Test how the application works with forks.
- [ ] Discuss with the design and development team to see if this mock-up/prototype is workable.
### Resources/Instructions
- [Render](https://render.com/)
- [Travis-CI](https://www.travis-ci.com/)
- [Travis-CI Tutorial](https://docs.travis-ci.com/user/tutorial/)
- [Jekyll Doc on Travis CI](https://jekyllrb.com/docs/continuous-integration/travis-ci/)
- [squash.io](https://www.squash.io/)
- [Continuous Deployment with Netlify](https://docs.netlify.com/configure-builds/get-started/#build-settings)
- [Staging Environment with Netlify](https://www.unixtutorial.org/staging-environment-with-netlify/)
- [Migrating Your Jekyll Site to Netlify](https://www.netlify.com/blog/2017/05/11/migrating-your-jekyll-site-to-netlify/)
- [Documentation - About Netlify's Split Testing](https://docs.netlify.com/site-deploys/split-testing/)
- [YouTube - About Netlify's Split Testing](https://www.youtube.com/watch?v=r0ZA0zhLjkE)
| infrastructure | research how to host a replica of our website overview we would like to invite designers to be more involved in reviewing pull requests but realized that it ll be a learning curve for some volunteers our team came up with a possible solution to build and run a replica of our website so that everyone can easily make changes to the replica without directly making changes to our actual website our repo action items research to see if it is feasible how do other companies do it how can we implement it in our team research what are the steps clone the hack for la website and deploy to the app of choice as of we chose netlify test how the application works with forks discuss with the design and development team to see if this mock up prototype is workable resources instructions | 1 |
21,170 | 14,407,897,806 | IssuesEvent | 2020-12-03 22:40:47 | dotnet/aspnetcore | https://api.github.com/repos/dotnet/aspnetcore | closed | Download, configure and run the official source code of ASP.NET Core 3.x, report this error after executing the restore command | Needs: Attention :wave: area-infrastructure |
C:\Users\Administrator\.nuget\packages\microsoft.build.tasks.git\1.1.0-beta-20206-02\build\Microsoft.Build.Tasks.Git.targets(24,5): error : Unable to locate repository with working directory that contains directory 'D:\aspnetcore-master\src\Analyzers\Internal.AspNetCore.Analyzers\src'. [D:\aspnetcore-master\src\Analyzers\Internal.AspNetCore.Analyzers\src\Internal.AspNetCore.Analyzers.csproj]
C:\Users\Administrator\.nuget\packages\microsoft.build.tasks.git\1.1.0-beta-20206-02\build\Microsoft.Build.Tasks.Git.targets(47,5): error : Unable to locate repository with working directory that contains directory 'D:\aspnetcore-master\src\Analyzers\Internal.AspNetCore.Analyzers\src'. [D:\aspnetcore-master\src\Analyzers\Internal.AspNetCore.Analyzers\src\Internal.AspNetCore.Analyzers.csproj]
C:\Users\Administrator\.nuget\packages\microsoft.sourcelink.common\1.1.0-beta-20206-02\build\Microsoft.SourceLink.Common.targets(52,5): error : Source control information is not available - the generated source link is empty. [D:\aspnetcore-master\src\Analyzers\Internal.AspNetCore.Analyzers\src\Internal.AspNetCore.Analyzers.csproj]
Internal.AspNetCore.Analyzers -> D:\aspnetcore-master\artifacts\bin\Internal.AspNetCore.Analyzers\Release\netstandard1.3\Internal.AspNetCore.Analyzers.dll
C:\Users\Administrator\.nuget\packages\microsoft.build.tasks.git\1.1.0-beta-20206-02\build\Microsoft.Build.Tasks.Git.targets(24,5): error : Unable to locate repository with working directory that contains directory 'D:\aspnetcore-master\eng\tools\RepoTasks'. [D:\aspnetcore-master\eng\tools\RepoTasks\RepoTasks.csproj]
C:\Users\Administrator\.nuget\packages\microsoft.build.tasks.git\1.1.0-beta-20206-02\build\Microsoft.Build.Tasks.Git.targets(47,5): error : Unable to locate repository with working directory that contains directory 'D:\aspnetcore-master\eng\tools\RepoTasks'. [D:\aspnetcore-master\eng\tools\RepoTasks\RepoTasks.csproj]
C:\Users\Administrator\.nuget\packages\microsoft.sourcelink.common\1.1.0-beta-20206-02\build\Microsoft.SourceLink.Common.targets(52,5): error : Source control information is not available - the generated source link is empty. [D:\aspnetcore-master\eng\tools\RepoTasks\RepoTasks.csproj]
RepoTasks -> D:\aspnetcore-master\artifacts\bin\RepoTasks\Release\net5.0\RepoTasks.dll
C:\Users\Administrator\.nuget\packages\microsoft.build.tasks.git\1.1.0-beta-20206-02\build\Microsoft.Build.Tasks.Git.targets(24,5): error : Unable to locate repository with working directory that contains directory 'D:\aspnetcore-master\eng\tools\RepoTasks'. [D:\aspnetcore-master\eng\tools\RepoTasks\RepoTasks.csproj]
C:\Users\Administrator\.nuget\packages\microsoft.build.tasks.git\1.1.0-beta-20206-02\build\Microsoft.Build.Tasks.Git.targets(47,5): error : Unable to locate repository with working directory that contains directory 'D:\aspnetcore-master\eng\tools\RepoTasks'. [D:\aspnetcore-master\eng\tools\RepoTasks\RepoTasks.csproj]
C:\Users\Administrator\.nuget\packages\microsoft.sourcelink.common\1.1.0-beta-20206-02\build\Microsoft.SourceLink.Common.targets(52,5): error : Source control information is not available - the generated source link is empty. [D:\aspnetcore-master\eng\tools\RepoTasks\RepoTasks.csproj]
RepoTasks -> D:\aspnetcore-master\artifacts\bin\RepoTasks\Release\net472\RepoTasks.dll
Build failed.
| 1.0 | Download, configure and run the official source code of ASP.NET Core 3.x, report this error after executing the restore command -
C:\Users\Administrator\.nuget\packages\microsoft.build.tasks.git\1.1.0-beta-20206-02\build\Microsoft.Build.Tasks.Git.targets(24,5): error : Unable to locate repository with working directory that contains directory 'D:\aspnetcore-master\src\Analyzers\Internal.AspNetCore.Analyzers\src'. [D:\aspnetcore-master\src\Analyzers\Internal.AspNetCore.Analyzers\src\Internal.AspNetCore.Analyzers.csproj]
C:\Users\Administrator\.nuget\packages\microsoft.build.tasks.git\1.1.0-beta-20206-02\build\Microsoft.Build.Tasks.Git.targets(47,5): error : Unable to locate repository with working directory that contains directory 'D:\aspnetcore-master\src\Analyzers\Internal.AspNetCore.Analyzers\src'. [D:\aspnetcore-master\src\Analyzers\Internal.AspNetCore.Analyzers\src\Internal.AspNetCore.Analyzers.csproj]
C:\Users\Administrator\.nuget\packages\microsoft.sourcelink.common\1.1.0-beta-20206-02\build\Microsoft.SourceLink.Common.targets(52,5): error : Source control information is not available - the generated source link is empty. [D:\aspnetcore-master\src\Analyzers\Internal.AspNetCore.Analyzers\src\Internal.AspNetCore.Analyzers.csproj]
Internal.AspNetCore.Analyzers -> D:\aspnetcore-master\artifacts\bin\Internal.AspNetCore.Analyzers\Release\netstandard1.3\Internal.AspNetCore.Analyzers.dll
C:\Users\Administrator\.nuget\packages\microsoft.build.tasks.git\1.1.0-beta-20206-02\build\Microsoft.Build.Tasks.Git.targets(24,5): error : Unable to locate repository with working directory that contains directory 'D:\aspnetcore-master\eng\tools\RepoTasks'. [D:\aspnetcore-master\eng\tools\RepoTasks\RepoTasks.csproj]
C:\Users\Administrator\.nuget\packages\microsoft.build.tasks.git\1.1.0-beta-20206-02\build\Microsoft.Build.Tasks.Git.targets(47,5): error : Unable to locate repository with working directory that contains directory 'D:\aspnetcore-master\eng\tools\RepoTasks'. [D:\aspnetcore-master\eng\tools\RepoTasks\RepoTasks.csproj]
C:\Users\Administrator\.nuget\packages\microsoft.sourcelink.common\1.1.0-beta-20206-02\build\Microsoft.SourceLink.Common.targets(52,5): error : Source control information is not available - the generated source link is empty. [D:\aspnetcore-master\eng\tools\RepoTasks\RepoTasks.csproj]
RepoTasks -> D:\aspnetcore-master\artifacts\bin\RepoTasks\Release\net5.0\RepoTasks.dll
C:\Users\Administrator\.nuget\packages\microsoft.build.tasks.git\1.1.0-beta-20206-02\build\Microsoft.Build.Tasks.Git.targets(24,5): error : Unable to locate repository with working directory that contains directory 'D:\aspnetcore-master\eng\tools\RepoTasks'. [D:\aspnetcore-master\eng\tools\RepoTasks\RepoTasks.csproj]
C:\Users\Administrator\.nuget\packages\microsoft.build.tasks.git\1.1.0-beta-20206-02\build\Microsoft.Build.Tasks.Git.targets(47,5): error : Unable to locate repository with working directory that contains directory 'D:\aspnetcore-master\eng\tools\RepoTasks'. [D:\aspnetcore-master\eng\tools\RepoTasks\RepoTasks.csproj]
C:\Users\Administrator\.nuget\packages\microsoft.sourcelink.common\1.1.0-beta-20206-02\build\Microsoft.SourceLink.Common.targets(52,5): error : Source control information is not available - the generated source link is empty. [D:\aspnetcore-master\eng\tools\RepoTasks\RepoTasks.csproj]
RepoTasks -> D:\aspnetcore-master\artifacts\bin\RepoTasks\Release\net472\RepoTasks.dll
Build failed.
| infrastructure | download configure and run the official source code of asp net core x report this error after executing the restore command 下载、配置、运行 asp net core x 官方源码,执行restore命令之后报此错误!! c users administrator nuget packages microsoft build tasks git beta build microsoft build tasks git targets error unable to locate repository with working directory that contains directory d aspnetcore master src analyzers internal aspnetcore analyzers src c users administrator nuget packages microsoft build tasks git beta build microsoft build tasks git targets error unable to locate repository with working directory that contains directory d aspnetcore master src analyzers internal aspnetcore analyzers src c users administrator nuget packages microsoft sourcelink common beta build microsoft sourcelink common targets error source control information is not available the generated source link is empty internal aspnetcore analyzers d aspnetcore master artifacts bin internal aspnetcore analyzers release internal aspnetcore analyzers dll c users administrator nuget packages microsoft build tasks git beta build microsoft build tasks git targets error unable to locate repository with working directory that contains directory d aspnetcore master eng tools repotasks c users administrator nuget packages microsoft build tasks git beta build microsoft build tasks git targets error unable to locate repository with working directory that contains directory d aspnetcore master eng tools repotasks c users administrator nuget packages microsoft sourcelink common beta build microsoft sourcelink common targets error source control information is not available the generated source link is empty repotasks d aspnetcore master artifacts bin repotasks release repotasks dll c users administrator nuget packages microsoft build tasks git beta build microsoft build tasks git targets error unable to locate repository with working directory that contains directory d aspnetcore master eng tools repotasks c users administrator nuget packages microsoft build tasks git beta build microsoft build tasks git targets error unable to locate repository with working directory that contains directory d aspnetcore master eng tools repotasks c users administrator nuget packages microsoft sourcelink common beta build microsoft sourcelink common targets error source control information is not available the generated source link is empty repotasks d aspnetcore master artifacts bin repotasks release repotasks dll build failed | 1
2,405 | 3,669,208,184 | IssuesEvent | 2016-02-21 02:53:26 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Microsoft.CSharp fails to build when running MSBuild on Mono | bug Infrastructure X-Plat | When using build.sh on Linux Microsoft.CSharp runs into some build issues. I am going to disable it for now and follow up with this issue. | 1.0 | Microsoft.CSharp fails to build when running MSBuild on Mono - When using build.sh on Linux Microsoft.CSharp runs into some build issues. I am going to disable it for now and follow up with this issue. | infrastructure | microsoft csharp fails to build when running msbuild on mono when using build sh on linux microsoft csharp runs into some build issues i am going to disable it for now and follow up with this issue | 1 |
66,410 | 16,609,544,730 | IssuesEvent | 2021-06-02 09:46:26 | Crocoblock/suggestions | https://api.github.com/repos/Crocoblock/suggestions | closed | Problem displaying discount percentage on the single page of the jet woobuilder plugin for products that do not have a discount. | JetWooBuilder | When we use this widget (https://prnt.sc/1363q97) to display the discount percentage, if the product does not have a discount, the widget will remain empty. https://prnt.sc/1363mf2
Please correct this distance. Thanks
| 1.0 | Problem displaying discount percentage on the single page of the jet woobuilder plugin for products that do not have a discount. - When we use this widget (https://prnt.sc/1363q97) to display the discount percentage, if the product does not have a discount, the widget will remain empty. https://prnt.sc/1363mf2
Please correct this distance. Thanks
| non_infrastructure | problem displaying discount percentage on the single page of the jet woobuilder plugin for products that do not have a discount when we use this widget to display the discount percentage if the product does not have a discount the widget will remain empty please correct this distance thanks | 0 |
499,325 | 14,445,211,894 | IssuesEvent | 2020-12-07 22:32:42 | ImagingDataCommons/IDC-WebApp | https://api.github.com/repos/ImagingDataCommons/IDC-WebApp | opened | after export of large manifest of cohort default settings in export panel are set to files not BigQuery | bug cohorts priority | 
| 1.0 | after export of large manifest of cohort default settings in export panel are set to files not BigQuery - 
| non_infrastructure | after export of large manifest of cohort default settings in export panel are set to files not bigquery | 0 |
14,990 | 11,286,722,638 | IssuesEvent | 2020-01-16 01:41:30 | sass/sass | https://api.github.com/repos/sass/sass | closed | Create a tool to migrate users to @use | infrastructure | [The new module system](https://github.com/sass/language/blob/master/proposal/module-system.md) is on the horizon, and we want to make it as easy as possible for users to migrate to it. Probably the biggest value-add for this migration would be creating a tool that takes a Sass entrypoint and converts it and everything it imports to `@use`. This would include:
* Replacing top-level `@import`s with `@use`s.
* Adding new `@use`s for variables, functions, and mixins that are referred to without explicit imports (including core library functions).
* Adding namespaces to variable, function, and mixin uses.
* Replacing overridden variables with `with` blocks.
* Replacing nested `@import`s with `@include load-css()`.
* Adding namespaces to `get-function()` calls whose referent can be statically determined, and printing warnings for others.
In order to do this, the migration tool would probably need to partially evaluate the Sass files in order to determine where each variable, mixin, and function was originally defined. It could then use source span information to modify code in-place without disrupting the rest of the stylesheet. | 1.0 | Create a tool to migrate users to @use - [The new module system](https://github.com/sass/language/blob/master/proposal/module-system.md) is on the horizon, and we want to make it as easy as possible for users to migrate to it. Probably the biggest value-add for this migration would be creating a tool that takes a Sass entrypoint and converts it and everything it imports to `@use`. This would include:
* Replacing top-level `@import`s with `@use`s.
* Adding new `@use`s for variables, functions, and mixins that are referred to without explicit imports (including core library functions).
* Adding namespaces to variable, function, and mixin uses.
* Replacing overridden variables with `with` blocks.
* Replacing nested `@import`s with `@include load-css()`.
* Adding namespaces to `get-function()` calls whose referent can be statically determined, and printing warnings for others.
In order to do this, the migration tool would probably need to partially evaluate the Sass files in order to determine where each variable, mixin, and function was originally defined. It could then use source span information to modify code in-place without disrupting the rest of the stylesheet. | infrastructure | create a tool to migrate users to use is on the horizon and we want to make it as easy as possible for users to migrate to it probably the biggest value add for this migration would be creating a tool that takes a sass entrypoint and converts it and everything it imports to use this would include replacing top level import s with use s adding new use s for variables functions and mixins that are referred to without explicit imports including core library functions adding namespaces to variable function and mixin uses replacing overridden variables with with blocks replacing nested import s with include load css adding namespaces to get function calls whose referent can be statically determined and printing warnings for others in order to do this the migration tool would probably need to partially evaluate the sass files in order to determine where each variable mixin and function was originally defined it could then use source span information to modify code in place without disrupting the rest of the stylesheet | 1 |
24,792 | 17,779,540,048 | IssuesEvent | 2021-08-31 01:15:50 | wanted2/caineng.in | https://api.github.com/repos/wanted2/caineng.in | opened | 3 điều nên làm để quản lý tài khoản AWS | enhancement project-management programming infrastructure | **Is your feature request related to a problem? Please describe.**
Cách đây khá lâu, tôi có nhận lời nhờ của bạn tôi để cài đặt mấy cái xử lý log cho server của bạn.
Ngay phút đầu tiên bạn đưa cho tôi tài khoản root của bạn và yêu cầu tôi sử dụng để làm việc.
Tôi hỏi bạn, bạn có biết uy lực của tài khoản root nó lớn thế nào không mà lại dễ dàng giao cho người ngoài vậy?
Đồng thời bạn có hiểu tôi chỉ là người làm giúp một task nhỏ thì đâu cần tài khoản root?
Quản lý tài khoản là việc không hề nhỏ, và 3 việc sau chắc chắn sẽ giúp bạn:
1. Thiết lập nhiều lớp bảo mật tài khoản root.
2. Thiết lập hệ thống phân quyền chi tiết.
3. Đừng bao giờ nói "tôi không biết dịch vụ AWS SSO là cái gì?" Bạn cần hiểu AWS SSO bảo mật hơn IAM nhiều.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| 1.0 | 3 điều nên làm để quản lý tài khoản AWS - **Is your feature request related to a problem? Please describe.**
Cách đây khá lâu, tôi có nhận lời nhờ của bạn tôi để cài đặt mấy cái xử lý log cho server của bạn.
Ngay phút đầu tiên bạn đưa cho tôi tài khoản root của bạn và yêu cầu tôi sử dụng để làm việc.
Tôi hỏi bạn, bạn có biết uy lực của tài khoản root nó lớn thế nào không mà lại dễ dàng giao cho người ngoài vậy?
Đồng thời bạn có hiểu tôi chỉ là người làm giúp một task nhỏ thì đâu cần tài khoản root?
Quản lý tài khoản là việc không hề nhỏ, và 3 việc sau chắc chắn sẽ giúp bạn:
1. Thiết lập nhiều lớp bảo mật tài khoản root.
2. Thiết lập hệ thống phân quyền chi tiết.
3. Đừng bao giờ nói "tôi không biết dịch vụ AWS SSO là cái gì?" Bạn cần hiểu AWS SSO bảo mật hơn IAM nhiều.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| infrastructure | điều nên làm để quản lý tài khoản aws is your feature request related to a problem please describe cách đây khá lâu tôi có nhận lời nhờ của bạn tôi để cài đặt mấy cái xử lý log cho server của bạn ngay phút đầu tiên bạn đưa cho tôi tài khoản root của bạn và yêu cầu tôi sử dụng để làm việc tôi hỏi bạn bạn có biết uy lực của tài khoản root nó lớn thế nào không mà lại dễ dàng giao cho người ngoài vậy đồng thời bạn có hiểu tôi chỉ là người làm giúp một task nhỏ thì đâu cần tài khoản root quản lý tài khoản là việc không hề nhỏ và việc sau chắc chắn sẽ giúp bạn thiết lập nhiều lớp bảo mật tài khoản root thiết lập hệ thống phân quyền chi tiết đừng bao giờ nói tôi không biết dịch vụ aws sso là cái gì bạn cần hiểu aws sso bảo mật hơn iam nhiều describe the solution you d like a clear and concise description of what you want to happen describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here | 1 |
139,062 | 11,236,290,303 | IssuesEvent | 2020-01-09 10:09:16 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | MetadataRaftGroupTest.when_metadataClusterNodeFallsFarBehind_then_itInstallsSnapshot | Module: CP Subsystem Source: Internal Team: Core Type: Test-Failure | %subj% test failure on PR builder (master branch):
```
Stacktrace
com.hazelcast.cp.exception.StaleAppendRequestException
at com.hazelcast.cp.internal.raft.impl.RaftNodeImpl$HeartbeatTask.innerRun(RaftNodeImpl.java:1000)
at com.hazelcast.cp.internal.raft.impl.task.RaftNodeStatusAwareTask.run(RaftNodeStatusAwareTask.java:47)
at com.hazelcast.cp.internal.util.PartitionSpecificRunnableAdaptor.run(PartitionSpecificRunnableAdaptor.java:41)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:163)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:159)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:127)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:110)
at ------ submitted from ------.(Unknown Source)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolve(InvocationFuture.java:137)
at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80)
```
http://jenkins.hazelcast.com/job/Hazelcast-pr-builder/3392/testReport/junit/com.hazelcast.cp.internal/MetadataRaftGroupTest/when_metadataClusterNodeFallsFarBehind_then_itInstallsSnapshot/ | 1.0 | MetadataRaftGroupTest.when_metadataClusterNodeFallsFarBehind_then_itInstallsSnapshot - %subj% test failure on PR builder (master branch):
```
Stacktrace
com.hazelcast.cp.exception.StaleAppendRequestException
at com.hazelcast.cp.internal.raft.impl.RaftNodeImpl$HeartbeatTask.innerRun(RaftNodeImpl.java:1000)
at com.hazelcast.cp.internal.raft.impl.task.RaftNodeStatusAwareTask.run(RaftNodeStatusAwareTask.java:47)
at com.hazelcast.cp.internal.util.PartitionSpecificRunnableAdaptor.run(PartitionSpecificRunnableAdaptor.java:41)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:163)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:159)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:127)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:110)
at ------ submitted from ------.(Unknown Source)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolve(InvocationFuture.java:137)
at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80)
```
http://jenkins.hazelcast.com/job/Hazelcast-pr-builder/3392/testReport/junit/com.hazelcast.cp.internal/MetadataRaftGroupTest/when_metadataClusterNodeFallsFarBehind_then_itInstallsSnapshot/ | non_infrastructure | metadataraftgrouptest when metadataclusternodefallsfarbehind then itinstallssnapshot subj test failure on pr builder master branch stacktrace com hazelcast cp exception staleappendrequestexception at com hazelcast cp internal raft impl raftnodeimpl heartbeattask innerrun raftnodeimpl java at com hazelcast cp internal raft impl task raftnodestatusawaretask run raftnodestatusawaretask java at com hazelcast cp internal util partitionspecificrunnableadaptor run partitionspecificrunnableadaptor java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread run operationthread java at submitted from unknown source at com hazelcast spi impl operationservice impl invocationfuture resolve invocationfuture java at com hazelcast spi impl abstractinvocationfuture run abstractinvocationfuture java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast internal util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java | 0 |
15,680 | 20,241,202,948 | IssuesEvent | 2022-02-14 09:27:34 | Creators-of-Create/Create | https://api.github.com/repos/Creators-of-Create/Create | closed | Ticking entity Mechanical Plough fatal crash | bug compatibility | Crash report: https://pastebin.com/UAVCmUZu
Building [this](https://imgur.com/a/lIDT7pk) contraption and puting a minecart or a furnace minecart crashes the game. The chassis range is not the case, tested both with 8 and 1 setting, both in creative and survival. (reproducable) | True | Ticking entity Mechanical Plough fatal crash - Crash report: https://pastebin.com/UAVCmUZu
Building [this](https://imgur.com/a/lIDT7pk) contraption and puting a minecart or a furnace minecart crashes the game. The chassis range is not the case, tested both with 8 and 1 setting, both in creative and survival. (reproducable) | non_infrastructure | ticking entity mechanical plough fatal crash crash report building contraption and puting a minecart or a furnace minecart crashes the game the chassis range is not the case tested both with and setting both in creative and survival reproducable | 0 |
27,351 | 21,650,370,759 | IssuesEvent | 2022-05-06 08:43:52 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Temporarily disabling Win-ARM64 testing in PR | arch-arm64 os-windows area-Infrastructure-coreclr tracking in-pr | Due to decreased capacity in our Helix Win-arm64 queue while we solve some product-impacting imaging issues, we are disabling testing windows arm64 against runtime tests for the time being. Tests will still run in rolling builds of the runtime twice a day and other stress flavors. We will update this issue as we get more information regarding ETAs and test availability. If you expect your change to be potentially impactful to ARM64, please exercise caution and test offline if possible or abstain from merging until this work item is done.
Context: https://github.com/dotnet/runtime/pull/67771
| 1.0 | Temporarily disabling Win-ARM64 testing in PR - Due to decreased capacity in our Helix Win-arm64 queue while we solve some product-impacting imaging issues, we are disabling testing windows arm64 against runtime tests for the time being. Tests will still run in rolling builds of the runtime twice a day and other stress flavors. We will update this issue as we get more information regarding ETAs and test availability. If you expect your change to be potentially impactful to ARM64, please exercise caution and test offline if possible or abstain from merging until this work item is done.
Context: https://github.com/dotnet/runtime/pull/67771
| infrastructure | temporarily disabling win testing in pr due to decreased capacity in our helix win queue while we solve some product impacting imaging issues we are disabling testing windows against runtime tests for the time being tests will still run in rolling builds of the runtime twice a day and other stress flavors we will update this issue as we get more information regarding etas and test availability if you expect your change to be potentially impactful to please exercise caution and test offline if possible or abstain from merging until this work item is done context | 1 |
148,925 | 23,400,156,198 | IssuesEvent | 2022-08-12 07:02:50 | chapel-lang/chapel | https://api.github.com/repos/chapel-lang/chapel | opened | moveFrom, moveInitialize: the compiler needs to be aware | type: Design area: Compiler area: Libraries / Modules area: Language | These functions are also discussed in #20330 and #20328.
The charter of the `Memory.Initialization` module is "[to enable the user] to implement collections in a manner similar to those implemented by the Chapel standard modules (such as List or Map)."
I claim that at present the functions `moveFrom` and `moveInitialize` do not achieve this purpose.
#### Example 1: `LinkedList.pop_front()`
I expect pop_front() to pass to the caller the ownership of the element being popped:
```chpl
var oldFront = ... the front listNode ...;
var ret = moveFrom(oldFront.data);
delete oldFront;
return ret;
```
However, it does not -- and cannot do that. Because `delete oldFront` deinitializes `oldFront.data` among other things - because it does not know that `oldFront.data` has been "consumed". So if I called `moveFrom(oldFront.data)`, the deinitializer would deinitialize invalid memory contents.
#### Example 2: `chpl__hashtable.clearSlot()`
I expect the `out val` formal to take over the ownership of `tableEntry.val`:
```chpl
proc chpl__hashtable.clearSlot(ref tableEntry, out key: keyType, out val: valType) {
moveInitialize(key, moveFrom(tableEntry.key));
moveInitialize(val, moveFrom(tableEntry.val));
....
}
```
However, it does not do that. If it did, the compiler would not know that `key` and `value` would be initialized and would default-initialize them prior to the moveInitialize() calls. So moveInitialize() would clobber objects without deinitializing them properly. Instead, `key` and `value` get initialized using `=`:
```chpl
proc chpl__hashtable.clearSlot(ref tableEntry, out key: keyType, out val: valType) {
key = moveFrom(tableEntry.key);
val = moveFrom(tableEntry.val);
....
}
```
The problem with that is that the compiler inserts calls to chpl__coerceMove(), passing in the results of `moveFrom` and using the returned values to initialize `key`/`value`. chpl__coerceMove() is a Really Big Function and is completely unnecessary here.
In summary, this issue calls for compiler awareness of initialization and deinitialization performed by `moveFrom` and `moveInitialize`. | 1.0 | moveFrom, moveInitialize: the compiler needs to be aware - These functions are also discussed in #20330 and #20328.
The charter of the `Memory.Initialization` module is "[to enable the user] to implement collections in a manner similar to those implemented by the Chapel standard modules (such as List or Map)."
I claim that at present the functions `moveFrom` and `moveInitialize` do not achieve this purpose.
#### Example 1: `LinkedList.pop_front()`
I expect pop_front() to pass to the caller the ownership of the element being popped:
```chpl
var oldFront = ... the front listNode ...;
var ret = moveFrom(oldFront.data);
delete oldFront;
return ret;
```
However, it does not -- and cannot do that. Because `delete oldFront` deinitializes `oldFront.data` among other things - because it does not know that `oldFront.data` has been "consumed". So if I called `moveFrom(oldFront.data)`, the deinitializer would deinitialize invalid memory contents.
#### Example 2: `chpl__hashtable.clearSlot()`
I expect the `out val` formal to take over the ownership of `tableEntry.val`:
```chpl
proc chpl__hashtable.clearSlot(ref tableEntry, out key: keyType, out val: valType) {
moveInitialize(key, moveFrom(tableEntry.key));
moveInitialize(val, moveFrom(tableEntry.val));
....
}
```
However, it does not do that. If it did, the compiler would not know that `key` and `value` would be initialized and would default-initialize them prior to the moveInitialize() calls. So moveInitialize() would clobber objects without deinitializing them properly. Instead, `key` and `value` get initialized using `=`:
```chpl
proc chpl__hashtable.clearSlot(ref tableEntry, out key: keyType, out val: valType) {
key = moveFrom(tableEntry.key);
val = moveFrom(tableEntry.val);
....
}
```
The problem with that is that the compiler inserts calls to chpl__coerceMove(), passing in the results of `moveFrom` and using the returned values to initialize `key`/`value`. chpl__coerceMove() is a Really Big Function and is completely unnecessary here.
In summary, this issue calls for compiler awareness of initialization and deinitialization performed by `moveFrom` and `moveInitialize`. | non_infrastructure | movefrom moveinitialize the compiler needs to be aware these functions are also discussed in and the charter of the memory initialization module is to implement collections in a manner similar to those implemented by the chapel standard modules such as list or map i claim that at present the functions movefrom and moveinitialize do not achieve this purpose example linkedlist pop front i expect pop front to pass to the caller the ownership of the element being popped chpl var oldfront the front listnode var ret movefrom oldfront data delete oldfront return ret however it does not and cannot do that because delete oldfront deinitializes oldfront data among other things because it does not know that oldfront data has been consumed so if i called movefrom oldfront data the deinitializer would deinitialize invalid memory contents example chpl hashtable clearslot i expect the out val formal to take over the ownership of tableentry val chpl proc chpl hashtable clearslot ref tableentry out key keytype out val valtype moveinitialize key movefrom tableentry key moveinitialize val movefrom tableentry val however it does not do that if it did the compiler would not know that key and value would be initialized and would default initialize them prior to the moveinitialize calls so moveinitialize would clobber objects without deinitializing them properly instead key and value get initialized using chpl proc chpl hashtable clearslot ref tableentry out key keytype out val valtype key movefrom tableentry key val movefrom tableentry val the problem with that is that the compiler inserts calls to chpl coercemove passing in the results of movefrom and using the returned values to initialize key value chpl coercemove is a really big function and is completely unnecessary here in summary this issue calls for compiler awareness of 
initialization and deinitialization performed by movefrom and moveinitialize | 0 |
17,765 | 12,539,699,459 | IssuesEvent | 2020-06-05 09:03:59 | clarity-h2020/csis | https://api.github.com/repos/clarity-h2020/csis | closed | Create (and update) wiki page listing all services endpoints we are using | BB: Infrastructure enhancement | We need some place where we can easily list and find all service endpoints instead of having to review tons of emails/issues for finding the service url we need everytime.
| 1.0 | Create (and update) wiki page listing all services endpoints we are using - We need some place where we can easily list and find all service endpoints instead of having to review tons of emails/issues for finding the service url we need everytime.
| infrastructure | create and update wiki page listing all services endpoints we are using we need some place where we can easily list and find all service endpoints instead of having to review tons of emails issues for finding the service url we need everytime | 1 |
27,776 | 22,334,364,542 | IssuesEvent | 2022-06-14 17:06:22 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | reopened | Create IAM "devops" group with Resource-Scoped Administrator Permissions | operations team-platform-infrastructure | # Description
As a devops engineer, I will initially be given admin access to all the things. Becoming familiar with the resources we, the identity team, maintain, I will be able to create resource-scoped admin permissions policies to apply to an `identity-devops` IAM group using [permission boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html).
This "could" be the basis for a generic devops engineer IAM group, but since all embedded devops engineers will be managing different resources, this may not be possible/applicable and/or continually be a work in progress.
# Acceptance Criteria
- Default and/or Custom AMI policies
- an `identity-devops` IAM group created with the appropriate resource-scoped admin IAM policies (e.g. permission boundary)
- possibly a "default" devops IAM policy / group (e.g. again, using permission boundary)
[more info](https://vfs.atlassian.net/wiki/spaces/ECP/pages/2110029855/Grant+Resource-Scoped+Administrator+Permissions+to+Non-Infrastructure+Platform-Team-Attached+DevOps+Engineers) | 1.0 | Create IAM "devops" group with Resource-Scoped Administrator Permissions - # Description
As a devops engineer, I will initially be given admin access to all the things. Becoming familiar with the resources we, the identity team, maintain, I will be able to create resource-scoped admin permissions policies to apply to an `identity-devops` IAM group using [permission boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html).
This "could" be the basis for a generic devops engineer IAM group, but since all embedded devops engineers will be managing different resources, this may not be possible/applicable and/or continually be a work in progress.
# Acceptance Criteria
- Default and/or Custom AMI policies
- an `identity-devops` IAM group created with the appropriate resource-scoped admin IAM policies (e.g. permission boundary)
- possibly a "default" devops IAM policy / group (e.g. again, using permission boundary)
[more info](https://vfs.atlassian.net/wiki/spaces/ECP/pages/2110029855/Grant+Resource-Scoped+Administrator+Permissions+to+Non-Infrastructure+Platform-Team-Attached+DevOps+Engineers) | infrastructure | create iam devops group with resource scoped administrator permissions description as a devops engineer i will initially be given admin access to all the things becoming familiar with the resources we the identity team maintain i will be able to create resource scoped admin permissions policies to apply to an identity devops iam group using this could be the basis for a generic devops engineer iam group but since all embedded devops engineers will be managing different resources this may not be possible applicable and or continually be a work in progress acceptance criteria default and or custom ami policies an identity devops iam group created with the appropriate resource scoped admin iam policies e g permission boundary possibly a default devops iam policy group e g again using permission boundary | 1 |
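The IAM record above relies on permission boundaries to grant resource-scoped admin rights. As a rough illustration of what such a boundary document can look like (the tag key, tag value, and service list below are invented placeholders, not the team's actual policy), one could build and serialize it like this:

```python
import json


def identity_devops_boundary(project_tag: str) -> dict:
    """Build a permissions-boundary policy document (illustrative only).

    Allows broad actions, but only on resources carrying a matching
    project tag; everything outside the boundary is implicitly denied.
    The tag key/value and the action list are hypothetical examples.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ScopedAdmin",
                "Effect": "Allow",
                "Action": ["ec2:*", "s3:*", "rds:*"],  # example services only
                "Resource": "*",
                "Condition": {
                    "StringEquals": {"aws:ResourceTag/project": project_tag}
                },
            }
        ],
    }


# Build a boundary scoped to a hypothetical "identity" project tag.
policy = identity_devops_boundary("identity")
```

Attaching the result as a permissions boundary (rather than as a regular policy) caps what the `identity-devops` group members can do, even if their attached policies are broader.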
15,885 | 20,073,119,570 | IssuesEvent | 2022-02-04 09:38:37 | prisma/prisma | https://api.github.com/repos/prisma/prisma | opened | Error: [/Users/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/native-tls-0.2.8/src/imp/security_framework.rs:87:36] called `Option::unwrap()` on a `None` value | kind/bug process/candidate topic: error reporting team/migrations | <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma db pull`
Version: `3.8.1`
Binary Version: `34df67547cf5598f5a6cd3eb45f14ee70c3fb86f`
Report: https://prisma-errors.netlify.app/report/13658
OS: `arm64 darwin 21.1.0`
JS Stacktrace:
```
Error: [/Users/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/native-tls-0.2.8/src/imp/security_framework.rs:87:36] called `Option::unwrap()` on a `None` value
at ChildProcess.<anonymous> (/Users....npm/_npx/2778af9cee32ff87/node_modules/prisma/build/index.js:46398:30)
at ChildProcess.emit (node:events:390:28)
at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12)
```
Rust Stacktrace:
```
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new
2: user_facing_errors::Error::new_in_panic_hook
3: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
4: std::panicking::rust_panic_with_hook
5: std::panicking::begin_panic_handler::{{closure}}
6: std::sys_common::backtrace::__rust_end_short_backtrace
7: _rust_begin_unwind
8: core::panicking::panic_fmt
9: core::panicking::panic
10: native_tls::imp::Identity::from_pkcs12
11: native_tls::Identity::from_pkcs12
12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
13: introspection_engine::rpc::RpcImpl::load_connector::{{closure}}
14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
15: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
16: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
17: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
18: json_rpc_stdio::handle_stdin_next_line::{{closure}}
19: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
20: introspection_engine::main
21: std::sys_common::backtrace::__rust_begin_short_backtrace
22: std::rt::lang_start::{{closure}}
23: std::rt::lang_start_internal
24: std::rt::lang_start
25: _main
```
| 1.0 | Error: [/Users/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/native-tls-0.2.8/src/imp/security_framework.rs:87:36] called `Option::unwrap()` on a `None` value - <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma db pull`
Version: `3.8.1`
Binary Version: `34df67547cf5598f5a6cd3eb45f14ee70c3fb86f`
Report: https://prisma-errors.netlify.app/report/13658
OS: `arm64 darwin 21.1.0`
JS Stacktrace:
```
Error: [/Users/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/native-tls-0.2.8/src/imp/security_framework.rs:87:36] called `Option::unwrap()` on a `None` value
at ChildProcess.<anonymous> (/Users....npm/_npx/2778af9cee32ff87/node_modules/prisma/build/index.js:46398:30)
at ChildProcess.emit (node:events:390:28)
at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12)
```
Rust Stacktrace:
```
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new
2: user_facing_errors::Error::new_in_panic_hook
3: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
4: std::panicking::rust_panic_with_hook
5: std::panicking::begin_panic_handler::{{closure}}
6: std::sys_common::backtrace::__rust_end_short_backtrace
7: _rust_begin_unwind
8: core::panicking::panic_fmt
9: core::panicking::panic
10: native_tls::imp::Identity::from_pkcs12
11: native_tls::Identity::from_pkcs12
12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
13: introspection_engine::rpc::RpcImpl::load_connector::{{closure}}
14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
15: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
16: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
17: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
18: json_rpc_stdio::handle_stdin_next_line::{{closure}}
19: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
20: introspection_engine::main
21: std::sys_common::backtrace::__rust_begin_short_backtrace
22: std::rt::lang_start::{{closure}}
23: std::rt::lang_start_internal
24: std::rt::lang_start
25: _main
```
| non_infrastructure | error called option unwrap on a none value command prisma db pull version binary version report os darwin js stacktrace error called option unwrap on a none value at childprocess users npm npx node modules prisma build index js at childprocess emit node events at process childprocess handle onexit node internal child process rust stacktrace backtrace backtrace trace backtrace capture backtrace new user facing errors error new in panic hook user facing errors panic hook set panic hook closure std panicking rust panic with hook std panicking begin panic handler closure std sys common backtrace rust end short backtrace rust begin unwind core panicking panic fmt core panicking panic native tls imp identity from native tls identity from as core future future future poll introspection engine rpc rpcimpl load connector closure as core future future future poll as core future future future poll as core future future future poll as core future future future poll json rpc stdio handle stdin next line closure as core future future future poll introspection engine main std sys common backtrace rust begin short backtrace std rt lang start closure std rt lang start internal std rt lang start main | 0 |
24,405 | 17,199,964,481 | IssuesEvent | 2021-07-17 02:55:33 | MrDogeBro/quicknav | https://api.github.com/repos/MrDogeBro/quicknav | closed | Move More Towards Shortcut Names | status/pending type/enhancement type/infrastructure | ## Description
Using calls as the way to access an item in a command can be a bit cumbersome sometimes. Also, if you just use the calls, the name has no purpose except to be used for organization and all in the list command.
## Proposed Solution
Instead of using one of the calls when using commands such as add and remove, the commands would use the shortcut name to make everything a bit easier and reduce confusion. This would require a bit of infrastructure change but it could be very helpful and make the shortcut name have more of a purpose.
| 1.0 | Move More Towards Shortcut Names - ## Description
Using calls as the way to access an item in a command can be a bit cumbersome sometimes. Also, if you just use the calls, the name has no purpose except to be used for organization and all in the list command.
## Proposed Solution
Instead of using one of the calls when using commands such as add and remove, the commands would use the shortcut name to make everything a bit easier and reduce confusion. This would require a bit of infrastructure change but it could be very helpful and make the shortcut name have more of a purpose.
| infrastructure | move more towards shortcut names description using calls as the way to access an item in a command can be a bit cumbersome sometimes also if you just use the calls the name has no purpose except to be used for organization and all in the list command proposed solution instead of using one of the calls when using commands such as add and remove the commands would use the shortcut name to make everything a bit easier and reduce confusion this would require a bit of infrastructure change but it could be very helpful and make the shortcut name have more of a purpose | 1 |
663,292 | 22,172,186,449 | IssuesEvent | 2022-06-06 02:58:11 | ZapSquared/Quaver | https://api.github.com/repos/ZapSquared/Quaver | closed | Search command do not work properly when used in a voice channel's text channel | type:bug released on @next affects:functionality priority:p0 | **Describe the bug**
What isn't working as intended, and what does it affect?
Search command
Functionality
**Affected versions**
What versions are affected by this bug? (e.g. >=3.0.1, 2.5.1-2.6.3, >=1.2.0)
3.4.2-next.4
**Steps to reproduce**
Steps to reproduce the behavior. (e.g. click on a button, enter a value, etc. and see error)
1. Join any voice channel
2. Use the search command inside the voice channel's text channel.
3. Use the buttons to navigate and pick a track.
**Expected behavior**
What is expected to happen?
Queue the requested tracks.
**Actual behavior**
What actually happens? Attach or add errors or screenshots here as well.
Nothing happens, no errors in the console. In discord, the following is shown:
```text
This interaction failed
``` | 1.0 | Search command do not work properly when used in a voice channel's text channel - **Describe the bug**
What isn't working as intended, and what does it affect?
Search command
Functionality
**Affected versions**
What versions are affected by this bug? (e.g. >=3.0.1, 2.5.1-2.6.3, >=1.2.0)
3.4.2-next.4
**Steps to reproduce**
Steps to reproduce the behavior. (e.g. click on a button, enter a value, etc. and see error)
1. Join any voice channel
2. Use the search command inside the voice channel's text channel.
3. Use the buttons to navigate and pick a track.
**Expected behavior**
What is expected to happen?
Queue the requested tracks.
**Actual behavior**
What actually happens? Attach or add errors or screenshots here as well.
Nothing happens, no errors in the console. In discord, the following is shown:
```text
This interaction failed
``` | non_infrastructure | search command do not work properly when used in a voice channel s text channel describe the bug what isn t working as intended and what does it affect search command functionality affected versions what versions are affected by this bug e g next steps to reproduce steps to reproduce the behavior e g click on a button enter a value etc and see error join any voice channel use the search command inside the voice channel s text channel use the buttons to navigate and pick a track expected behavior what is expected to happen queue the requested tracks actual behavior what actually happens attach or add errors or screenshots here as well nothing happens no errors in the console in discord the following is shown text this interaction failed | 0 |
79,638 | 15,241,426,474 | IssuesEvent | 2021-02-19 08:25:18 | GIScience/ohsome-api | https://api.github.com/repos/GIScience/ohsome-api | opened | Add property 'contributionTypes' to 'properties' parameter | code quality enhancement | Currently, the `@contributionType` field gets added to the response features of the `/contributions/{geometryType}` endpoints if the `properties=metadata` parameter is given in the request. However, as it is not directly related to the other fields that get added via setting this parameter, it would be better to have it as a distinct value of the `properties` parameter, so that it gets added via setting `properties=contributionTypes` (or alternatively `contribution types`). This property should only be allowed for the `/contributions/{geometryType}` endpoint. | 1.0 | Add property 'contributionTypes' to 'properties' parameter - Currently, the `@contributionType` field gets added to the response features of the `/contributions/{geometryType}` endpoints if the `properties=metadata` parameter is given in the request. However, as it is not directly related to the other fields that get added via setting this parameter, it would be better to have it as a distinct value of the `properties` parameter, so that it gets added via setting `properties=contributionTypes` (or alternatively `contribution types`). This property should only be allowed for the `/contributions/{geometryType}` endpoint. 
| non_infrastructure | add property contributiontypes to properties parameter currently the contributiontype field gets added to the response features of the contributions geometrytype endpoints if the properties metadata parameter is given in the request however as it is not directly related to the other fields that get added via setting this parameter it would be better to have it as a distinct value of the properties parameter so that it gets added via setting properties contributiontypes or alternatively contribution types this property should only be allowed for the contributions geometrytype endpoint | 0 |
25,343 | 18,509,499,415 | IssuesEvent | 2021-10-19 23:47:01 | GMLC-TDC/HELICS | https://api.github.com/repos/GMLC-TDC/HELICS | closed | Build Universal 2 binaries for macOS releases | enhancement good first issue hacktoberfest infrastructure/ci | The macOS release binaries should include native support for arm64.
On an intel mac, I used this command for building Universal 2 binaries:
`cmake .. -DCMAKE_OSX_ARCHITECTURES="arm64;x86_64" -DHELICS_DISABLE_BOOST=ON -DHELICS_ZMQ_SUBPROJECT=ON -DHELICS_ZMQ_FORCE_SUBPROJECT=ON`
For the release build workflows, `HELICS_ZMQ_FORCE_SUBPROJECT` and `HELICS_DISABLE_BOOST` aren't needed.
Adding support for release builds should be mostly adding `-DCMAKE_OSX_ARCHITECTURES="arm64;x86_64"` to the cmake configure step in these two files:
- https://github.com/GMLC-TDC/HELICS/blob/main/.github/actions/release-build/installer-macOS.sh
- https://github.com/GMLC-TDC/HELICS/blob/main/.github/actions/release-build/shared-library-macOS.sh
After that it should be validating on a mac that the artifacts produced by the release workflow are universal binaries. | 1.0 | Build Universal 2 binaries for macOS releases - The macOS release binaries should include native support for arm64.
On an intel mac, I used this command for building Universal 2 binaries:
`cmake .. -DCMAKE_OSX_ARCHITECTURES="arm64;x86_64" -DHELICS_DISABLE_BOOST=ON -DHELICS_ZMQ_SUBPROJECT=ON -DHELICS_ZMQ_FORCE_SUBPROJECT=ON`
For the release build workflows, `HELICS_ZMQ_FORCE_SUBPROJECT` and `HELICS_DISABLE_BOOST` aren't needed.
Adding support for release builds should be mostly adding `-DCMAKE_OSX_ARCHITECTURES="arm64;x86_64"` to the cmake configure step in these two files:
- https://github.com/GMLC-TDC/HELICS/blob/main/.github/actions/release-build/installer-macOS.sh
- https://github.com/GMLC-TDC/HELICS/blob/main/.github/actions/release-build/shared-library-macOS.sh
After that it should be validating on a mac that the artifacts produced by the release workflow are universal binaries. | infrastructure | build universal binaries for macos releases the macos release binaries should include native support for on an intel mac i used this command for building universal binaries cmake dcmake osx architectures dhelics disable boost on dhelics zmq subproject on dhelics zmq force subproject on for the release build workflows helics zmq force subproject and helics disable boost aren t needed adding support for release builds should be mostly adding dcmake osx architectures to the cmake configure step in these two files after that it should be validating on a mac that the artifacts produced by the release workflow are universal binaries | 1 |
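The HELICS record above ends with validating that the release artifacts are universal binaries. On a Mac, `lipo -archs <file>` or `file <file>` is the authoritative check; as a hedged cross-platform sketch (my own heuristic, not part of the HELICS workflow), a script could at least confirm the Mach-O fat header magic. Caveat: Java class files share the `0xCAFEBABE` magic, so this only makes sense for files already known to be Mach-O:

```python
import struct

# Mach-O "fat" (universal) magic numbers, stored big-endian on disk.
FAT_MAGIC = 0xCAFEBABE
FAT_MAGIC_64 = 0xCAFEBABF


def looks_like_universal(header: bytes) -> bool:
    """Heuristic: does this file content start with a fat (universal) header?

    Only a quick sanity check for binaries already known to be Mach-O;
    `lipo -archs` on macOS remains the authoritative tool.
    """
    if len(header) < 4:
        return False
    (magic,) = struct.unpack(">I", header[:4])
    return magic in (FAT_MAGIC, FAT_MAGIC_64)
```

Usage would be reading the first bytes of each release artifact (e.g. `looks_like_universal(open(path, "rb").read(4))`) and flagging any thin binary.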
79,139 | 7,697,096,431 | IssuesEvent | 2018-05-18 17:31:27 | ValveSoftware/steam-for-linux | https://api.github.com/repos/ValveSoftware/steam-for-linux | closed | String Interpolation issue on error dialog. | Need Retest Steam client reviewed | I tried to install a game on my Linux laptop while a game was already playing on my Windows box. The Steam client gave me an error dialog with the title #Steam_OtherSessionPlaying_Title, and text of #Steam_OtherSessionPlaying_Text. Which are obviously variable names/template placeholders that did not get rendered out to their contents.
System Settings:
```
Processor Information:
Vendor: GenuineIntel
CPU Family: 0x6
CPU Model: 0x2a
CPU Stepping: 0x7
CPU Type: 0x0
Speed: 3200 Mhz
4 logical processors
2 physical processors
HyperThreading: Supported
FCMOV: Supported
SSE2: Supported
SSE3: Supported
SSSE3: Supported
SSE4a: Unsupported
SSE41: Supported
SSE42: Supported
Network Information:
Network Speed:
Operating System Version:
Linux (64 bit)
Kernel Name: Linux
Kernel Version: 3.17.4-1-ARCH
X Server Vendor: The X.Org Foundation
X Server Release: 11602000
X Window Manager: Fluxbox
Steam Runtime Version: steam-runtime-release_2013-10-23
Video Card:
Driver: Intel Open Source Technology Center Mesa DRI Intel(R) Sandybridge Mobile x86/MMX/SSE2
Driver Version: 3.0 Mesa 10.3.5
OpenGL Version: 3.0
Desktop Color Depth: 24 bits per pixel
Monitor Refresh Rate: 60 Hz
VendorID: 0x8086
DeviceID: 0x126
Number of Monitors: 1
Number of Logical Video Cards: 1
Primary Display Resolution: 1920 x 1080
Desktop Resolution: 1920 x 1080
Primary Display Size: 13.54" x 7.60" (15.51" diag)
34.4cm x 19.3cm (39.4cm diag)
Primary VRAM Not Detected
Sound card:
Audio device: Intel CougarPoint HDMI
Memory:
RAM: 5855 Mb
Miscellaneous:
UI Language: English
LANG: en_US.UTF-8
Microphone: Not set
Total Hard Disk Space Available: 144434 Mb
Largest Free Hard Disk Block: 84136 Mb
Installed software:
Recent Failure Reports:
```

| 1.0 | String Interpolation issue on error dialog. - I tried to install a game on my Linux laptop while a game was already playing on my Windows box. The Steam client gave me an error dialog with the title #Steam_OtherSessionPlaying_Title, and text of #Steam_OtherSessionPlaying_Text. Which are obviously variable names/template placeholders that did not get rendered out to their contents.
System Settings:
```
Processor Information:
Vendor: GenuineIntel
CPU Family: 0x6
CPU Model: 0x2a
CPU Stepping: 0x7
CPU Type: 0x0
Speed: 3200 Mhz
4 logical processors
2 physical processors
HyperThreading: Supported
FCMOV: Supported
SSE2: Supported
SSE3: Supported
SSSE3: Supported
SSE4a: Unsupported
SSE41: Supported
SSE42: Supported
Network Information:
Network Speed:
Operating System Version:
Linux (64 bit)
Kernel Name: Linux
Kernel Version: 3.17.4-1-ARCH
X Server Vendor: The X.Org Foundation
X Server Release: 11602000
X Window Manager: Fluxbox
Steam Runtime Version: steam-runtime-release_2013-10-23
Video Card:
Driver: Intel Open Source Technology Center Mesa DRI Intel(R) Sandybridge Mobile x86/MMX/SSE2
Driver Version: 3.0 Mesa 10.3.5
OpenGL Version: 3.0
Desktop Color Depth: 24 bits per pixel
Monitor Refresh Rate: 60 Hz
VendorID: 0x8086
DeviceID: 0x126
Number of Monitors: 1
Number of Logical Video Cards: 1
Primary Display Resolution: 1920 x 1080
Desktop Resolution: 1920 x 1080
Primary Display Size: 13.54" x 7.60" (15.51" diag)
34.4cm x 19.3cm (39.4cm diag)
Primary VRAM Not Detected
Sound card:
Audio device: Intel CougarPoint HDMI
Memory:
RAM: 5855 Mb
Miscellaneous:
UI Language: English
LANG: en_US.UTF-8
Microphone: Not set
Total Hard Disk Space Available: 144434 Mb
Largest Free Hard Disk Block: 84136 Mb
Installed software:
Recent Failure Reports:
```

| non_infrastructure | string interpolation issue on error dialog i tried to install a game on my linux laptop while a game was already playing on my windows box the steam client gave me an error dialog with the title steam othersessionplaying title and text of steam othersessionplaying text which are obviously variable names template placeholders that did not get rendered out to their contents system settings processor information vendor genuineintel cpu family cpu model cpu stepping cpu type speed mhz logical processors physical processors hyperthreading supported fcmov supported supported supported supported unsupported supported supported network information network speed operating system version linux bit kernel name linux kernel version arch x server vendor the x org foundation x server release x window manager fluxbox steam runtime version steam runtime release video card driver intel open source technology center mesa dri intel r sandybridge mobile mmx driver version mesa opengl version desktop color depth bits per pixel monitor refresh rate hz vendorid deviceid number of monitors number of logical video cards primary display resolution x desktop resolution x primary display size x diag x diag primary vram not detected sound card audio device intel cougarpoint hdmi memory ram mb miscellaneous ui language english lang en us utf microphone not set total hard disk space available mb largest free hard disk block mb installed software recent failure reports | 0 |
35,483 | 31,710,943,190 | IssuesEvent | 2023-09-09 09:01:13 | surge-synthesizer/surge-rack | https://api.github.com/repos/surge-synthesizer/surge-rack | closed | Delay module insertion causes Rack 2.4.1 to crash | Infrastructure Resolved Awaiting User Confirm | Rack 2.4.1 crashes when loading delay module from library browser: (Win 11)
1. Open library
2. Select delay module
3. Par-tay
100% repeatable in existing patch, not sure if it occurs in blank patch.
VCV Log attached:
[log.txt](https://github.com/surge-synthesizer/surge-rack/files/12555926/log.txt)
| 1.0 | Delay module insertion causes Rack 2.4.1 to crash - Rack 2.4.1 crashes when loading delay module from library browser: (Win 11)
1. Open library
2. Select delay module
3. Par-tay
100% repeatable in existing patch, not sure if it occurs in blank patch.
VCV Log attached:
[log.txt](https://github.com/surge-synthesizer/surge-rack/files/12555926/log.txt)
| infrastructure | delay module insertion causes rack to crash rack crashes when loading delay module from library browser win open library select delay module par tay repeatable in existing patch not sure if it occurs in blank patch vcv log attached | 1
29,944 | 24,419,434,885 | IssuesEvent | 2022-10-05 18:54:36 | jmcgeheeiv/pyfakefs | https://api.github.com/repos/jmcgeheeiv/pyfakefs | reopened | Move to pytest-dev organization | infrastructure | As discussed with @jmcgeheeiv, it would be a good idea to transfer this repo to the pytest-dev organization. This would help to ensure continued maintenance, and also possibly to get more eyes for issues and code reviews (given enough eyeballs...).
While pyfakefs is not only a pytest plugin, as it can also be used with other test frameworks, pytest-dev seems like a good match, as the majority of new pyfakefs users seem to use it via the `fs` fixture, and improving pytest support is a sensible goal for pyfakefs.
We seem to meet most of the [preconditions](https://github.com/pytest-dev/pytest/blob/master/CONTRIBUTING.rst#submitting-plugins-to-pytest-dev), except the name (nothing we can do about this, changing it now would not be a good idea), and the authors info, which can easily be added.
The first step will be to clarify if pyfakefs is feasible for pytest-dev - I will write a respective mail in the pytest-dev list, and we can go on from there if we get a go.
cc @nicoddemus | 1.0 | Move to pytest-dev organization - As discussed with @jmcgeheeiv, it would be a good idea to transfer this repo to the pytest-dev organization. This would help to ensure continued maintenance, and also possibly to get more eyes for issues and code reviews (given enough eyeballs...).
While pyfakefs is not only a pytest plugin, as it can also be used with other test frameworks, pytest-dev seems like a good match, as the majority of new pyfakefs users seem to use it via the `fs` fixture, and improving pytest support is a sensible goal for pyfakefs.
We seem to meet most of the [preconditions](https://github.com/pytest-dev/pytest/blob/master/CONTRIBUTING.rst#submitting-plugins-to-pytest-dev), except the name (nothing we can do about this, changing it now would not be a good idea), and the authors info, which can easily be added.
The first step will be to clarify if pyfakefs is feasible for pytest-dev - I will write a respective mail in the pytest-dev list, and we can go on from there if we get a go.
cc @nicoddemus | infrastructure | move to pytest dev organization as discussed with jmcgeheeiv it would be a good idea to transfer this repo to the pytest dev organization this would help to ensure continued maintenance and also possibly to get more eyes for issues and code reviews given enough eyeballs while pyfakefs is not only a pytest plugin as it can also be used with other test frameworks pytest dev seems like a good match as the majority of new pyfakefs users seem to use it via the fs fixture and improving pytest support is a sensible goal for pyfakefs we seem to meet most of the except the name nothing we can do about this changing it now would not be a good idea and the authors info which can easily be added the first step will be to clarify if pyfakefs is feasible for pytest dev i will write a respective mail in the pytest dev list and we can go on from there if we get a go cc nicoddemus | 1
5,047 | 5,400,931,309 | IssuesEvent | 2017-02-27 23:23:55 | vmware/docker-volume-vsphere | https://api.github.com/repos/vmware/docker-volume-vsphere | closed | Achieve CI stability while running builds against vmware/master | component/test-infrastructure | Please keep posting CI-infra related problem to this issue.
1. Need to add retry mechanism for deploy-esx to avoid known vsphere issue (as shown below).
https://ci.vmware.run/vmware/docker-volume-vsphere/1346
```
Errors:
[InstallationError]
There was an error checking file system on altbootbank, please see log for detail.
Please refer to the log file for more details.
=> deployESXInstall: Installation hit an error on root@192.168.31.62 Thu Feb 23 00:41:15 UTC 2017
make[1]: *** [deploy-esx] Error 2
make: *** [deploy-esx] Error 2
```
//CC @pdhamdhere | 1.0 | Achieve CI stability while running builds against vmware/master - Please keep posting CI-infra related problem to this issue.
1. Need to add retry mechanism for deploy-esx to avoid known vsphere issue (as shown below).
https://ci.vmware.run/vmware/docker-volume-vsphere/1346
```
Errors:
[InstallationError]
There was an error checking file system on altbootbank, please see log for detail.
Please refer to the log file for more details.
=> deployESXInstall: Installation hit an error on root@192.168.31.62 Thu Feb 23 00:41:15 UTC 2017
make[1]: *** [deploy-esx] Error 2
make: *** [deploy-esx] Error 2
```
//CC @pdhamdhere | infrastructure | achieve ci stability while running builds against vmware master please keep posting ci infra related problem to this issue need to add retry mechanism for deploy esx to avoid known vsphere issue as shown below errors there was an error checking file system on altbootbank please see log for detail please refer to the log file for more details deployesxinstall installation hit an error on root thu feb utc make error make error cc pdhamdhere | 1 |
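The first request in the record above, a retry mechanism around the flaky deploy-esx step, usually reduces to a generic retry-with-backoff wrapper. This is an illustrative helper only; the attempt counts, delays, and exception types are assumptions, not the project's actual CI code:

```python
import time


def retry(fn, attempts=3, delay=5.0, backoff=2.0, exceptions=(RuntimeError,)):
    """Call fn(), retrying on the given exceptions with exponential backoff.

    Placeholder for wrapping a flaky step such as a deploy target; the
    defaults here are arbitrary examples.
    """
    last = None
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except exceptions as exc:
            last = exc
            if attempt < attempts:
                time.sleep(delay)   # wait before the next attempt
                delay *= backoff    # grow the wait exponentially
    raise last  # all attempts failed; surface the last error
```

In a Makefile-driven CI like the one described, the equivalent shell-level idea is a small loop around `make deploy-esx` that retries a bounded number of times before failing the build.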
442,264 | 30,825,889,605 | IssuesEvent | 2023-08-01 20:01:58 | MetPX/sarracenia | https://api.github.com/repos/MetPX/sarracenia | closed | new example with plugin and moth for downloading without files. | enhancement NewUseCase v3only Documentation |
It has come up in a few discussions, so I think it is worth providing some examples of handling a message, and downloading the resource in python itself, without going through the actual file download. Seems to be a common use case. Would want an example using:
* after_accept plugin.
* extending the moth example.
* jupyter version of both approaches.
| 1.0 | new example with plugin and moth for downloading without files. -
It has come up in a few discussions, so I think it is worth providing some examples of handling a message, and downloading the resource in python itself, without going through the actual file download. Seems to be a common use case. Would want an example using:
* after_accept plugin.
* extending the moth example.
* jupyter version of both approaches.
| non_infrastructure | new example with plugin and moth for downloading without files it has come up in a few discussions so i think it is worth providing some examples of handling a message and downloading the resource in python itself without going through the actual file download seems to be a common use case would want an example using after accept plugin extending the moth example jupyter version of both approaches | 0
33,258 | 27,333,837,448 | IssuesEvent | 2023-02-26 00:19:39 | waldo-vision/waldo | https://api.github.com/repos/waldo-vision/waldo | closed | Create Cheating Label option in video submission page | frontend backend infrastructure | Add option to label a submitted clip as 'Aimbot' or 'No Cheats' when a Trusted User (or higher privileged role) is submitting a clip.
This option should not be visible to standard users.
I'm imagining a dropdown list item where a user can select from the options to allow us to expand the types of cheats that can be labelled in the future. | 1.0 | Create Cheating Label option in video submission page - Add option to label a submitted clip as 'Aimbot' or 'No Cheats' when a Trusted User (or higher privileged role) is submitting a clip.
This option should not be visible to standard users.
I'm imagining a dropdown list item where a user can select from the options to allow us to expand the types of cheats that can be labelled in the future. | infrastructure | create cheating label option in video submission page add option to label a submitted clip as aimbot or no cheats when a trusted user or higher privileged role is submitting a clip this option should not be visible to standard users i m imagining a dropdown list item where a user can select from the options to allow us to expand the types of cheats that can be labelled in the future | 1 |
34,578 | 7,457,445,282 | IssuesEvent | 2018-03-30 04:28:28 | kerdokullamae/test_koik_issued | https://api.github.com/repos/kerdokullamae/test_koik_issued | closed | Amendment-proposal list and contents in the KÜ detail view - review | C: AVAR P: highest R: fixed T: defect | **Reported by sven syld on 2 Apr 2014 08:23 UTC**
From #990
1) also add tooltips to the icons, "Confirm proposal" and "Reject proposal"; the meanings of the icons may not be entirely clear
2) the proposal should not be a link, a plain span will do
3) move this to Ajax, so that Ajax confirm/reject also works on the admin side. When reloading the list via Ajax, take p5 into account
4) if there are no amendment proposals, the box could be hidden
 | 1.0 | Amendment-proposal list and contents in the KÜ detail view - review - **Reported by sven syld on 2 Apr 2014 08:23 UTC**
From #990
1) also add tooltips to the icons, "Confirm proposal" and "Reject proposal"; the meanings of the icons may not be entirely clear
2) the proposal should not be a link, a plain span will do
3) move this to Ajax, so that Ajax confirm/reject also works on the admin side. When reloading the list via Ajax, take p5 into account
4) if there are no amendment proposals, the box could be hidden
 | non_infrastructure | amendment proposal list and contents in the kü detail view review reported by sven syld on apr utc from also add tooltips to the icons confirm proposal and reject proposal the meanings of the icons may not be entirely clear the proposal should not be a link a plain span will do move this to ajax so that ajax confirm reject also works on the admin side when reloading the list via ajax take into account if there are no amendment proposals the box could be hidden | 0
31,239 | 25,473,085,503 | IssuesEvent | 2022-11-25 11:59:15 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | System.Runtime.CompilerServices.Unsafe does not obey .NET Standard 2.0 (it should support .NET Core 2.1 App) | area-Infrastructure-libraries | ### Description
A picture is worth a thousand words.

### Reproduction Steps
1. Create a Console project with **.NET Core 2.1**.
2. Install System.Net.Http.Json 6.0.0 package
3. Build project
### Expected behavior
Build successfully.
### Actual behavior
It shows:
```txt
Severity Code Description Project File Line Suppression State
Error System.Runtime.CompilerServices.Unsafe doesn't support netcoreapp2.1. Consider updating your TargetFramework to netcoreapp3.1 or later. ConsoleApp2 C:\Users\user\.nuget\packages\system.runtime.compilerservices.unsafe\6.0.0\buildTransitive\netcoreapp2.0\System.Runtime.CompilerServices.Unsafe.targets 4
```
### Regression?
I'm not sure.
### Known Workarounds
Downgrade to System.Net.Http.Json 5.0.0
### Configuration
N/A
### Other information
N/A | 1.0 | System.Runtime.CompilerServices.Unsafe does not obey .NET Standard 2.0 (it should support .NET Core 2.1 App) - ### Description
A picture is worth a thousand words.

### Reproduction Steps
1. Create a Console project with **.NET Core 2.1**.
2. Install System.Net.Http.Json 6.0.0 package
3. Build project
### Expected behavior
Build successfully.
### Actual behavior
It shows:
```txt
Severity Code Description Project File Line Suppression State
Error System.Runtime.CompilerServices.Unsafe doesn't support netcoreapp2.1. Consider updating your TargetFramework to netcoreapp3.1 or later. ConsoleApp2 C:\Users\user\.nuget\packages\system.runtime.compilerservices.unsafe\6.0.0\buildTransitive\netcoreapp2.0\System.Runtime.CompilerServices.Unsafe.targets 4
```
### Regression?
I'm not sure.
### Known Workarounds
Downgrade to System.Net.Http.Json 5.0.0
### Configuration
N/A
### Other information
N/A | infrastructure | system runtime compilerservices unsafe is not obey net standard it should support net core app description a picture is worth a thousand words reproduction steps create a console project with net core install system net http json package build project expected behavior build successfully actual behavior it shows txt severity code description project file line suppression state error system runtime compilerservices unsafe doesn t support consider updating your targetframework to or later c users user nuget packages system runtime compilerservices unsafe buildtransitive system runtime compilerservices unsafe targets regression i m not sure known workarounds downgrade to system net http json configuration n a other information n a | 1 |
297,818 | 9,182,101,998 | IssuesEvent | 2019-03-05 11:56:33 | aiidateam/aiida_core | https://api.github.com/repos/aiidateam/aiida_core | closed | `LinkManager.get_node_by_label` should raise if result is not uniquely defined | priority/critical-blocking topic/orm type/bug | Currently it simply returns the first results if there are multiple hits, which can be dangerous for users who do not realize it. | 1.0 | `LinkManager.get_node_by_label` should raise if result is not uniquely defined - Currently it simply returns the first results if there are multiple hits, which can be dangerous for users who do not realize it. | non_infrastructure | linkmanager get node by label should raise if result is not uniquely defined currently it simply returns the first results if there are multiple hits which can be dangerous for users who do not realize it | 0 |
14,181 | 10,688,682,377 | IssuesEvent | 2019-10-22 18:45:14 | celo-org/celo-monorepo | https://api.github.com/repos/celo-org/celo-monorepo | opened | Celotool testnet deploy should default to the latest master commit of celo-blockchain | infrastructure | ### Expected Behavior
Deploying a fresh testnet without a specific env files defaults to the latest master commit
### Current Behavior
Uses potentially old commit in `.env` file | 1.0 | Celotool testnet deploy should default to the latest master commit of celo-blockchain - ### Expected Behavior
Deploying a fresh testnet without a specific env files defaults to the latest master commit
### Current Behavior
Uses potentially old commit in `.env` file | infrastructure | celotool testnet deploy should default to the latest master commit of celo blockchain expected behavior deploying a fresh testnet without a specific env files defaults to the latest master commit current behavior uses potentially old commit in env file | 1 |
595,442 | 18,067,012,381 | IssuesEvent | 2021-09-20 20:29:04 | arfc/pygenesys | https://api.github.com/repos/arfc/pygenesys | opened | How do I contribute to pygenesys? | Comp:Core Difficulty:1-Beginner Priority:3-Desired Status:1-New Type:Docs | What contributions are best? How would the dev team like me to communicate with them? Do you have formatting standards?
This issue can be closed when
- [ ] there is a CONTRIBUTING document in the repo somewhere
- [ ] there is documentation on contributing pathways for new contributors | 1.0 | How do I contribute to pygenesys? - What contributions are best? How would the dev team like me to communicate with them? Do you have formatting standards?
This issue can be closed when
- [ ] there is a CONTRIBUTING document in the repo somewhere
- [ ] there is documentation on contributing pathways for new contributors | non_infrastructure | how do i contribute to pygenesis what contributions are best how would the dev team like me to communicate with them do you have formatting standards this issue can be closed when there is a contributing document in the repo somewhere there is documentation on contributing pathways for new contributors | 0 |
634,504 | 20,363,589,116 | IssuesEvent | 2022-02-21 01:03:47 | metabase/metabase | https://api.github.com/repos/metabase/metabase | opened | Serialization dumps and loads questions with invalid references | Type:Bug Priority:P3 Operation/Serialization | **Describe the bug**
If you save a question in a personal collection and then use that question as the base for another one which is included in a shared collection, then serialization will error with
```
Unresolved references found for cards in collection null; will reload after first pass
"People, Count, Grouped by Source" (inserted as ID 1) missing:
at :dataset_query/:query/:source-table -> /collections/root/collections/a b%27s Personal Collection/cards/People
```
as personal collections aren't dumped.
It will still create the questions in the end, but they will be empty, containing only the comment "-- DUMMY QUERY FOR SERIALIZATION FIRST PASS INSERT".
**Logs**
<details>
2022-02-21 00:50:32,523 WARN serialization.names :: #error {
:cause Value does not match schema: {:card (not (integer? nil))}
:data {:type :schema.core/error, :value {:collection 1, :card nil}, :error {:card (not (integer? nil))}}
:via
[{:type clojure.lang.ExceptionInfo
:message Can't resolve collection, card in fully qualified name /collections/root/collections/a b%27s Personal Collection/cards/People
:data {:fully-qualified-name /collections/root/collections/a b%27s Personal Collection/cards/People, :resolve-name-failed? true, :context {:collection 1, :card nil}}
:at [metabase_enterprise.serialization.names$fully_qualified_name__GT_context invokeStatic names.clj 325]}
{:type clojure.lang.ExceptionInfo
:message Value does not match schema: {:card (not (integer? nil))}
:data {:type :schema.core/error, :value {:collection 1, :card nil}, :error {:card (not (integer? nil))}}
:at [metabase.util.schema$schema_core_validator$fn__18071 invoke schema.clj 29]}]
:trace
[[metabase.util.schema$schema_core_validator$fn__18071 invoke schema.clj 29]
[schema.core$validate invokeStatic core.clj 164]
[schema.core$validate invoke core.clj 159]
[metabase_enterprise.serialization.names$fully_qualified_name__GT_context invokeStatic names.clj 321]
[metabase_enterprise.serialization.names$fully_qualified_name__GT_context invoke names.clj 299]
[metabase_enterprise.serialization.load$source_table invokeStatic load.clj 67]
[metabase_enterprise.serialization.load$source_table invoke load.clj 63]
[clojure.lang.AFn applyToHelper AFn.java 154]
[clojure.lang.AFn applyTo AFn.java 144]
[clojure.core$apply invokeStatic core.clj 669]
[clojure.core$apply invoke core.clj 662]
[medley.core$update_existing_in$up__5519 invoke core.cljc 73]
[medley.core$update_existing_in invokeStatic core.cljc 75]
[medley.core$update_existing_in doInvoke core.cljc 63]
[clojure.lang.RestFn invoke RestFn.java 445]
[metabase_enterprise.serialization.load$update_capture_missing_STAR_ invokeStatic load.clj 81]
[metabase_enterprise.serialization.load$update_capture_missing_STAR_ invoke load.clj 78]
[metabase_enterprise.serialization.load$update_existing_capture_missing invokeStatic load.clj 97]
[metabase_enterprise.serialization.load$update_existing_capture_missing invoke load.clj 95]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450$fn__82451$fn__82452$fn__82455 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450$fn__82451$fn__82452 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450$fn__82451 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437 invoke load.clj 152]
[metabase.mbql.util.match.impl$replace_in_collection$iter__27406__27410$fn__27411 invoke impl.cljc 44]
[clojure.lang.LazySeq sval LazySeq.java 42]
[clojure.lang.LazySeq seq LazySeq.java 51]
[clojure.lang.Cons next Cons.java 39]
[clojure.lang.RT next RT.java 713]
[clojure.core$next__5403 invokeStatic core.clj 64]
[clojure.core.protocols$fn__8181 invokeStatic protocols.clj 169]
[clojure.core.protocols$fn__8181 invoke protocols.clj 124]
[clojure.core.protocols$fn__8136$G__8131__8145 invoke protocols.clj 19]
[clojure.core.protocols$seq_reduce invokeStatic protocols.clj 31]
[clojure.core.protocols$fn__8168 invokeStatic protocols.clj 75]
[clojure.core.protocols$fn__8168 invoke protocols.clj 75]
[clojure.core.protocols$fn__8110$G__8105__8123 invoke protocols.clj 13]
[clojure.core$reduce invokeStatic core.clj 6830]
[clojure.core$into invokeStatic core.clj 6897]
[clojure.core$into invoke core.clj 6889]
[metabase.mbql.util.match.impl$replace_in_collection invokeStatic impl.cljc 43]
[metabase.mbql.util.match.impl$replace_in_collection invoke impl.cljc 38]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450$fn__82451$fn__82452$fn__82455 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450$fn__82451$fn__82452 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450$fn__82451 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_ invokeStatic load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_ invoke load.clj 150]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids invokeStatic load.clj 180]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids invoke load.clj 178]
[clojure.core$update invokeStatic core.clj 6185]
[clojure.core$update invoke core.clj 6177]
[metabase_enterprise.serialization.load$resolve_card invokeStatic load.clj 598]
[metabase_enterprise.serialization.load$resolve_card invoke load.clj 594]
[metabase_enterprise.serialization.load$load_cards$iter__82875__82879$fn__82880$fn__82881 invoke load.clj 632]
[metabase_enterprise.serialization.load$load_cards$iter__82875__82879$fn__82880 invoke load.clj 631]
[clojure.lang.LazySeq sval LazySeq.java 42]
[clojure.lang.LazySeq seq LazySeq.java 51]
[clojure.lang.RT seq RT.java 535]
[clojure.lang.LazilyPersistentVector create LazilyPersistentVector.java 44]
[clojure.core$vec invokeStatic core.clj 379]
[clojure.core$vec invoke core.clj 369]
[metabase_enterprise.serialization.load$load_cards invokeStatic load.clj 641]
[metabase_enterprise.serialization.load$load_cards invoke load.clj 624]
[metabase_enterprise.serialization.load$load_cards$fn__82902 invoke load.clj 660]
[metabase_enterprise.serialization.load$make_reload_fn$fn__82931$iter__82932__82936$fn__82937$fn__82938 invoke load.clj 715]
[metabase_enterprise.serialization.load$make_reload_fn$fn__82931$iter__82932__82936$fn__82937 invoke load.clj 714]
[clojure.lang.LazySeq sval LazySeq.java 42]
[clojure.lang.LazySeq seq LazySeq.java 51]
[clojure.lang.RT seq RT.java 535]
[clojure.core$seq__5419 invokeStatic core.clj 139]
[clojure.core$filter$fn__5911 invoke core.clj 2813]
[clojure.lang.LazySeq sval LazySeq.java 42]
[clojure.lang.LazySeq seq LazySeq.java 51]
[clojure.lang.RT seq RT.java 535]
[clojure.core$seq__5419 invokeStatic core.clj 139]
[clojure.core$seq__5419 invoke core.clj 139]
[metabase_enterprise.serialization.load$make_reload_fn invokeStatic load.clj 711]
[metabase_enterprise.serialization.load$make_reload_fn invoke load.clj 709]
[metabase_enterprise.serialization.load$make_reload_fn$fn__82931 invoke load.clj 714]
[metabase_enterprise.serialization.cmd$fn__86713$load__86718$fn__86719 invoke cmd.clj 61]
[metabase_enterprise.serialization.cmd$fn__86713$load__86718 invoke cmd.clj 39]
[clojure.lang.Var invoke Var.java 388]
[metabase.cmd$load invokeStatic cmd.clj 149]
[metabase.cmd$load doInvoke cmd.clj 140]
[clojure.lang.RestFn invoke RestFn.java 423]
[metabase.cmd$load invokeStatic cmd.clj 144]
[metabase.cmd$load invoke cmd.clj 140]
[clojure.lang.AFn applyToHelper AFn.java 154]
[clojure.lang.RestFn applyTo RestFn.java 132]
[clojure.core$apply invokeStatic core.clj 667]
[clojure.core$apply invoke core.clj 662]
[metabase.cmd$run_cmd$fn__83228 invoke cmd.clj 190]
[metabase.cmd$run_cmd invokeStatic cmd.clj 190]
[metabase.cmd$run_cmd invoke cmd.clj 186]
[clojure.lang.Var invoke Var.java 388]
[metabase.core$run_cmd invokeStatic core.clj 141]
[metabase.core$run_cmd invoke core.clj 139]
[metabase.core$_main invokeStatic core.clj 163]
[metabase.core$_main doInvoke core.clj 158]
[clojure.lang.RestFn applyTo RestFn.java 137]
[metabase.core main nil -1]]}
2022-02-21 00:50:32,530 INFO serialization.upsert :: Updating Card "People, Count, Grouped by Source" (ID 1)
2022-02-21 00:50:32,537 INFO serialization.load :: Unresolved references found for cards in collection null; will reload after first pass
"People, Count, Grouped by Source" (inserted as ID 1) missing:
at :dataset_query/:query/:source-table -> /collections/root/collections/a b%27s Personal Collection/cards/People
</details>
**To Reproduce**
1. Have a source and destination Metabase instances
2. Create a question in the source and save it on the personal collection then create a question based on the first question but save it on a shared collection
3. dump on the source and load on destination, see the error
**Expected behavior**
At least capture the error nicely in the logs and maybe also change the comment in the code so the users see what happened. We could also add the behavior of not updating objects that don't have a valid reference.
**Screenshots**
NA
**Information about your Metabase Installation:**
- Your browser and the version: Brave latest
- Your operating system: Ubuntu 21.10
- Your databases: H2
- Metabase version: 1.41.2
- Metabase hosting environment: Docker
- Metabase internal database: H1
**Severity**
Non severe
**Additional context**
NA | 1.0 | Serialization dumps and loads questions with invalid references - **Describe the bug**
If you save a question in a personal collection and then use that question as the base for another one which is included in a shared collection, then serialization will error with
```
Unresolved references found for cards in collection null; will reload after first pass
"People, Count, Grouped by Source" (inserted as ID 1) missing:
at :dataset_query/:query/:source-table -> /collections/root/collections/a b%27s Personal Collection/cards/People
```
as personal collections aren't dumped.
It will still create the questions in the end, but they will be empty, containing only the comment "-- DUMMY QUERY FOR SERIALIZATION FIRST PASS INSERT".
**Logs**
<details>
2022-02-21 00:50:32,523 WARN serialization.names :: #error {
:cause Value does not match schema: {:card (not (integer? nil))}
:data {:type :schema.core/error, :value {:collection 1, :card nil}, :error {:card (not (integer? nil))}}
:via
[{:type clojure.lang.ExceptionInfo
:message Can't resolve collection, card in fully qualified name /collections/root/collections/a b%27s Personal Collection/cards/People
:data {:fully-qualified-name /collections/root/collections/a b%27s Personal Collection/cards/People, :resolve-name-failed? true, :context {:collection 1, :card nil}}
:at [metabase_enterprise.serialization.names$fully_qualified_name__GT_context invokeStatic names.clj 325]}
{:type clojure.lang.ExceptionInfo
:message Value does not match schema: {:card (not (integer? nil))}
:data {:type :schema.core/error, :value {:collection 1, :card nil}, :error {:card (not (integer? nil))}}
:at [metabase.util.schema$schema_core_validator$fn__18071 invoke schema.clj 29]}]
:trace
[[metabase.util.schema$schema_core_validator$fn__18071 invoke schema.clj 29]
[schema.core$validate invokeStatic core.clj 164]
[schema.core$validate invoke core.clj 159]
[metabase_enterprise.serialization.names$fully_qualified_name__GT_context invokeStatic names.clj 321]
[metabase_enterprise.serialization.names$fully_qualified_name__GT_context invoke names.clj 299]
[metabase_enterprise.serialization.load$source_table invokeStatic load.clj 67]
[metabase_enterprise.serialization.load$source_table invoke load.clj 63]
[clojure.lang.AFn applyToHelper AFn.java 154]
[clojure.lang.AFn applyTo AFn.java 144]
[clojure.core$apply invokeStatic core.clj 669]
[clojure.core$apply invoke core.clj 662]
[medley.core$update_existing_in$up__5519 invoke core.cljc 73]
[medley.core$update_existing_in invokeStatic core.cljc 75]
[medley.core$update_existing_in doInvoke core.cljc 63]
[clojure.lang.RestFn invoke RestFn.java 445]
[metabase_enterprise.serialization.load$update_capture_missing_STAR_ invokeStatic load.clj 81]
[metabase_enterprise.serialization.load$update_capture_missing_STAR_ invoke load.clj 78]
[metabase_enterprise.serialization.load$update_existing_capture_missing invokeStatic load.clj 97]
[metabase_enterprise.serialization.load$update_existing_capture_missing invoke load.clj 95]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450$fn__82451$fn__82452$fn__82455 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450$fn__82451$fn__82452 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450$fn__82451 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437 invoke load.clj 152]
[metabase.mbql.util.match.impl$replace_in_collection$iter__27406__27410$fn__27411 invoke impl.cljc 44]
[clojure.lang.LazySeq sval LazySeq.java 42]
[clojure.lang.LazySeq seq LazySeq.java 51]
[clojure.lang.Cons next Cons.java 39]
[clojure.lang.RT next RT.java 713]
[clojure.core$next__5403 invokeStatic core.clj 64]
[clojure.core.protocols$fn__8181 invokeStatic protocols.clj 169]
[clojure.core.protocols$fn__8181 invoke protocols.clj 124]
[clojure.core.protocols$fn__8136$G__8131__8145 invoke protocols.clj 19]
[clojure.core.protocols$seq_reduce invokeStatic protocols.clj 31]
[clojure.core.protocols$fn__8168 invokeStatic protocols.clj 75]
[clojure.core.protocols$fn__8168 invoke protocols.clj 75]
[clojure.core.protocols$fn__8110$G__8105__8123 invoke protocols.clj 13]
[clojure.core$reduce invokeStatic core.clj 6830]
[clojure.core$into invokeStatic core.clj 6897]
[clojure.core$into invoke core.clj 6889]
[metabase.mbql.util.match.impl$replace_in_collection invokeStatic impl.cljc 43]
[metabase.mbql.util.match.impl$replace_in_collection invoke impl.cljc 38]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450$fn__82451$fn__82452$fn__82455 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450$fn__82451$fn__82452 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450$fn__82451 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437$fn__82450 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_$replace_82436__82437 invoke load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_ invokeStatic load.clj 152]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids_STAR_ invoke load.clj 150]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids invokeStatic load.clj 180]
[metabase_enterprise.serialization.load$mbql_fully_qualified_names__GT_ids invoke load.clj 178]
[clojure.core$update invokeStatic core.clj 6185]
[clojure.core$update invoke core.clj 6177]
[metabase_enterprise.serialization.load$resolve_card invokeStatic load.clj 598]
[metabase_enterprise.serialization.load$resolve_card invoke load.clj 594]
[metabase_enterprise.serialization.load$load_cards$iter__82875__82879$fn__82880$fn__82881 invoke load.clj 632]
[metabase_enterprise.serialization.load$load_cards$iter__82875__82879$fn__82880 invoke load.clj 631]
[clojure.lang.LazySeq sval LazySeq.java 42]
[clojure.lang.LazySeq seq LazySeq.java 51]
[clojure.lang.RT seq RT.java 535]
[clojure.lang.LazilyPersistentVector create LazilyPersistentVector.java 44]
[clojure.core$vec invokeStatic core.clj 379]
[clojure.core$vec invoke core.clj 369]
[metabase_enterprise.serialization.load$load_cards invokeStatic load.clj 641]
[metabase_enterprise.serialization.load$load_cards invoke load.clj 624]
[metabase_enterprise.serialization.load$load_cards$fn__82902 invoke load.clj 660]
[metabase_enterprise.serialization.load$make_reload_fn$fn__82931$iter__82932__82936$fn__82937$fn__82938 invoke load.clj 715]
[metabase_enterprise.serialization.load$make_reload_fn$fn__82931$iter__82932__82936$fn__82937 invoke load.clj 714]
[clojure.lang.LazySeq sval LazySeq.java 42]
[clojure.lang.LazySeq seq LazySeq.java 51]
[clojure.lang.RT seq RT.java 535]
[clojure.core$seq__5419 invokeStatic core.clj 139]
[clojure.core$filter$fn__5911 invoke core.clj 2813]
[clojure.lang.LazySeq sval LazySeq.java 42]
[clojure.lang.LazySeq seq LazySeq.java 51]
[clojure.lang.RT seq RT.java 535]
[clojure.core$seq__5419 invokeStatic core.clj 139]
[clojure.core$seq__5419 invoke core.clj 139]
[metabase_enterprise.serialization.load$make_reload_fn invokeStatic load.clj 711]
[metabase_enterprise.serialization.load$make_reload_fn invoke load.clj 709]
[metabase_enterprise.serialization.load$make_reload_fn$fn__82931 invoke load.clj 714]
[metabase_enterprise.serialization.cmd$fn__86713$load__86718$fn__86719 invoke cmd.clj 61]
[metabase_enterprise.serialization.cmd$fn__86713$load__86718 invoke cmd.clj 39]
[clojure.lang.Var invoke Var.java 388]
[metabase.cmd$load invokeStatic cmd.clj 149]
[metabase.cmd$load doInvoke cmd.clj 140]
[clojure.lang.RestFn invoke RestFn.java 423]
[metabase.cmd$load invokeStatic cmd.clj 144]
[metabase.cmd$load invoke cmd.clj 140]
[clojure.lang.AFn applyToHelper AFn.java 154]
[clojure.lang.RestFn applyTo RestFn.java 132]
[clojure.core$apply invokeStatic core.clj 667]
[clojure.core$apply invoke core.clj 662]
[metabase.cmd$run_cmd$fn__83228 invoke cmd.clj 190]
[metabase.cmd$run_cmd invokeStatic cmd.clj 190]
[metabase.cmd$run_cmd invoke cmd.clj 186]
[clojure.lang.Var invoke Var.java 388]
[metabase.core$run_cmd invokeStatic core.clj 141]
[metabase.core$run_cmd invoke core.clj 139]
[metabase.core$_main invokeStatic core.clj 163]
[metabase.core$_main doInvoke core.clj 158]
[clojure.lang.RestFn applyTo RestFn.java 137]
[metabase.core main nil -1]]}
2022-02-21 00:50:32,530 INFO serialization.upsert :: Updating Card "People, Count, Grouped by Source" (ID 1)
2022-02-21 00:50:32,537 INFO serialization.load :: Unresolved references found for cards in collection null; will reload after first pass
"People, Count, Grouped by Source" (inserted as ID 1) missing:
at :dataset_query/:query/:source-table -> /collections/root/collections/a b%27s Personal Collection/cards/People
</details>
**To Reproduce**
1. Have a source and destination Metabase instances
2. Create a question in the source and save it on the personal collection then create a question based on the first question but save it on a shared collection
3. dump on the source and load on destination, see the error
**Expected behavior**
At least capture the error nicely in the logs and maybe also change the comment in the code so the users see what happened. We could also add the behavior of not updating objects that don't have a valid reference.
**Screenshots**
NA
**Information about your Metabase Installation:**
- Your browser and the version: Brave latest
- Your operating system: Ubuntu 21.10
- Your databases: H2
- Metabase version: 1.41.2
- Metabase hosting environment: Docker
- Metabase internal database: H1
**Severity**
Non severe
**Additional context**
NA | non_infrastructure | serialization dumps and loads questions with invalid references describe the bug if you save a question in a personal collection and then use that question as the base for another one which is included in a shared collection then serialization will error with unresolved references found for cards in collection null will reload after first pass people count grouped by source inserted as id missing at dataset query query source table collections root collections a b personal collection cards people as personal collections aren t dumped it will create the questions in the end but empty and just with a nice comment which is dummy query for serialization first pass insert logs warn serialization names error cause value does not match schema card not integer nil data type schema core error value collection card nil error card not integer nil via type clojure lang exceptioninfo message can t resolve collection card in fully qualified name collections root collections a b personal collection cards people data fully qualified name collections root collections a b personal collection cards people resolve name failed true context collection card nil at type clojure lang exceptioninfo message value does not match schema card not integer nil data type schema core error value collection card nil error card not integer nil at trace info serialization upsert updating card people count grouped by source id info serialization load unresolved references found for cards in collection null will reload after first pass people count grouped by source inserted as id missing at dataset query query source table collections root collections a b personal collection cards people to reproduce have a source and destination metabase instances create a question in the source and save it on the personal collection then create a question based on the first question but save it on a shared collection dump on the source and load on destination see the error expected behavior 
at least capture the error nicely in the logs and maybe also change the comment in the code so the users see what happened we could also add the behavior of not updating objects that don t have a valid reference screenshots na information about your metabase installation your browser and the version brave latest your operating system ubuntu your databases metabase version metabase hosting environment docker metabase internal database severity non severe additional context na | 0 |
1,981 | 3,448,094,242 | IssuesEvent | 2015-12-16 05:38:01 | elmsln/elmsln | https://api.github.com/repos/elmsln/elmsln | opened | Network fly out overrides | enhancement future infrastructure Instructor Experience online / CIS Staff Experience | Need to be able to define (without code) what goes in the Network fly out. This would allow for "adding applications" to elmsln that are either system-wide, course-wide or section-wide in scope. This would probably be a series of content type(s) in CIS that provide entity references to the section (much like the 'resources' and 'tools this uses' reference fields) to provide this structure and the order.
Maybe a field collection like...
- tool / link (entity reference).
- grouping category (select or other)
Then the order of them would dictate the order they render. Nicer UIs could be constructed to set this data on the backend. | 1.0 | Network fly out overrides - Need to be able to define (without code) what goes in the Network fly out. This would allow for "adding applications" to elmsln that are either system-wide, course-wide or section-wide in scope. This would probably be a series of content type(s) in CIS that provide entity references to the section (much like the 'resources' and 'tools this uses' reference fields) to provide this structure and the order.
Maybe a field collection like...
- tool / link (entity reference).
- grouping category (select or other)
Then the order of them would dictate the order they render. Nicer UIs could be constructed to set this data on the backend. | infrastructure | network fly out overrides need to be able to define without code what goes in the network fly out this would allow for adding applications to elmsln that are either system wide course wide or section wide in scope this would probably be a series of content type s in cis that provide entity references to the section much like the resources and tools this uses reference fields to provide this structure and the order maybe a field collection like tool link entity reference grouping category select or other then the order of them would dictate the order they render nicer uis could be constructed to set this data on the backend | 1 |
17,650 | 12,495,104,523 | IssuesEvent | 2020-06-01 12:34:40 | libero/reviewer | https://api.github.com/repos/libero/reviewer | opened | Utilize return url functionality of continuum to simplify auth flow | Infrastructure | With the resolution of https://github.com/libero/reviewer/issues/837 we can simplify the authentication workflow and make it possible to have multiple deployment environments using the same continuum-login deployment.
ToDo after discussion with @erezmus.
use ingress to:
- [ ] append return url to the 301 redirect to continuum-login
- [ ] append optional argument to get token as part of returned request
- [ ] return directly to `/authenticate` instead of `/auth` (with optional argument, client no longer needed to process token)
Once successful:
- [ ] remove no longer needed client code
- [ ] move redirect logic out of ingress into continuum-adaptor
- [ ] update gitbook incl diagram | 1.0 | infrastructure | 1
26,512 | 20,173,669,241 | IssuesEvent | 2022-02-10 12:43:27 | Bylothink/do-you-dare | https://api.github.com/repos/Bylothink/do-you-dare | opened | Log the backup procedures | ⬇ low priority ⚙ infrastructure | At the moment, the backup procedures run by `crontab` do not log anywhere.
Evaluate the use of `logger` and try to make these procedures log under the `/var/log` directory. | 1.0 | infrastructure | 1
478,790 | 13,785,513,411 | IssuesEvent | 2020-10-08 23:05:58 | FACE-Amrita-Bengaluru/SLAC-2020 | https://api.github.com/repos/FACE-Amrita-Bengaluru/SLAC-2020 | closed | Update back-end | High Priority dependencies | - [x] Update dependencies
- [x] Purge unwanted content and files
- [x] Update `package.json` | 1.0 | non_infrastructure | 0
520,705 | 15,091,485,069 | IssuesEvent | 2021-02-06 15:41:16 | Zettlr/Zettlr | https://api.github.com/repos/Zettlr/Zettlr | closed | [ENHANCEMENT] Using LanguageTool for spell and grammar checking | enhancement priority:low stale | It would be nice if Zettlr used LanguageTool for spelling and grammar checking.
The problem with a built-in dictionary is that my personal dictionary, which I created in LanguageTool, is not available in Zettlr. So I have to keep my user dictionary multiple times. | 1.0 | non_infrastructure | 0
96,125 | 16,113,231,107 | IssuesEvent | 2021-04-28 01:52:37 | n-devs/Testter | https://api.github.com/repos/n-devs/Testter | opened | CVE-2021-23382 (Medium) detected in postcss-6.0.13.tgz | security vulnerability | ## CVE-2021-23382 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-6.0.13.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-6.0.13.tgz">https://registry.npmjs.org/postcss/-/postcss-6.0.13.tgz</a></p>
<p>Path to dependency file: /Testter/package.json</p>
<p>Path to vulnerable library: Testter/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.16.tgz (Root Library)
- autoprefixer-7.1.6.tgz
- :x: **postcss-6.0.13.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of the postcss package before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: postcss - 8.2.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_infrastructure | 0
193,734 | 15,386,346,058 | IssuesEvent | 2021-03-03 08:04:51 | Doregon/tnpsh-wiki | https://api.github.com/repos/Doregon/tnpsh-wiki | closed | Work on entire "big-stinky-brew/utilities" directory | documentation enhancement no-issue-activity | - [ ] Utilities
- [ ] README.md
- [ ] reActPSN
- [x] README.md
- [x] Activating Games
- [x] Capabilities
- [ ] Converting Stuff
- [x] Rebug Toolbox
- [x] README.md
- [x] CEX to DEX
- [x] OtherOS++
- [ ] HEN Toolbox
- [x] PS3XPAD
- [x] PSNpatch
- [x] SEN Enabler
- [ ] Suite App Document Reader
- [x] XMB Package Downloader | 1.0 | non_infrastructure | 0
22,257 | 15,058,533,035 | IssuesEvent | 2021-02-03 23:39:35 | Spine-project/Spine-Toolbox | https://api.github.com/repos/Spine-project/Spine-Toolbox | opened | Shipping data for tutorial with the app | Data Infrastructure | In GitLab by @manuelma on Nov 6, 2019, 09:58
I want to create a tutorial for case study A5 that new users can follow, but I need some data files. These are plain, small CSV files with hydro data from Sweden. I know one can ship non-code files with setuptools, but I'm not sure what's the best approach.
So, do you think it's a good idea to include these data files? And if yes, how exactly should I (we) proceed? | 1.0 | infrastructure | 1