| Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (stringclasses 1) | created_at (stringlengths 19) | repo (stringlengths 5-112) | repo_url (stringlengths 34-141) | action (stringclasses 3) | title (stringlengths 1-1k) | labels (stringlengths 4-1.38k) | body (stringlengths 1-262k) | index (stringclasses 16) | text_combine (stringlengths 96-262k) | label (stringclasses 2) | text (stringlengths 96-252k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
446,524 | 12,865,765,070 | IssuesEvent | 2020-07-10 01:26:31 | PyTorchLightning/pytorch-lightning | https://api.github.com/repos/PyTorchLightning/pytorch-lightning | closed | `model.test()` can fail for `ddp` because `args` in `evaluation_forward` are malformed | Priority bug / fix good first issue help wanted | ## 🐛 Bug
`model.test()` can fail while training via `dp` because `TrainerEvaluationLoopMixin.evaluation_forward` doesn't handle an edge case.
### To Reproduce
Attempt to `model.test()` any lightning model in `dp` mode (I believe it fails in any of the modes at https://github.com/PyTorchLightning/pytorch-lightning/blob/3a642601e84c3abf1f1b438f9acc932a1f150f7f/pytorch_lightning/trainer/evaluation_loop.py#L420).
_Note that the validation and training steps work well, but test fails._
The bottom of the stack trace isn't super elucidating but the crux of the matter is captured in
```
411 def evaluation_forward(self, model, batch, batch_idx, dataloader_idx, test_mode: bool = False):
412 # make dataloader_idx arg in validation_step optional
413 args = [batch, batch_idx]
414
415 if (test_mode and len(self.test_dataloaders) > 1) \
416 or (not test_mode and len(self.val_dataloaders) > 1):
417 args.append(dataloader_idx)
418
419 # handle DP, DDP forward
420 if self.use_ddp or self.use_dp or self.use_ddp2:
--> 421 output = model(*args)
422 return output
```
At line 421 the code that _will_ run is `output = model(*args[0][:-1])` but other things fail downstream of that hack. Note `args[0]` is the tuple of tensors and the last tensor is the target.
TL;DR: at this point (for test -- again val and train work perfectly) I believe that what we want is something similar to `output = model.test_step(*args)` instead (see later on in `evaluation_forward`, below the above trace).
However, I realized that the model, now a `LightningDataParallel` instance, no longer has the `test_step` that is defined in the original LightningModule, so my understanding of the system for making multi-GPU work is a limiting factor here.
This mock, I thought, would resolve the issue for me, but I then realized that the test_step method no longer existed per the above paragraph:
```
ORIG = pl.trainer.evaluation_loop.TrainerEvaluationLoopMixin.evaluation_forward
def _mock_evaluation_forward(self, model, batch, batch_idx, dataloader_idx, test_mode: bool = False):
    if not test_mode or (not (self.use_ddp or self.use_dp or self.use_ddp2)):
        return ORIG(self, model, batch, batch_idx, dataloader_idx, test_mode)
    args = [batch, batch_idx]
    if len(self.test_dataloaders) > 1:
        args.append(dataloader_idx)
    return model.test_step(*args)
from unittest import mock
@mock.patch('pytorch_lightning.trainer.evaluation_loop.TrainerEvaluationLoopMixin.evaluation_forward', _mock_evaluation_forward)
def train_my_model(): ...
```
### Additional context
Thanks for the great library! I can't precisely determine why train and eval work and then test fails. One thing to note is that the forward method to my model takes several tensors, not just one, which is a possible factor. Everything works perfectly with `dp` turned off. | 1.0 | `model.test()` can fail for `ddp` because `args` in `evaluation_forward` are malformed - ## 🐛 Bug
`model.test()` can fail while training via `dp` because `TrainerEvaluationLoopMixin.evaluation_forward` doesn't handle an edge case.
### To Reproduce
Attempt to `model.test()` any lightning model in `dp` mode (I believe it fails in any of the modes at https://github.com/PyTorchLightning/pytorch-lightning/blob/3a642601e84c3abf1f1b438f9acc932a1f150f7f/pytorch_lightning/trainer/evaluation_loop.py#L420).
_Note that the validation and training steps work well, but test fails._
The bottom of the stack trace isn't super elucidating but the crux of the matter is captured in
```
411 def evaluation_forward(self, model, batch, batch_idx, dataloader_idx, test_mode: bool = False):
412 # make dataloader_idx arg in validation_step optional
413 args = [batch, batch_idx]
414
415 if (test_mode and len(self.test_dataloaders) > 1) \
416 or (not test_mode and len(self.val_dataloaders) > 1):
417 args.append(dataloader_idx)
418
419 # handle DP, DDP forward
420 if self.use_ddp or self.use_dp or self.use_ddp2:
--> 421 output = model(*args)
422 return output
```
At line 421 the code that _will_ run is `output = model(*args[0][:-1])` but other things fail downstream of that hack. Note `args[0]` is the tuple of tensors and the last tensor is the target.
TL;DR: at this point (for test -- again val and train work perfectly) I believe that what we want is something similar to `output = model.test_step(*args)` instead (see later on in `evaluation_forward`, below the above trace).
However, I realized that the model, now a `LightningDataParallel` instance, no longer has the `test_step` that is defined in the original LightningModule, so my understanding of the system for making multi-GPU work is a limiting factor here.
This mock, I thought, would resolve the issue for me, but I then realized that the test_step method no longer existed per the above paragraph:
```
ORIG = pl.trainer.evaluation_loop.TrainerEvaluationLoopMixin.evaluation_forward
def _mock_evaluation_forward(self, model, batch, batch_idx, dataloader_idx, test_mode: bool = False):
    if not test_mode or (not (self.use_ddp or self.use_dp or self.use_ddp2)):
        return ORIG(self, model, batch, batch_idx, dataloader_idx, test_mode)
    args = [batch, batch_idx]
    if len(self.test_dataloaders) > 1:
        args.append(dataloader_idx)
    return model.test_step(*args)
from unittest import mock
@mock.patch('pytorch_lightning.trainer.evaluation_loop.TrainerEvaluationLoopMixin.evaluation_forward', _mock_evaluation_forward)
def train_my_model(): ...
```
### Additional context
Thanks for the great library! I can't precisely determine why train and eval work and then test fails. One thing to note is that the forward method to my model takes several tensors, not just one, which is a possible factor. Everything works perfectly with `dp` turned off. | priority | model test can fail for ddp because args in evaluation forward are malformed 🐛 bug model test can fail while training via dp because trainerevaluationloopmixin evaluation forward doesn t handle an edge case to reproduce attempt to model test any lightning model in dp mode i believe it fails in any of the modes at note that the validation and training steps work well but test fails the bottom of the stack trace isn t super elucidating but the crux of the matter is captured in def evaluation forward self model batch batch idx dataloader idx test mode bool false make dataloader idx arg in validation step optional args if test mode and len self test dataloaders or not test mode and len self val dataloaders args append dataloader idx handle dp ddp forward if self use ddp or self use dp or self use output model args return output at line the code that will run is output model args but other things fail downstream of that hack note args is the tuple of tensors and the last tensor is the target tl dr at this point for test again val and train work perfectly i believe that what we want is something similar to output model test step args instead see later on in evaluation forward below the above trace however i realized that the model now a lightningdataparallel instance no longer has the test step that is defined in the original lightningmodule so my understanding of the system for making multi gpu work is a limiting factor here this mock i thought would resolve the issue for me but i then realized that the test step method no longer existed per the above paragraph orig pl trainer evaluation loop trainerevaluationloopmixin evaluation forward def mock evaluation forward self model batch batch 
idx dataloader idx test mode bool false if not test mode or not self use ddp or self use dp or self use return orig self model batch batch idx dataloader idx test mode output model test step args return output from unittest import mock mock patch pytorch lightning trainer evaluation loop trainerevaluationloopmixin evaluation forward mock evaluation forward def train my model additional context thanks for the great library i can t precisely determine why train and eval work and then test fails one thing to note is that the forward method to my model takes several tensors not just one which is a possible factor everything works perfectly with dp turned off | 1 |
225,751 | 7,494,838,685 | IssuesEvent | 2018-04-07 14:32:41 | Blockrazor/blockrazor | https://api.github.com/repos/Blockrazor/blockrazor | closed | problem: [/currency/x] cannot add exchanges to coins | Paid-contributor Priority | Problem: it's not possible for users to add exchanges to coins on the currency detail page.
Possible solution: Allow users to add an exchange to a currency from the existing list of exchanges or add a new exchange. Ideally this would be a typeahead list with "add" if what the user types isn't found (e.g. how GitHub works with adding branches).
Any solution should also denormalize the data into the currencies collection so that currencies on the home route can be filtered by "is currently listed on an exchange". | 1.0 | problem: [/currency/x] cannot add exchanges to coins - Problem: it's not possible for users to add exchanges to coins on the currency detail page.
Possible solution: Allow users to add an exchange to a currency from the existing list of exchanges or add a new exchange. Ideally this would be a typeahead list with "add" if what the user types isn't found (e.g. how GitHub works with adding branches).
Any solution should also denormalize the data into the currencies collection so that currencies on the home route can be filtered by "is currently listed on an exchange". | priority | problem cannot add exchanges to coins problem it s not possible for users to add exchanges to coins on the currency detail page possible solution allow users to add an exchange to a currency from the existing list of exchanges or add a new exchange ideally this would be a typahead list with add if what the user types isn t found e g how github works with adding branches any solution should also denormalize the data into the currencies collection so that currencies on the home route can be filtered by is currently listed on an exchange | 1 |
106,648 | 4,281,658,232 | IssuesEvent | 2016-07-15 04:42:01 | matuella/javaee-clinic | https://api.github.com/repos/matuella/javaee-clinic | closed | Doctor register won't clear up after saving | High Priority | Probably because it's now being injected as a EJB. Need to investigate why. | 1.0 | Doctor register won't clear up after saving - Probably because it's now being injected as a EJB. Need to investigate why. | priority | doctor register won t clear up after saving probably because it s now being injected as a ejb need to investigate why | 1 |
269,753 | 8,443,001,891 | IssuesEvent | 2018-10-18 14:35:04 | CDCgov/WebMicrobeTrace | https://api.github.com/repos/CDCgov/WebMicrobeTrace | opened | Sequence Validator | enhancement epic low priority | Secure HIV-TRACE is implementing some sort of Sequence Validation. We should too! Here are the checks that we know about:
1. Presentness - Does the sequence exist and is it non-trivial?
2. Distinctness - Is the sequence distinct from the reference sequence?
3. Uniqueness - Is the sequence unique, or do any other sequences match it exactly?
4. Inversion - Is the sequence backwards? | 1.0 | Sequence Validator - Secure HIV-TRACE is implementing some sort of Sequence Validation. We should too! Here are the checks that we know about:
1. Presentness - Does the sequence exist and is it non-trivial?
2. Distinctness - Is the sequence distinct from the reference sequence?
3. Uniqueness - Is the sequence unique, or do any other sequences match it exactly?
4. Inversion - Is the sequence backwards? | priority | sequence validator secure hiv trace is implementing some sort of sequence validation we should too here are the checks that we know about presentness does the sequence exist and is it non trivial distinctness is the sequence distinct from the reference sequence uniqueness is the sequence unique or do any other sequences match it exactly inversion is the sequence backwards | 1 |
269,711 | 23,460,572,898 | IssuesEvent | 2022-08-16 12:48:26 | cobudget/cobudget | https://api.github.com/repos/cobudget/cobudget | closed | [FEATURE] Add "experimental features" bool to orgs | needs testing | We want to give some orgs access to experimental features.
This is toggled by a bool set in the database.
Direct funding is the first such feature. | 1.0 | [FEATURE] Add "experimental features" bool to orgs - We want to give some orgs access to experimental features.
This is toggled by a bool set in the database.
Direct funding is the first such feature. | non_priority | add experimental features bool to orgs we want to give some orgs access to experimental features this is toggled by a bool set in the database direct funding is the first such feature | 0 |
7,283 | 2,891,477,979 | IssuesEvent | 2015-06-15 05:51:55 | brobeson/uml2code | https://api.github.com/repos/brobeson/uml2code | opened | implement unit tests for the UML system | tests | Implement a robust set of unit tests for the UML system. The UML system is implemented by #6. | 1.0 | implement unit tests for the UML system - Implement a robust set of unit tests for the UML system. The UML system is implemented by #6. | non_priority | implement unit tests for the uml system implement a robust set of unit tests for the uml system the uml system is implemented by | 0 |
16,828 | 2,948,319,266 | IssuesEvent | 2015-07-06 01:27:28 | Winetricks/winetricks | https://api.github.com/repos/Winetricks/winetricks | closed | xna31 fails due to "dotnet2 missing" | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. delete .wine
2. ./winetricks.svn870 xna31
3.
err:msi:ITERATE_Actions Execution halted, action L"NotWinFx2Action" returned
1603
------------------------------------------------------
Note: command 'wine msiexec /quiet /i xnafx31_redist.msi' returned status 67.
Aborting.
What is the expected output? What do you see instead?
Expected = no fail of installing xna31
Seen instead = xna31 MS installer complains about missing .net 2 framework and
ends
What version of the product are you using? On what operating system?
On Ubuntu 12.04 with latest Wine PPA[1]
winetricks SVN r870
[1]
https://launchpad.net/~ubuntu-wine/+archive/ppa/+packages
wine1.5 - 1.5.10-0ubuntu1~pulse19+build2
wine-mono0.0.4 - 0.0.4-0ubuntu1~ppa1
wine-gecko1.7 - 1.7-0ubuntu1~ppa1~precise1
etc
```
Original issue reported on code.google.com by `stefan.h...@gmail.com` on 5 Aug 2012 at 8:37
Attachments:
* [consoleoutput fail xna31](https://storage.googleapis.com/google-code-attachments/winetricks/issue-243/comment-0/consoleoutput fail xna31)
| 1.0 | xna31 fails due to "dotnet2 missing" - ```
What steps will reproduce the problem?
1. delete .wine
2. ./winetricks.svn870 xna31
3.
err:msi:ITERATE_Actions Execution halted, action L"NotWinFx2Action" returned
1603
------------------------------------------------------
Note: command 'wine msiexec /quiet /i xnafx31_redist.msi' returned status 67.
Aborting.
What is the expected output? What do you see instead?
Expected = no fail of installing xna31
Seen instead = xna31 MS installer complains about missing .net 2 framework and
ends
What version of the product are you using? On what operating system?
On Ubuntu 12.04 with latest Wine PPA[1]
winetricks SVN r870
[1]
https://launchpad.net/~ubuntu-wine/+archive/ppa/+packages
wine1.5 - 1.5.10-0ubuntu1~pulse19+build2
wine-mono0.0.4 - 0.0.4-0ubuntu1~ppa1
wine-gecko1.7 - 1.7-0ubuntu1~ppa1~precise1
etc
```
Original issue reported on code.google.com by `stefan.h...@gmail.com` on 5 Aug 2012 at 8:37
Attachments:
* [consoleoutput fail xna31](https://storage.googleapis.com/google-code-attachments/winetricks/issue-243/comment-0/consoleoutput fail xna31)
| non_priority | fails due to missing what steps will reproduce the problem delete wine winetricks err msi iterate actions execution halted action l returned note command wine msiexec quiet i redist msi returned status aborting what is the expected output what do you see instead expected no fail of installing seen instead ms installer complains about missing net framework and ends what version of the product are you using on what operating system on ubuntu with latest wine ppa winetricks svn wine wine etc original issue reported on code google com by stefan h gmail com on aug at attachments fail | 0 |
275,498 | 8,576,355,871 | IssuesEvent | 2018-11-12 20:07:14 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | discordapp.com - site is not usable | browser-firefox priority-important | <!-- @browser: Firefox 65.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://discordapp.com/channels/@me
**Browser / Version**: Firefox 65.0
**Operating System**: Linux
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Can't connect to websocket
**Steps to Reproduce**:
Just login to discord webpage and it will hang(loop forever)...
[](https://webcompat.com/uploads/2018/11/5a7905a0-bb55-4719-947f-01c2119fb3eb.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20181108220756</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Directive child-src has been deprecated. Please use directive worker-src to control workers, or directive frame-src to control frames respectively."]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." {file: "https://discordapp.com/channels/@me" line: 1}]', u'[console.log([FAST CONNECT] wss://gateway.discord.gg/?encoding=json&v=6&compress=zlib-stream, encoding: json, version: 6) https://discordapp.com/channels/@me:36:341]', u'[console.log([BUILD INFO] Release Channel: stable, Build Number: 27701, Version Hash: 9b58af88d77832ba987ea20fdaa233ff13436ce9) https://discordapp.com/assets/e723596310c8e06b0a72.js:127:11531]', u'[console.info([GatewayDiscovery], [STICKY] wss://gateway.discord.gg) https://discordapp.com/assets/e723596310c8e06b0a72.js:127:11531]', u'[console.info([GatewaySocket], [CONNECT] wss://gateway.discord.gg, encoding: json, version: 6, compression: zlib-stream) https://discordapp.com/assets/e723596310c8e06b0a72.js:127:11531]', u'[console.warn([GatewaySocket], [WS CLOSED] (false, 0, The connection timed out after 20000 ms - did not receive OP_HELLO in time.) retrying in 1.80 seconds.) https://discordapp.com/assets/e723596310c8e06b0a72.js:127:11531]', u'[console.info([GatewayDiscovery], [STICKY] wss://gateway.discord.gg) https://discordapp.com/assets/e723596310c8e06b0a72.js:127:11531]', u'[console.info([GatewaySocket], [CONNECT] wss://gateway.discord.gg, encoding: json, version: 6, compression: zlib-stream) https://discordapp.com/assets/e723596310c8e06b0a72.js:127:11531]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | discordapp.com - site is not usable - <!-- @browser: Firefox 65.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://discordapp.com/channels/@me
**Browser / Version**: Firefox 65.0
**Operating System**: Linux
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Can't connect to websocket
**Steps to Reproduce**:
Just login to discord webpage and it will hang(loop forever)...
[](https://webcompat.com/uploads/2018/11/5a7905a0-bb55-4719-947f-01c2119fb3eb.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20181108220756</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Directive child-src has been deprecated. Please use directive worker-src to control workers, or directive frame-src to control frames respectively."]', u'[JavaScript Error: "Content Security Policy: The pages settings blocked the loading of a resource at inline (script-src)." {file: "https://discordapp.com/channels/@me" line: 1}]', u'[console.log([FAST CONNECT] wss://gateway.discord.gg/?encoding=json&v=6&compress=zlib-stream, encoding: json, version: 6) https://discordapp.com/channels/@me:36:341]', u'[console.log([BUILD INFO] Release Channel: stable, Build Number: 27701, Version Hash: 9b58af88d77832ba987ea20fdaa233ff13436ce9) https://discordapp.com/assets/e723596310c8e06b0a72.js:127:11531]', u'[console.info([GatewayDiscovery], [STICKY] wss://gateway.discord.gg) https://discordapp.com/assets/e723596310c8e06b0a72.js:127:11531]', u'[console.info([GatewaySocket], [CONNECT] wss://gateway.discord.gg, encoding: json, version: 6, compression: zlib-stream) https://discordapp.com/assets/e723596310c8e06b0a72.js:127:11531]', u'[console.warn([GatewaySocket], [WS CLOSED] (false, 0, The connection timed out after 20000 ms - did not receive OP_HELLO in time.) retrying in 1.80 seconds.) https://discordapp.com/assets/e723596310c8e06b0a72.js:127:11531]', u'[console.info([GatewayDiscovery], [STICKY] wss://gateway.discord.gg) https://discordapp.com/assets/e723596310c8e06b0a72.js:127:11531]', u'[console.info([GatewaySocket], [CONNECT] wss://gateway.discord.gg, encoding: json, version: 6, compression: zlib-stream) https://discordapp.com/assets/e723596310c8e06b0a72.js:127:11531]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | discordapp com site is not usable url browser version firefox operating system linux tested another browser no problem type site is not usable description can t connect to websocket steps to reproduce just login to discord webpage and it will hang loop forever browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen false mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel nightly console messages u u u wss gateway discord gg encoding json v compress zlib stream encoding json version u release channel stable build number version hash u wss gateway discord gg u wss gateway discord gg encoding json version compression zlib stream u false the connection timed out after ms did not receive op hello in time retrying in seconds u wss gateway discord gg u wss gateway discord gg encoding json version compression zlib stream from with ❤️ | 1 |
345,226 | 10,354,581,642 | IssuesEvent | 2019-09-05 14:01:20 | grpc/grpc | https://api.github.com/repos/grpc/grpc | closed | support custom Open SSL library for builds | kind/enhancement lang/c++ priority/P3 | ### What version of gRPC and what language are you using?
1.12
### What operating system (Linux, Windows, …) and version?
Ubuntu 16.04 (WSL)
### What runtime / compiler are you using (e.g. python version or version of gcc)
clang 5.0
### What did you do?
`ccmake -DOPENSSL_CRYPTO_LIBRARY=/mnt/d/OpenSSL/1_0_2h/lib/Linux/x86_64-unknown-linux-gnu -DOPENSSL_INCLUDE_DIR=/mnt/d/OpenSSL/1_0_2h/include/Linux/x86_64-unknown-linux-gnu -DCMAKE_CROSSCOMPILING=1 -DRUN_HAVE_POSIX_REGEX=0 -DRUN_HAVE_GNU_POSIX_REGEX=0 -DRUN_HAVE_STD_REGEX=0 -DRUN_HAVE_STEADY_CLOCK=0 -DHAVE_THREAD_SAFETY_ATTRIBUTES=0 -DOPENSSL_SSL_LIBRARY=OpenSSL CMakeLists.txt`
Not an error per se. I just want to use custom OpenSSL library. Without it, when I try to integrate grpc, I get bunch of duplicated symbols error from boring ssl.
### What did you expect to see?
Build picks up provided SSL library.
### What did you see instead?
Build is still using included boring ssl.
### Anything else we should know about your project / environment?
I'm doing integration of gRPC into Unreal Engine.
| 1.0 | support custom Open SSL library for builds - ### What version of gRPC and what language are you using?
1.12
### What operating system (Linux, Windows, …) and version?
Ubuntu 16.04 (WSL)
### What runtime / compiler are you using (e.g. python version or version of gcc)
clang 5.0
### What did you do?
`ccmake -DOPENSSL_CRYPTO_LIBRARY=/mnt/d/OpenSSL/1_0_2h/lib/Linux/x86_64-unknown-linux-gnu -DOPENSSL_INCLUDE_DIR=/mnt/d/OpenSSL/1_0_2h/include/Linux/x86_64-unknown-linux-gnu -DCMAKE_CROSSCOMPILING=1 -DRUN_HAVE_POSIX_REGEX=0 -DRUN_HAVE_GNU_POSIX_REGEX=0 -DRUN_HAVE_STD_REGEX=0 -DRUN_HAVE_STEADY_CLOCK=0 -DHAVE_THREAD_SAFETY_ATTRIBUTES=0 -DOPENSSL_SSL_LIBRARY=OpenSSL CMakeLists.txt`
Not an error per se. I just want to use custom OpenSSL library. Without it, when I try to integrate grpc, I get bunch of duplicated symbols error from boring ssl.
### What did you expect to see?
Build picks up provided SSL library.
### What did you see instead?
Build is still using included boring ssl.
### Anything else we should know about your project / environment?
I'm doing integration of gRPC into Unreal Engine.
| priority | support custom open ssl library for builds what version of grpc and what language are you using what operating system linux windows … and version ubuntu wsl what runtime compiler are you using e g python version or version of gcc clang what did you do ccmake dopenssl crypto library mnt d openssl lib linux unknown linux gnu dopenssl include dir mnt d openssl include linux unknown linux gnu dcmake crosscompiling drun have posix regex drun have gnu posix regex drun have std regex drun have steady clock dhave thread safety attributes dopenssl ssl library openssl cmakelists txt not an error per se i just want to use custom openssl library without it when i try to integrate grpc i get bunch of duplicated symbols error from boring ssl what did you expect to see build picks up provided ssl library what did you see instead build is still using included boring ssl anything else we should know about your project environment i m doing integration of grpc into unreal engine | 1 |
767,375 | 26,921,456,534 | IssuesEvent | 2023-02-07 10:44:45 | AUBGTheHUB/spa-website-2022 | https://api.github.com/repos/AUBGTheHUB/spa-website-2022 | closed | OnClick function for redirecting to Landing page | frontend medium priority SPA | Create an onClick function in React for redirecting users from 'subpages' (e.g. jobs page, or HackAUBG page) to the Landing page (main page) every time the user clicks the HUB logo/name in the Navbar.
| 1.0 | OnClick function for redirecting to Landing page - Create an onClick function in React for redirecting users from 'subpages' (e.g. jobs page, or HackAUBG page) to the Landing page (main page) every time the user clicks the HUB logo/name in the Navbar.
| priority | onclick function for redirecting to landing page create an onclick function in react for redirecting users from subpages e g jobs page or hackaubg page to the landing page main page every time the user clicks the hub logo name in the navbar | 1 |
128,557 | 10,542,902,024 | IssuesEvent | 2019-10-02 14:03:18 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | web3 prompt/modal appearing for sites that don't have web3 integrated | QA/Test-Plan-Specified QA/Yes bug feature/crypto-wallets | <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
Some websites are triggering the `Crypto Wallets` prompt/modal even though they don't have `web3` integrated.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. launch `0.69.129 Chromium: 77.0.3865.90`
2. visit wnyc.org and you'll receive a `web3` modal/prompt
## Actual result:
<!--Please add screenshots if needed-->

## Expected result:
The `web3` prompt/modal shouldn't be appearing on websites that haven't integrated `web3`
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100% reproducible when using the above STR on a release build.
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
Brave | 0.69.129 Chromium: 77.0.3865.90 (Official Build) (64-bit)
-- | --
Revision | 58c425ba843df2918d9d4b409331972646c393dd-refs/branch-heads/3865@{#830}
OS | macOS Version 10.14.6 (Build 18G95)
Brave | 0.72.60 Chromium: 77.0.3865.90 (Official Build) nightly (64-bit)
-- | --
Revision | 58c425ba843df2918d9d4b409331972646c393dd-refs/branch-heads/3865@{#830}
OS | macOS Version 10.14.6 (Build 18G95)
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? `N/A` (feature not released yet)
- Can you reproduce this issue with the beta channel? `Yes`
- Can you reproduce this issue with the dev channel? `Yes`
- Can you reproduce this issue with the nightly channel? `Yes`
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields? `N/A`
- Does the issue resolve itself when disabling Brave Rewards? `N/A`
- Is the issue reproducible on the latest version of Chrome? `N/A`
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
CCing @bbondy @ryanml @mbacchi @brave/legacy_qa | 1.0 | web3 prompt/modal appearing for sites that don't have web3 integrated - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
Some websites are triggering the `Crypto Wallets` prompt/modal even though they don't have `web3` integrated.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. launch `0.69.129 Chromium: 77.0.3865.90`
2. visit wnyc.org and you'll receive a `web3` modal/prompt
## Actual result:
<!--Please add screenshots if needed-->

## Expected result:
The `web3` prompt/modal shouldn't be appearing on websites that haven't integrated `web3`
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100% reproducible when using the above STR on a release build.
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
Brave | 0.69.129 Chromium: 77.0.3865.90 (Official Build) (64-bit)
-- | --
Revision | 58c425ba843df2918d9d4b409331972646c393dd-refs/branch-heads/3865@{#830}
OS | macOS Version 10.14.6 (Build 18G95)
Brave | 0.72.60 Chromium: 77.0.3865.90 (Official Build) nightly (64-bit)
-- | --
Revision | 58c425ba843df2918d9d4b409331972646c393dd-refs/branch-heads/3865@{#830}
OS | macOS Version 10.14.6 (Build 18G95)
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? `N/A` (feature not released yet)
- Can you reproduce this issue with the beta channel? `Yes`
- Can you reproduce this issue with the dev channel? `Yes`
- Can you reproduce this issue with the nightly channel? `Yes`
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields? `N/A`
- Does the issue resolve itself when disabling Brave Rewards? `N/A`
- Is the issue reproducible on the latest version of Chrome? `N/A`
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
CCing @bbondy @ryanml @mbacchi @brave/legacy_qa | non_priority | prompt modal appearing for sites that don t have integrated have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description some websites are triggering the crypto wallets prompt modal even though they don t have integrated steps to reproduce launch chromium visit wnyc org and you ll receive a modal prompt actual result expected result the prompt modal shouldn t be appearing on websites that haven t integrated reproduces how often reproducible when using the above str on a release build brave version brave version info brave chromium official build bit revision refs branch heads os macos version build brave chromium official build nightly bit revision refs branch heads os macos version build version channel information can you reproduce this issue with the current release n a feature not released yet can you reproduce this issue with the beta channel yes can you reproduce this issue with the dev channel yes can you reproduce this issue with the nightly channel yes other additional information does the issue resolve itself when disabling brave shields n a does the issue resolve itself when disabling brave rewards n a is the issue reproducible on the latest version of chrome n a miscellaneous information ccing bbondy ryanml mbacchi brave legacy qa | 0 |
2,833 | 3,900,901,025 | IssuesEvent | 2016-04-18 08:37:48 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | opened | cache-dir on a builder has a 40 GB cache of dart-lang/SDK - how is this possible | area-infrastructure Type: bug | File this as an issue with chrome-infrastructure-team, to get their input.
This filled up a 60GB drive, and caused the builder to fail.
Here is the result of df -k -d1 in the cache-dir
...
1013228 cache_dir/chromium.googlesource.com-external-github.com-dart--lang-co19
2965224 cache_dir/boringssl.googlesource.com-boringssl
40016532 cache_dir/chromium.googlesource.com-external-github.com-dart--lang-sdk
45689664 cache_dir
| 1.0 | cache-dir on a builder has a 40 GB cache of dart-lang/SDK - how is this possible - File this as an issue with chrome-infrastructure-team, to get their input.
This filled up a 60GB drive, and caused the builder to fail.
Here is the result of df -k -d1 in the cache-dir
...
1013228 cache_dir/chromium.googlesource.com-external-github.com-dart--lang-co19
2965224 cache_dir/boringssl.googlesource.com-boringssl
40016532 cache_dir/chromium.googlesource.com-external-github.com-dart--lang-sdk
45689664 cache_dir
| non_priority | cache dir on a builder has a gb cache of dart lang sdk how is this possible file this as an issue with chrome infrastructure team to get their input this filled up a drive and caused the builder to fail here is the result of df k in the cache dir cache dir chromium googlesource com external github com dart lang cache dir boringssl googlesource com boringssl cache dir chromium googlesource com external github com dart lang sdk cache dir | 0 |
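The per-directory sizes quoted in the record above are depth-1 totals in kilobytes, the kind of listing `du -k -d 1` produces. A small Python sketch, written for illustration only, that builds a similar per-subdirectory size report:

```python
import os

def dir_sizes_kb(root):
    """Total file size, in KB, of each immediate subdirectory of root.
    Roughly what a depth-1 disk-usage listing of a cache dir shows."""
    sizes = {}
    for entry in os.scandir(root):
        if entry.is_dir(follow_symlinks=False):
            total = 0
            for dirpath, _subdirs, filenames in os.walk(entry.path):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    if os.path.isfile(path):
                        total += os.path.getsize(path)
            sizes[entry.name] = total // 1024
    return sizes
```

Note this counts file bytes rather than allocated blocks, so the numbers can differ slightly from `du` output.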
129,946 | 18,152,074,021 | IssuesEvent | 2021-09-26 12:43:13 | anyulled/jbcnconf-react | https://api.github.com/repos/anyulled/jbcnconf-react | closed | CVE-2021-27290 (High) detected in ssri-6.0.1.tgz - autoclosed | security vulnerability | ## CVE-2021-27290 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ssri-6.0.1.tgz</b></p></summary>
<p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz">https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz</a></p>
<p>Path to dependency file: jbcnconf-react/package.json</p>
<p>Path to vulnerable library: jbcnconf-react/node_modules/webpack/node_modules/ssri/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
- webpack-4.44.2.tgz
- terser-webpack-plugin-1.4.5.tgz
- cacache-12.0.4.tgz
- :x: **ssri-6.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/anyulled/jbcnconf-react/commit/520290716d5df64ebc72031adcfee0b067d16c03">520290716d5df64ebc72031adcfee0b067d16c03</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ssri 5.2.2-8.0.0, fixed in 8.0.1, processes SRIs using a regular expression which is vulnerable to a denial of service. Malicious SRIs could take an extremely long time to process, leading to denial of service. This issue only affects consumers using the strict option.
<p>Publish Date: 2021-03-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27290>CVE-2021-27290</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27290">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27290</a></p>
<p>Release Date: 2021-03-12</p>
<p>Fix Resolution: ssri - 6.0.2,8.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-27290 (High) detected in ssri-6.0.1.tgz - autoclosed - ## CVE-2021-27290 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ssri-6.0.1.tgz</b></p></summary>
<p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz">https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz</a></p>
<p>Path to dependency file: jbcnconf-react/package.json</p>
<p>Path to vulnerable library: jbcnconf-react/node_modules/webpack/node_modules/ssri/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
- webpack-4.44.2.tgz
- terser-webpack-plugin-1.4.5.tgz
- cacache-12.0.4.tgz
- :x: **ssri-6.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/anyulled/jbcnconf-react/commit/520290716d5df64ebc72031adcfee0b067d16c03">520290716d5df64ebc72031adcfee0b067d16c03</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ssri 5.2.2-8.0.0, fixed in 8.0.1, processes SRIs using a regular expression which is vulnerable to a denial of service. Malicious SRIs could take an extremely long time to process, leading to denial of service. This issue only affects consumers using the strict option.
<p>Publish Date: 2021-03-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27290>CVE-2021-27290</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27290">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-27290</a></p>
<p>Release Date: 2021-03-12</p>
<p>Fix Resolution: ssri - 6.0.2,8.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in ssri tgz autoclosed cve high severity vulnerability vulnerable library ssri tgz standard subresource integrity library parses serializes generates and verifies integrity metadata according to the sri spec library home page a href path to dependency file jbcnconf react package json path to vulnerable library jbcnconf react node modules webpack node modules ssri package json dependency hierarchy react scripts tgz root library webpack tgz terser webpack plugin tgz cacache tgz x ssri tgz vulnerable library found in head commit a href found in base branch master vulnerability details ssri fixed in processes sris using a regular expression which is vulnerable to a denial of service malicious sris could take an extremely long time to process leading to denial of service this issue only affects consumers using the strict option publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ssri step up your open source security game with whitesource | 0 |
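The vulnerability in the record above is a regular-expression denial of service: crafted SRI strings make the strict-mode pattern backtrack for a very long time. Below is a minimal Python sketch of one standard mitigation, rejecting oversized input before the regex ever runs. The function name, pattern, and length cap are illustrative assumptions, not ssri's actual code or its fixed pattern:

```python
import re

# Hypothetical strict-mode SRI parser that bounds input length before
# applying a regex, a common guard against catastrophic backtracking.
MAX_SRI_LENGTH = 1024
SRI_RE = re.compile(r"^(sha(?:256|384|512))-([A-Za-z0-9+/=]+)$")

def parse_sri(value):
    """Return (algorithm, digest) or None for malformed/oversized input."""
    if len(value) > MAX_SRI_LENGTH:  # reject before the regex runs
        return None
    m = SRI_RE.match(value)
    return m.groups() if m else None
```

The upstream fix (versions 6.0.2 and 8.0.1, per the record) reportedly corrected the pattern itself; an input-length cap is a cheap defense-in-depth guard that limits worst-case work regardless of the pattern.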
754,852 | 26,406,149,537 | IssuesEvent | 2023-01-13 08:12:02 | Northeastern-Electric-Racing/shepherd_bms | https://api.github.com/repos/Northeastern-Electric-Racing/shepherd_bms | closed | Implement Charging Logic | Feature High Priority | This will utilize the "Charger" API from the compute board to say when the car should/should not be charging and if we are allowing charging | 1.0 | Implement Charging Logic - This will utilize the "Charger" API from the compute board to say when the car should/should not be charging and if we are allowing charging | priority | implement charging logic this will utilize the charger api from the compute board to say when the car should should not be charging and if we are allowing charging | 1 |
795,784 | 28,086,121,953 | IssuesEvent | 2023-03-30 09:53:12 | robotframework/robotframework | https://api.github.com/repos/robotframework/robotframework | opened | Support type aliases in formats `'list[int]'` and `'int | float'` in argument conversion | enhancement priority: medium effort: medium | Our argument conversion is typically based on actual types like `int`, `list[int]` and `int | float`, but we also support type aliases as strings like `'int'` or `'integer'`. The motivation for type aliases is to support types returned, for example, by dynamic libraries wrapping code using other languages. Such libraries can simply return type names as strings instead of mapping them to actual Python types.
There are two limitations with type aliases, though:
- It isn't possible to represent types with nested types like `'list[int]'`. Aliases always map to a single concrete type, not to nested types.
- Unions cannot be represented using "Python syntax" like `'int | float'`. It is possible to use a tuple like `('int', 'float')`, though, so this is mainly an inconvenience.
Implementing this enhancement requires two things:
- Support for parsing strings like `'list[int]'` and `'int | float'`. Results could be newish [TypeInfo](https://github.com/robotframework/robotframework/blob/6e6f3a595d800ff43e792c4a7c582e7bf6abc131/src/robot/running/arguments/argumentspec.py#L183) objects that were added to make Libdoc handle nested types properly (#4538). Probably we could add a new `TypeInfo.from_string` class method.
- Enhance type conversion to work with `TypeInfo`. Currently these objects are only used by Libdoc.
In addition to helping with libraries wrapping non-Python code, this enhancement would allow us to create argument converters based on Libdoc spec files. That would probably be useful for external tools such as editor plugins. | 1.0 | Support type aliases in formats `'list[int]'` and `'int | float'` in argument conversion - Our argument conversion is typically based on actual types like `int`, `list[int]` and `int | float`, but we also support type aliases as strings like `'int'` or `'integer'`. The motivation for type aliases is to support types returned, for example, by dynamic libraries wrapping code using other languages. Such libraries can simply return type names as strings instead of mapping them to actual Python types.
There are two limitations with type aliases, though:
- It isn't possible to represent types with nested types like `'list[int]'`. Aliases always map to a single concrete type, not to nested types.
- Unions cannot be represented using "Python syntax" like `'int | float'`. It is possible to use a tuple like `('int', 'float')`, though, so this is mainly an inconvenience.
Implementing this enhancement requires two things:
- Support for parsing strings like `'list[int]'` and `'int | float'`. Results could be newish [TypeInfo](https://github.com/robotframework/robotframework/blob/6e6f3a595d800ff43e792c4a7c582e7bf6abc131/src/robot/running/arguments/argumentspec.py#L183) objects that were added to make Libdoc handle nested types properly (#4538). Probably we could add a new `TypeInfo.from_string` class method.
- Enhance type conversion to work with `TypeInfo`. Currently these objects are only used by Libdoc.
In addition to helping with libraries wrapping non-Python code, this enhancement would allow us to create argument converters based on Libdoc spec files. That would probably be useful for external tools such as editor plugins. | priority | support type aliases in formats list and int float in argument conversion our argument conversion typically uses based on actual types like int list and int float but we also support type aliases as strings like int or integer the motivation for type aliases is to support types returned for example by dynamic libraries wrapping code using other languages such libraries can simply return type names as strings instead of mapping them to actual python types there are two limitations with type aliases though it isn t possible to represent types with nested types like list aliases always map to a single concrete type not to nested types unions cannot be represented using python syntax like int float it is possible to use a tuple like int float though so this is mainly an inconvenience implementing this enhancement requires two things support for parsing strings like list and int float results could be newish objects that were added to make libdoc handle nested types properly probably we could add a new typeinfo from string class method enhance type conversion to work with typeinfo currently these objects are only used by libdoc in addition to helping with libraries wrapping non python code this enhancement would allow us to create argument converters based on libdoc spec files that would probably be useful for external tools such as editor plugins | 1 |
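A rough sketch of the string-parsing half of the deliverable above, turning `'list[int]'` and `'int | float'` into a nested structure. This is illustration only: it is not Robot Framework's actual `TypeInfo.from_string`, and the `('Union', ...)` spelling for alternatives is an invented representation:

```python
def parse_type(spec):
    """Parse a type string such as 'list[int]' or 'int | float' into a
    nested (name, params) tuple. Sketch of the parsing idea only."""
    parts = _split_top_level(spec, "|")
    if len(parts) > 1:                      # top-level union: a | b | c
        return ("Union", [parse_type(p) for p in parts])
    spec = spec.strip()
    if spec.endswith("]") and "[" in spec:  # parameterized: name[args]
        name, _, inner = spec.partition("[")
        args = _split_top_level(inner[:-1], ",")
        return (name.strip(), [parse_type(a) for a in args])
    return (spec, [])                       # plain name: int, str, ...

def _split_top_level(text, sep):
    """Split on sep, ignoring separators nested inside [brackets]."""
    parts, current, depth = [], [], 0
    for ch in text:
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth -= 1
        if ch == sep and depth == 0:
            parts.append("".join(current))
            current = []
        else:
            current.append(ch)
    parts.append("".join(current))
    return parts
```

For example, `parse_type("dict[str, list[int]]")` returns `("dict", [("str", []), ("list", [("int", [])])])`, which is the kind of nested result a type converter could then walk.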
182,511 | 30,858,454,071 | IssuesEvent | 2023-08-02 23:18:36 | AlaskaAirlines/AuroDesignTokens | https://api.github.com/repos/AlaskaAirlines/AuroDesignTokens | closed | Create new Jet Stream tokens/repo | Type: Feature Type: Documentation design tokens | # General Support Request
In order to deliver theming, we need to support a fork of the Auro design tokens to create JetStream Design Tokens.
## Possible Solution
There are two ways we can do this. One, host all the tokens in a single repo and configure a way to distribute two sets of tokens. Two, we simply clone the repo and rename.
## Specification
When comparing tokens between Auro and JetStream, there is a fair number of tokens that are consistent between the two. If the direction includes cloning the repo, we should consider not duplicating the shared tokens.
E.g. we might want to consider that users install both Auro and JetStream; when there is a common token, the Auro token is used, and a JetStream token, where defined, overrides the Auro token.
A worksheet of data from @leeejune - [worksheet](https://alaskaair-my.sharepoint.com/:x:/r/personal/june_lee_alaskaair_com/_layouts/15/Doc.aspx?sourcedoc=%7B752C91D6-4B99-4556-B38B-3C9052C9D793%7D&file=ITS%20%26%20Auro%20Design%20Token%20Mapping.xlsx&wdOrigin=TEAMS-ELECTRON.p2p.bim&ct=1670456350163&action=default&mobileredirect=true&wdExp=TEAMS-CONTROL&wdhostclicktime=1676652837156&web=1&cid=7e8e1379-e3c5-43b9-9062-1d770b7b3fb3)
## Recommendation
@leeejune pair with @jordanjones243 to ensure quality of code reference. There is a lot of data in the spreadsheet.
## Exit criteria
This issue will be considered closed once there is a set of tokens that any engineer can subscribe to that is specifically for JetStream UI. | 1.0 | Create new Jet Stream tokens/repo - # General Support Request
In order to deliver theming, we need to support a fork of the Auro design tokens to create JetStream Design Tokens.
## Possible Solution
There are two ways we can do this. One, host all the tokens in a single repo and configure a way to distribute two sets of tokens. Two, we simply clone the repo and rename.
## Specification
When comparing tokens between Auro and JetStream, there is a fair number of tokens that are consistent between the two. If the direction includes cloning the repo, we should consider not duplicating the shared tokens.
E.g. we might want to consider that users install both Auro and JetStream; when there is a common token, the Auro token is used, and a JetStream token, where defined, overrides the Auro token.
A worksheet of data from @leeejune - [worksheet](https://alaskaair-my.sharepoint.com/:x:/r/personal/june_lee_alaskaair_com/_layouts/15/Doc.aspx?sourcedoc=%7B752C91D6-4B99-4556-B38B-3C9052C9D793%7D&file=ITS%20%26%20Auro%20Design%20Token%20Mapping.xlsx&wdOrigin=TEAMS-ELECTRON.p2p.bim&ct=1670456350163&action=default&mobileredirect=true&wdExp=TEAMS-CONTROL&wdhostclicktime=1676652837156&web=1&cid=7e8e1379-e3c5-43b9-9062-1d770b7b3fb3)
## Recommendation
@leeejune pair with @jordanjones243 to ensure quality of code reference. There is a lot of data in the spreadsheet.
## Exit criteria
This issue will be considered closed once there is a set of tokens that any engineer can subscribe to that is specifically for JetStream UI. | non_priority | create new jet stream tokens repo general support request in order to deliver theming we need to support a fork of the auro design tokens to create jetstream design tokens possible solution there are two ways we can do this one host all the tokens in a single repo and configure a way to distribute two sets of tokens two we simply clone the repo and rename specification when comparing tokens between auro and jetstream there is a fair number of tokens that are consistent between the two if the direction includes cloning the repo we should consider not duplicating the shared tokens e g we might want to consider that users install both auto and jetstream and when there is a common token that will use auro and the jetstream token would override the auro token a worksheet of data from leeejune recomendation leeejune pair with to ensure quality of code reference there is a lot of data in the spreadsheet exit criteria this issue will be considered closed once there is a set of tokens that any engineer can subscribe to that is specifically for jetstream ui | 0 |
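The override behaviour discussed in the token issue above, shared tokens coming from a base set with theme-specific tokens applied on top, amounts to a simple precedence merge. A Python sketch for illustration (the token names and values are invented, not actual Auro or JetStream tokens):

```python
def resolve_tokens(base, overrides):
    """Merge theme overrides over base design tokens.
    Keys present in both sets take the override's value."""
    resolved = dict(base)       # start from the shared (base) tokens
    resolved.update(overrides)  # theme-specific tokens win on conflict
    return resolved

auro = {"color-primary": "#01426a", "space-md": "16px"}
jetstream = {"color-primary": "#6244bb"}  # overrides only what differs
```

With this shape, JetStream only needs to publish the tokens that differ, which matches the suggestion above of not duplicating the shared tokens.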
804,297 | 29,483,177,574 | IssuesEvent | 2023-06-02 07:43:19 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | drivers: can: mcp2515: default thread stack size too small | bug priority: low area: CAN | **Describe the bug**
The default thread stack size of 512 bytes is too small and causes stack overflows on our `frdm_k64f` reference platform.
**To Reproduce**
Steps to reproduce the behavior:
1. `west build -b frdm_k64f tests/drivers/can/api -- -DSHIELD=keyestudio_can_bus_ks0411`
2. `west flash`
3. See error:
```
*** Booting Zephyr OS build v3.4.0-rc1-131-g941e4f000b41 ***
Running TESTSUITE can_classic
===================================================================
START - test_add_filter
PASS - test_add_filter in 0.001 seconds
===================================================================
START - test_filters_added_while_stopped
E: ***** BUS FAULT *****
E: Stacking error
E: Imprecise data bus error
E: NXP MPU error, port 3
E: Mode: Supervisor, Data Address: 0x20000ae8
E: Type: Write, Master: 0, Regions: 0x8100
E: r0/a1: 0x00000800 r1/a2: 0x00000000 r2/a3: 0x00000000
E: r3/a4: 0x00000000 r12/ip: 0xfffffff5 r14/lr: 0x00000000
E: xpsr: 0x20000400
E: Faulting instruction address (r15/pc): 0x00003100
E: >>> ZEPHYR FATAL ERROR 2: Stack overflow on CPU 0
E: Current thread: 0x20002990 (unknown)
E: Halting system
```
**Expected behavior**
Default stack should be sufficient for reference platform.
**Impact**
Stack size must be manually increased.
**Environment (please complete the following information):**
- OS: Linux
- Toolchain Zephyr SDK
- Commit SHA: 941e4f000b4175b7c7d8cb30c56931b172eb68d3
| 1.0 | drivers: can: mcp2515: default thread stack size too small - **Describe the bug**
The default thread stack size of 512 bytes is too small and causes stack overflows on our `frdm_k64f` reference platform.
**To Reproduce**
Steps to reproduce the behavior:
1. `west build -b frdm_k64f tests/drivers/can/api -- -DSHIELD=keyestudio_can_bus_ks0411`
2. `west flash`
3. See error:
```
*** Booting Zephyr OS build v3.4.0-rc1-131-g941e4f000b41 ***
Running TESTSUITE can_classic
===================================================================
START - test_add_filter
PASS - test_add_filter in 0.001 seconds
===================================================================
START - test_filters_added_while_stopped
E: ***** BUS FAULT *****
E: Stacking error
E: Imprecise data bus error
E: NXP MPU error, port 3
E: Mode: Supervisor, Data Address: 0x20000ae8
E: Type: Write, Master: 0, Regions: 0x8100
E: r0/a1: 0x00000800 r1/a2: 0x00000000 r2/a3: 0x00000000
E: r3/a4: 0x00000000 r12/ip: 0xfffffff5 r14/lr: 0x00000000
E: xpsr: 0x20000400
E: Faulting instruction address (r15/pc): 0x00003100
E: >>> ZEPHYR FATAL ERROR 2: Stack overflow on CPU 0
E: Current thread: 0x20002990 (unknown)
E: Halting system
```
**Expected behavior**
Default stack should be sufficient for reference platform.
**Impact**
Stack size must be manually increased.
**Environment (please complete the following information):**
- OS: Linux
- Toolchain Zephyr SDK
- Commit SHA: 941e4f000b4175b7c7d8cb30c56931b172eb68d3
| priority | drivers can default thread stack size too small describe the bug the default thread stack size of bytes is too small and causes stack overflows on our frdm reference platform to reproduce steps to reproduce the behavior west build b frdm tests drivers can api dshield keyestudio can bus west flash see error booting zephyr os build running testsuite can classic start test add filter pass test add filter in seconds start test filters added while stopped e bus fault e stacking error e imprecise data bus error e nxp mpu error port e mode supervisor data address e type write master regions e e ip lr e xpsr e faulting instruction address pc e zephyr fatal error stack overflow on cpu e current thread unknown e halting system expected behavior default stack should be sufficient for reference platform impact stack size must be manually increased environment please complete the following information os linux toolchain zephyr sdk commit sha | 1 |
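Until the driver's default is raised, the usual workaround for the stack overflow above is to enlarge the stack from the application configuration. A sketch of such a `prj.conf` fragment; the Kconfig symbol name below is quoted from memory and should be verified against the Zephyr tree in use:

```
# prj.conf fragment: enlarge the MCP2515 interrupt thread stack
CONFIG_CAN_MCP2515_INT_THREAD_STACK_SIZE=1024
```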
451,076 | 32,007,292,531 | IssuesEvent | 2023-09-21 15:34:02 | frostsg/inf2001-p11-2 | https://api.github.com/repos/frostsg/inf2001-p11-2 | closed | 1.8.1: Team Management | Documentation (Report) Meeting | Goal: Complete Team Management
Success criteria: Completed Team Management under Project Management
Start Date: 21 September 2023
End Date: 23 September 2023
Owner: Yeo Qing You, Kenrick
Status: In Progress | 1.0 | 1.8.1: Team Management - Goal: Complete Team Management
Success criteria: Completed Team Management under Project Management
Start Date: 21 September 2023
End Date: 23 September 2023
Owner: Yeo Qing You, Kenrick
Status: In Progress | non_priority | team management goal complete team management success criteria completed team management under project management start date september end date september owner yeo qing you kenrick status in progress | 0 |
24,828 | 7,571,398,763 | IssuesEvent | 2018-04-23 12:07:49 | junit-team/junit5 | https://api.github.com/repos/junit-team/junit5 | reopened | Test against JDK 9 modules | theme: Java 9+10+11... theme: build type: task | ## Status Quo
JUnit 5 currently builds and runs against JDK 9 early access builds (including Jigsaw builds), but we do not yet have any tests in place that run against user code built with module info.
## Related Pull Requests
- PR #1061 Part I - Introduce `ModuleUtils` with `ClassFinder` SPI
- PR #1057 Part II - Add module `org.junit.platform.commons.jpms`
## Related Issues
- #296
- #600
- #775
## Further Resources
- [Testing against JDK 9 Early Access builds](https://github.com/junit-team/junit5/wiki/Testing-against-JDK-9-Early-Access-builds) -- JUnit 5 wiki page
- [Project Jigsaw: Module System Quick-Start Guide](http://openjdk.java.net/projects/jigsaw/quick-start)
- [State of the Module System](http://openjdk.java.net/projects/jigsaw/spec/sotms/)
- [Big Kill Switch](http://mail.openjdk.java.net/pipermail/jigsaw-dev/2017-March/011763.html) for disabling strong encapsulation on Java 9
- [Jigsaw questions](http://mail.openjdk.java.net/pipermail/jigsaw-dev/2017-July/013086.html) How to compile test classes that are packaged in the same packages as production code?
## Deliverables
- [ ] Integration test JUnit 5 against sample applications that supply JDK 9 module info.
- [ ] Test classpath scanning within named modules.
- [ ] Test classpath scanning within the unnamed module.
- [ ] Test classpath scanning with _exploded_ modules.
- See https://jira.spring.io/browse/SPR-14579
- [ ] Test the use of reflection to instantiate user classes that are loaded from named modules.
- [ ] Test the use of reflection to instantiate user classes that are loaded from the unnamed module.
- [ ] Test support for `clazz.getPackage().getImplementationVersion()` and related methods for retrieving JAR versioning metadata for classes loaded from named modules.
- See discussion in https://github.com/junit-team/junit5/pull/598 for details.
- [ ] Provide java[c] usage example with [`--patch-module module=file(;file)*`](https://docs.oracle.com/javase/9/tools/java.htm#JSWOR624) | 1.0 | Test against JDK 9 modules - ## Status Quo
JUnit 5 currently builds and runs against JDK 9 early access builds (including Jigsaw builds), but we do not yet have any tests in place that run against user code built with module info.
## Related Pull Requests
- PR #1061 Part I - Introduce `ModuleUtils` with `ClassFinder` SPI
- PR #1057 Part II - Add module `org.junit.platform.commons.jpms`
## Related Issues
- #296
- #600
- #775
## Further Resources
- [Testing against JDK 9 Early Access builds](https://github.com/junit-team/junit5/wiki/Testing-against-JDK-9-Early-Access-builds) -- JUnit 5 wiki page
- [Project Jigsaw: Module System Quick-Start Guide](http://openjdk.java.net/projects/jigsaw/quick-start)
- [State of the Module System](http://openjdk.java.net/projects/jigsaw/spec/sotms/)
- [Big Kill Switch](http://mail.openjdk.java.net/pipermail/jigsaw-dev/2017-March/011763.html) for disabling strong encapsulation on Java 9
- [Jigsaw questions](http://mail.openjdk.java.net/pipermail/jigsaw-dev/2017-July/013086.html) How to compile test classes that are packaged in the same packages as production code?
## Deliverables
- [ ] Integration test JUnit 5 against sample applications that supply JDK 9 module info.
- [ ] Test classpath scanning within named modules.
- [ ] Test classpath scanning within the unnamed module.
- [ ] Test classpath scanning with _exploded_ modules.
- See https://jira.spring.io/browse/SPR-14579
- [ ] Test the use of reflection to instantiate user classes that are loaded from named modules.
- [ ] Test the use of reflection to instantiate user classes that are loaded from the unnamed module.
- [ ] Test support for `clazz.getPackage().getImplementationVersion()` and related methods for retrieving JAR versioning metadata for classes loaded from named modules.
- See discussion in https://github.com/junit-team/junit5/pull/598 for details.
- [ ] Provide java[c] usage example with [`--patch-module module=file(;file)*`](https://docs.oracle.com/javase/9/tools/java.htm#JSWOR624) | non_priority | test against jdk modules status quo junit currently builds and runs against jdk early access builds including jigsaw builds but we do not yet have any tests in place that run against user code built with module info related pull requests pr part i introduce moduleutils with classfinder spi pr part ii add module org junit platform commons jpms related issues further resources junit wiki page for disabling strong encapsulation on java how to compile test classes that are packaged in same packages as production code deliverables integration test junit against sample applications that supply jdk module info test classpath scanning within named modules test classpath scanning within the unnamed module test classpath scanning with exploded modules see test the use of reflection to instantiate user classes that are loaded from named modules test the use of reflection to instantiate user classes that are loaded from the unnamed module test support for clazz getpackage getimplementationversion and related methods for retrieving jar versioning metadata for classes loaded from named modules see discussion in for details provide java usage example with | 0 |
639,814 | 20,766,761,473 | IssuesEvent | 2022-03-15 21:29:34 | bottlerocket-os/bottlerocket | https://api.github.com/repos/bottlerocket-os/bottlerocket | closed | enable hardening features for service units | type/enhancement security priority/p1 status/notstarted | **What I'd like:**
`systemd-analyze security` should turn green where possible.
Useful articles:
* [systemd service sandboxing and security hardening 101](https://www.ctrl.blog/entry/systemd-service-hardening.html)
* [Limit the impact of a security intrusion with systemd security directives](https://www.ctrl.blog/entry/systemd-opensmtpd-hardening.html)
**Any alternatives you've considered:**
N/A
| 1.0 | enable hardening features for service units - **What I'd like:**
`systemd-analyze security` should turn green where possible.
Useful articles:
* [systemd service sandboxing and security hardening 101](https://www.ctrl.blog/entry/systemd-service-hardening.html)
* [Limit the impact of a security intrusion with systemd security directives](https://www.ctrl.blog/entry/systemd-opensmtpd-hardening.html)
**Any alternatives you've considered:**
N/A
| priority | enable hardening features for service units what i d like systemd analyze security should turn green where possible useful articles any alternatives you ve considered n a | 1 |
13,738 | 3,355,437,129 | IssuesEvent | 2015-11-18 16:25:47 | quantopian/zipline | https://api.github.com/repos/quantopian/zipline | opened | Create a minute-rate futures payout test | Needs Tests | Current test coverage of futures payouts are all at the daily frequency. | 1.0 | Create a minute-rate futures payout test - Current test coverage of futures payouts are all at the daily frequency. | non_priority | create a minute rate futures payout test current test coverage of futures payouts are all at the daily frequency | 0 |
435,711 | 12,539,816,958 | IssuesEvent | 2020-06-05 09:16:17 | scality/metalk8s | https://api.github.com/repos/scality/metalk8s | opened | Flaky during bootstrap restore | complexity:easy kind:bug priority:high topic:flakiness | <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately to moonshot-platform@scality.com
-->
**Component**:
'salt', 'restore'
<!-- E.g. 'salt', 'containers', 'kubernetes', 'build', 'tests'... -->
**What happened**:
During bootstrap restore, we are sometimes unable to mark the new bootstrap node, because it does not exist yet (kubelet is not registered yet)
```
----------
ID: Mark control plane node
Function: metalk8s_kubernetes.object_updated
Name: bootstrap
Result: False
Comment: An exception occurred in this state: Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/salt/state.py", line 1981, in call
**cdata['kwargs'])
File "/usr/lib/python2.7/site-packages/salt/loader.py", line 1977, in wrapper
return f(*args, **kwargs)
File "/var/cache/salt/master/extmods/states/metalk8s_kubernetes.py", line 205, in object_updated
**kwargs
File "/var/cache/salt/master/extmods/modules/metalk8s_kubernetes.py", line 214, in method
return _handle_error(exc, action)
File "/var/cache/salt/master/extmods/modules/metalk8s_kubernetes.py", line 72, in _handle_error
raise CommandExecutionError(base_msg + str(exception).decode('utf-8'))
CommandExecutionError: Failed to update object: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'date': 'Fri, 05 Jun 2020 09:09:21 GMT', 'content-length': '188', 'content-type': 'application/json', 'cache-control': 'no-cache, private'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"bootstrap\" not found","reason":"NotFound","details":{"name":"bootstrap","kind":"nodes"},"code":404}
Started: 09:09:21.731846
Duration: 40.144 ms
Changes:
```
**What was expected**:
No fail
**Steps to reproduce**
Flaky
**Resolution proposal** (optional):
Add a retry on this specific state | 1.0 | Flaky during bootstrap restore - <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately to moonshot-platform@scality.com
-->
**Component**:
'salt', 'restore'
<!-- E.g. 'salt', 'containers', 'kubernetes', 'build', 'tests'... -->
**What happened**:
During bootstrap restore, we are sometimes unable to mark the new bootstrap node, because it does not exist yet (kubelet is not registered yet)
```
----------
ID: Mark control plane node
Function: metalk8s_kubernetes.object_updated
Name: bootstrap
Result: False
Comment: An exception occurred in this state: Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/salt/state.py", line 1981, in call
**cdata['kwargs'])
File "/usr/lib/python2.7/site-packages/salt/loader.py", line 1977, in wrapper
return f(*args, **kwargs)
File "/var/cache/salt/master/extmods/states/metalk8s_kubernetes.py", line 205, in object_updated
**kwargs
File "/var/cache/salt/master/extmods/modules/metalk8s_kubernetes.py", line 214, in method
return _handle_error(exc, action)
File "/var/cache/salt/master/extmods/modules/metalk8s_kubernetes.py", line 72, in _handle_error
raise CommandExecutionError(base_msg + str(exception).decode('utf-8'))
CommandExecutionError: Failed to update object: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'date': 'Fri, 05 Jun 2020 09:09:21 GMT', 'content-length': '188', 'content-type': 'application/json', 'cache-control': 'no-cache, private'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"bootstrap\" not found","reason":"NotFound","details":{"name":"bootstrap","kind":"nodes"},"code":404}
Started: 09:09:21.731846
Duration: 40.144 ms
Changes:
```
**What was expected**:
No fail
**Steps to reproduce**
Flaky
**Resolution proposal** (optional):
Add a retry on this specific state | priority | flaky during bootstrap restore please use this template while reporting a bug and provide as much info as possible not doing so may result in your bug not being addressed in a timely manner thanks if the matter is security related please disclose it privately to moonshot platform scality com component salt restore what happened during bootstrap restore some time we are not able to mark the new bootstrap node as this one does not exist as kubelet not registered yet id mark control plane node function kubernetes object updated name bootstrap result false comment an exception occurred in this state traceback most recent call last file usr lib site packages salt state py line in call cdata file usr lib site packages salt loader py line in wrapper return f args kwargs file var cache salt master extmods states kubernetes py line in object updated kwargs file var cache salt master extmods modules kubernetes py line in method return handle error exc action file var cache salt master extmods modules kubernetes py line in handle error raise commandexecutionerror base msg str exception decode utf commandexecutionerror failed to update object reason not found http response headers httpheaderdict date fri jun gmt content length content type application json cache control no cache private http response body kind status apiversion metadata status failure message nodes bootstrap not found reason notfound details name bootstrap kind nodes code started duration ms changes what was expected no fail steps to reproduce flaky resolution proposal optional add a retry on this specific state | 1 |
313,253 | 23,465,212,967 | IssuesEvent | 2022-08-16 16:08:32 | vitejs/vite | https://api.github.com/repos/vitejs/vite | closed | [v3] Document type inference with import.meta.glob | documentation | ### Describe the bug
When using `import.meta.glob` with vite v3, no generic type is provided anymore. The [JS Doc provides you with this information](https://github.com/vitejs/vite/blob/main/packages/vite/types/importGlob.d.ts#L41), but it would have helped me to have it in the [official docs](https://vitejs.dev/guide/features.html#glob-import) easily available.
Also, is this not a breaking change for TypeScript users, strictly speaking?
Thanks for a great project.
### Reproduction
https://stackblitz.com/edit/vitejs-vite-lgkx3h?file=src/main.ts&view=editor
### System Info
```shell
System:
OS: Linux 5.18 Arch Linux
Memory: 12.14 GB / 23.23 GB
Container: Yes
Shell: 5.9 - /bin/zsh
Binaries:
Node: 16.15.1 - ~/.nvm/versions/node/v16.15.1/bin/node
Yarn: 1.22.15 - ~/.nvm/versions/node/v16.15.1/bin/yarn
npm: 8.11.0 - ~/.nvm/versions/node/v16.15.1/bin/npm
Browsers:
Chromium: 104.0.5112.79
Firefox: 103.0.1
npmPackages:
vite: ^3.0.4 => 3.0.4
```
### Used Package Manager
yarn
### Logs
```
src/main.ts:4:15 - error TS2571: Object is of type 'unknown'.
4 const error = (await moduleImports[0]()).default;
```
### Validations
- [X] Follow our [Code of Conduct](https://github.com/vitejs/vite/blob/main/CODE_OF_CONDUCT.md)
- [X] Read the [Contributing Guidelines](https://github.com/vitejs/vite/blob/main/CONTRIBUTING.md).
- [X] Read the [docs](https://vitejs.dev/guide).
- [X] Check that there isn't [already an issue](https://github.com/vitejs/vite/issues) that reports the same bug to avoid creating a duplicate.
- [X] Make sure this is a Vite issue and not a framework-specific issue. For example, if it's a Vue SFC related bug, it should likely be reported to [vuejs/core](https://github.com/vuejs/core) instead.
- [X] Check that this is a concrete bug. For Q&A open a [GitHub Discussion](https://github.com/vitejs/vite/discussions) or join our [Discord Chat Server](https://chat.vitejs.dev/).
- [X] The provided reproduction is a [minimal reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) of the bug. | 1.0 | [v3] Document type inference with import.meta.glob - ### Describe the bug
When using `import.meta.glob` with vite v3, no generic type is provided anymore. The [JS Doc provides you with this information](https://github.com/vitejs/vite/blob/main/packages/vite/types/importGlob.d.ts#L41), but it would have helped me to have it in the [official docs](https://vitejs.dev/guide/features.html#glob-import) easily available.
Also, is this not a breaking change for TypeScript users, strictly speaking?
Thanks for a great project.
### Reproduction
https://stackblitz.com/edit/vitejs-vite-lgkx3h?file=src/main.ts&view=editor
### System Info
```shell
System:
OS: Linux 5.18 Arch Linux
Memory: 12.14 GB / 23.23 GB
Container: Yes
Shell: 5.9 - /bin/zsh
Binaries:
Node: 16.15.1 - ~/.nvm/versions/node/v16.15.1/bin/node
Yarn: 1.22.15 - ~/.nvm/versions/node/v16.15.1/bin/yarn
npm: 8.11.0 - ~/.nvm/versions/node/v16.15.1/bin/npm
Browsers:
Chromium: 104.0.5112.79
Firefox: 103.0.1
npmPackages:
vite: ^3.0.4 => 3.0.4
```
### Used Package Manager
yarn
### Logs
```
src/main.ts:4:15 - error TS2571: Object is of type 'unknown'.
4 const error = (await moduleImports[0]()).default;
```
### Validations
- [X] Follow our [Code of Conduct](https://github.com/vitejs/vite/blob/main/CODE_OF_CONDUCT.md)
- [X] Read the [Contributing Guidelines](https://github.com/vitejs/vite/blob/main/CONTRIBUTING.md).
- [X] Read the [docs](https://vitejs.dev/guide).
- [X] Check that there isn't [already an issue](https://github.com/vitejs/vite/issues) that reports the same bug to avoid creating a duplicate.
- [X] Make sure this is a Vite issue and not a framework-specific issue. For example, if it's a Vue SFC related bug, it should likely be reported to [vuejs/core](https://github.com/vuejs/core) instead.
- [X] Check that this is a concrete bug. For Q&A open a [GitHub Discussion](https://github.com/vitejs/vite/discussions) or join our [Discord Chat Server](https://chat.vitejs.dev/).
- [X] The provided reproduction is a [minimal reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) of the bug. | non_priority | document type inference with import meta glob describe the bug when using import meta glob with vite no generic type is provided anymore the but it would have helped me to have it in the easily available also is this not a breaking change for typescript users strictly speaking thanks for a great project reproduction system info shell system os linux arch linux memory gb gb container yes shell bin zsh binaries node nvm versions node bin node yarn nvm versions node bin yarn npm nvm versions node bin npm browsers chromium firefox npmpackages vite used package manager yarn logs src main ts error object is of type unknown const error await moduleimports default validations follow our read the read the check that there isn t that reports the same bug to avoid creating a duplicate make sure this is a vite issue and not a framework specific issue for example if it s a vue sfc related bug it should likely be reported to instead check that this is a concrete bug for q a open a or join our the provided reproduction is a of the bug | 0 |
391,509 | 11,574,919,728 | IssuesEvent | 2020-02-21 08:38:14 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Configuring password recovery with reCaptcha for a tenant is not working | Affected/5.10.0-Beta2 Priority/High Severity/Blocker | **Steps to reproduce:**
Follow the instructions in [here](https://is.docs.wso2.com/en/next/learn/configuring-recaptcha-for-password-recovery/#configuring-password-recovery-with-recaptcha-for-a-tenant)
Recaptcha is not shown.
| 1.0 | Configuring password recovery with reCaptcha for a tenant is not working - **Steps to reproduce:**
Follow the instructions in [here](https://is.docs.wso2.com/en/next/learn/configuring-recaptcha-for-password-recovery/#configuring-password-recovery-with-recaptcha-for-a-tenant)
Recaptcha is not shown.
| priority | configuring password recovery with recaptcha for a tenant for not working steps to reproduce follow the instructions in recaptcha is not shown | 1 |
801,999 | 28,565,012,150 | IssuesEvent | 2023-04-21 00:45:40 | microsoft/rushstack | https://api.github.com/repos/microsoft/rushstack | closed | [rush] >O(n^2) performance in `rush version` | repro confirmed priority | ## Summary
When running `rush version --bump` in a monorepo with 877 projects, the `semver.satisfies` check in `PublishUtilties._updateDownstreamDependency` was invoked 80450952 times.
## Repro steps
In a large monorepo with a few hundred pending change files, run `rush version --bump`.
## Details
From inspection, the version bumping algorithm performs the following steps in a loop:
1) For each package with changes, update the consuming package's `dependencies`, `devDependencies`, and `peerDependencies` fields. Record as a change on said consuming package and recurse to its consumers. For some reason the use of `workspace:*` in a dependency field automatically recurses even if the newly added change does not alter the current package (e.g. because the same significance of change is already present).
1) For each package with changes, if it is part of a lockstep policy, apply the lockstep version change.
At no point in this process are the changes memoized. I propose the following alternative process for applying changes:
1) Read all change JSON files and group by package name
1) Filter out any groups that apply to non-existent projects (and optionally delete the change files for said non-existent projects, maybe behind a flag)
1) Build an augmented project graph in which any lockstepped version policy replaces all projects that are part of the policy with a single composite node which corresponds to the policy and to which all change calculations are applied
1) For each node in the augmented graph, perform a memoized depth-first search to determine the final changeType for the node
1) Apply all changes.
## Standard questions
Please answer these questions to help us investigate your issue more quickly:
| Question | Answer |
| -------- | -------- |
| `@microsoft/rush` globally installed version? | 5.97.0 |
| `rushVersion` from rush.json? | 5.97.0 |
| `useWorkspaces` from rush.json? | true|
| Operating system? | Linux |
| Would you consider contributing a PR? | Yes |
| Node.js version (`node -v`)? | 16.19.1 |
| 1.0 | [rush] >O(n^2) performance in `rush version` - ## Summary
When running `rush version --bump` in a monorepo with 877 projects, the `semver.satisfies` check in `PublishUtilties._updateDownstreamDependency` was invoked 80450952 times.
## Repro steps
In a large monorepo with a few hundred pending change files, run `rush version --bump`.
## Details
From inspection, the version bumping algorithm performs the following steps in a loop:
1) For each package with changes, update the consuming package's `dependencies`, `devDependencies`, and `peerDependencies` fields. Record as a change on said consuming package and recurse to its consumers. For some reason the use of `workspace:*` in a dependency field automatically recurses even if the newly added change does not alter the current package (e.g. because the same significance of change is already present).
1) For each package with changes, if it is part of a lockstep policy, apply the lockstep version change.
At no point in this process are the changes memoized. I propose the following alternative process for applying changes:
1) Read all change JSON files and group by package name
1) Filter out any groups that apply to non-existent projects (and optionally delete the change files for said non-existent projects, maybe behind a flag)
1) Build an augmented project graph in which any lockstepped version policy replaces all projects that are part of the policy with a single composite node which corresponds to the policy and to which all change calculations are applied
1) For each node in the augmented graph, perform a memoized depth-first search to determine the final changeType for the node
1) Apply all changes.
## Standard questions
Please answer these questions to help us investigate your issue more quickly:
| Question | Answer |
| -------- | -------- |
| `@microsoft/rush` globally installed version? | 5.97.0 |
| `rushVersion` from rush.json? | 5.97.0 |
| `useWorkspaces` from rush.json? | true|
| Operating system? | Linux |
| Would you consider contributing a PR? | Yes |
| Node.js version (`node -v`)? | 16.19.1 |
| priority | o n performance in rush version summary when running rush version bump in a monorepo with projects the semver satisfies check in publishutilties updatedownstreamdependency was invoked times repro steps in a large monorepo with a few hundred pending change files run rush version bump details from inspection the version bumping algorithm performs the following steps in a loop for each package with changes update the consuming package s dependencies devdependencies and peerdependencies fields record as a change on said consuming package and recurse to its consumers for some reason the use of workspace in a dependency field automatically recurses even if the newly added change does not alter the current package e g because the same significance of change is already present for each package with changes if it is part of a lockstep policy apply the lockstep version change at no point in this process are the changes memoized i propose the following alternative process for applying changes read all change json files and group by package name filter out any groups that apply to non existent projects and optionally delete the change files for said non existent projects maybe behind a flag build an augmented project graph in which any lockstepped version policy replaces all projects that are part of the policy with a single composite node which corresponds to the policy and to which all change calculations are applied for each node in the augmented graph perform a memoized depth first search to determine the final changetype for the node apply all changes standard questions please answer these questions to help us investigate your issue more quickly question answer microsoft rush globally installed version rushversion from rush json useworkspaces from rush json true operating system linux would you consider contributing a pr yes node js version node v | 1 |
322,943 | 9,833,845,905 | IssuesEvent | 2019-06-17 08:15:05 | input-output-hk/jormungandr | https://api.github.com/repos/input-output-hk/jormungandr | closed | Allow for better logging strategy | Priority - Low subsys-logging | In the event of a large number of log messages being emitted, the user may see this kind of log:
```
Jun 13 17:27:39.749 ERRO slog-async: logger dropped messages due to channel overflow, count: 97, sub_task: Leader Task, task: leadership
```
We can configure the [`Async`](https://crates.io/crates/slog_async) to have a better strategy for managing the logs. In particular, we can allow the user to set a custom strategy. | 1.0 | Allow for better logging strategy - In the event of a large number of log messages being emitted, the user may see this kind of log:
```
Jun 13 17:27:39.749 ERRO slog-async: logger dropped messages due to channel overflow, count: 97, sub_task: Leader Task, task: leadership
```
We can configure the [`Async`](https://crates.io/crates/slog_async) to have a better strategy for managing the logs. In particular, we can allow the user to set a custom strategy. | priority | allow for better logging strategy in the event of a large amount of logs being emitted the user may see this kind of logs jun erro slog async logger dropped messages due to channel overflow count sub task leader task task leadership we can configure the to have a better strategy to manage the logs especially we can allow the user to set a custom strategy | 1 |
161,672 | 13,865,481,375 | IssuesEvent | 2020-10-16 04:27:38 | pyconll/pyconll | https://api.github.com/repos/pyconll/pyconll | closed | Improve documentation via analytics and keeping module information up to date | documentation | Documentation in general for the library is relatively high quality but the next release should focus specifically on this.
Some items to improve.
* `to_tree` documentation improvement with a better description of what the tree structure is and what order the children are in relative to the sentence if possible.
* Improve order of main pages on homepage for documentation to guide users to the proper starting point. Many people go to the conll page which is not the best entry point.
* Improve main page to have more information about using the library
* Call out supported UD formats more explicitly
* Need more links to modules in "Getting Started" page.
* Reference gitter in README and use as a way to ping me or other people familiar with the project
* Others as I think of them... | 1.0 | Improve documentation via analytics and keeping module information up to date - Documentation in general for the library is relatively high quality but the next release should focus specifically on this.
Some items to improve.
* `to_tree` documentation improvement with a better description of what the tree structure is and what order the children are in relative to the sentence if possible.
* Improve order of main pages on homepage for documentation to guide users to the proper starting point. Many people go to the conll page which is not the best entry point.
* Improve main page to have more information about using the library
* Call out supported UD formats more explicitly
* Need more links to modules in "Getting Started" page.
* Reference gitter in README and use as a way to ping me or other people familiar with the project
* Others as I think of them... | non_priority | improve documentation via analytics and keeping module information up to date documentation in general for the library is relatively high quality but the next release should focus specifically on this some items to improve to tree documentation improvement with a better description of what the tree structure is and what order the children are in relative to the sentence if possible improve order of main pages on homepage for documentation to guide users to the proper starting point many people go to the conll page which is not the best entry point improve main page to have more information about using the library call out supported ud formats more explicitly need more links to modules in getting started page reference gitter in readme and use as a way to ping me or other people familiar with the project others as i think of them | 0 |
515,485 | 14,964,029,129 | IssuesEvent | 2021-01-27 11:22:56 | woocommerce/woocommerce-gutenberg-products-block | https://api.github.com/repos/woocommerce/woocommerce-gutenberg-products-block | closed | Hide all filter blocks from Block Widget Editor | priority: high type: enhancement ◼️ block: active product filters ◼️ block: filter products by attribute ◼️ block: filter products by price 🔹 block-type: filter blocks | Currently the filter blocks will not have any impact on the default shop page in WooCommerce core. They also do not have feature parity with the filter widgets that do impact the shop page. To prevent user confusion, for the short term we need to hide the filter blocks from the Block Widget editor (and they'll only be available for use in block based themes or post_content).
This includes:
- Filter Products by Price
- Filter Products by Attribute
- Active Product Filters
Similar to #3726, if the functionality for hiding blocks just in the block widget editor is not available in Gutenberg, then we will need to work with the team to add that functionality. | 1.0 | Hide all filter blocks from Block Widget Editor - Currently the filter blocks will not have any impact on the default shop page in WooCommerce core. They also do not have feature parity with the filter widgets that do impact the shop page. To prevent user confusion, for the short term we need to hide the filter blocks from the Block Widget editor (and they'll only be available for use in block based themes or post_content).
This includes:
- Filter Products by Price
- Filter Products by Attribute
- Active Product Filters
Similar to #3726, if the functionality for hiding blocks just in the block widget editor is not available in Gutenberg, then we will need to work with the team to add that functionality. | priority | hide all filter blocks from block widget editor currently the filter blocks will not have any impact on the default shop page in woocommerce core they also do not have feature parity with the filter widgets that do impact the shop page to prevent user confusion for the short term we need to hide the filter blocks from the block widget editor and they ll only be available for use in block based themes or post content this includes filter products by price filter products by attribute active product filters similar to if the functionality for hiding blocks just in the block widget editor is not available in gutenberg then we will need to work with the team to add that functionality | 1 |
122,497 | 10,225,272,334 | IssuesEvent | 2019-08-16 14:46:49 | ValveSoftware/steam-for-linux | https://api.github.com/repos/ValveSoftware/steam-for-linux | closed | Steam Background Viewer Problem | Need Retest reviewed | Your system information
- Steam client version:
Built: Jul 8 2016 at 21:44:35
Steam API: v017
Steam Package Versions: 1468023329
- Distribution (e.g. Ubuntu): Ubuntu 14.04
- Opted into Steam client beta?: No
- Have you checked for system updates?: I check it daily, it is up to date
#### Please describe your issue in as much detail as possible:
I expected to view a background in full size. Plus, links don't work, which might be related. When I click a link, or a background in the Inventory, it does nothing. Before, the links were opened in a web browser; mine is Google Chrome. I tried switching the default browser to Mozilla Firefox, to no avail. The steps to reproduce are below. I would like to see this work as it is intended. In the browser, I can use it normally, but that's not the client that I'm used to.
#### Steps for reproducing this issue:
1. Go to "Ownname"->Inventory, on the top, with huge capitals.
2. Scroll to a background in your inventory, using the arrow keys on the bottom of the inventory screen, if you have many items. (optional, might be on the first page)
3. Click on the background.
4. On the right, you will see a grey View Full Size icon/link.
5. Click it, and, if the bug is on other computers with Ubuntu running as well, it will do nothing.
Alternative:
6. Click a link anywhere that is underlined or bold and leads to an external site. It does nothing. I know I shouldn't report more than one bug in a single issue, but these two might be related.

| 1.0 | Steam Background Viewer Problem - Your system information
- Steam client version:
Built: Jul 8 2016 at 21:44:35
Steam API: v017
Steam Package Versions: 1468023329
- Distribution (e.g. Ubuntu): Ubuntu 14.04
- Opted into Steam client beta?: No
- Have you checked for system updates?: I check it daily, it is up to date
#### Please describe your issue in as much detail as possible:
I expected to view a background in full size. Plus, links don't work, which might be related. When I click a link, or a background in the Inventory, it does nothing. Before, the links were opened in a web browser; mine is Google Chrome. I tried switching the default browser to Mozilla Firefox, to no avail. The steps to reproduce are below. I would like to see this work as it is intended. In the browser, I can use it normally, but that's not the client that I'm used to.
#### Steps for reproducing this issue:
1. Go to "Ownname"->Inventory, on the top, with huge capitals.
2. Scroll to a background in your inventory, using the arrow keys on the bottom of the inventory screen, if you have many items. (optional, might be on the first page)
3. Click on the background.
4. On the right, you will see a grey View Full Size icon/link.
5. Click it, and, if the bug is on other computers with Ubuntu running as well, it will do nothing.
Alternative:
6. Click a link anywhere that is underlined or bold and leads to an external site. It does nothing. I know I shouldn't report more than one bug in a single issue, but these two might be related.

| non_priority | steam background viewer problem your system information steam client version built jul at steam api steam package versions distribution e g ubuntu ubuntu opted into steam client beta no have you checked for system updates i check it daily it is up to date please describe your issue in as much detail as possible i expected to view a background in full size plus links doesn t work which might be related when i click a link or in inventory a background it does nothing before the links were opened in a web browser mine is google chrome i tried switching the default browser to mozilla firefox to no avail the steps to reproduce are below i would like to see this work as it is intended in the browser i can use it normally but that s not my client that i m used to steps for reproducing this issue go to ownname inventory on the top with huge capitals scroll to a background in your inventory using the arrow keys on the bottom of the inventory screen if you have many items optional might be on the first page click on the background on the right you will see a grey view full size icon link click it and if the bug is on other computers with ubuntu running as well it will do nothing alternative click a link anywhere that is underlined or bold and leads to an external site it does nothing i know i shouldn t write more bugs in one but these two might be related | 0 |
9,441 | 2,615,150,267 | IssuesEvent | 2015-03-01 06:27:33 | chrsmith/reaver-wps | https://api.github.com/repos/chrsmith/reaver-wps | opened | iwlwifi driver : WPS transaction failed (code: 0x4) | auto-migrated Priority-Triage Type-Defect | ```
i didn't find information for my card with this driver,
i am on fedora 16 64 bit, kernel 3.2.1, driver iwlwifi for Intel 6300N
in previous kernels the old driver iwlagn was used with this card, not anymore
and i have yet to see other reports with this driver.
injection works, tested with aircrack-ng suite.
My problem is this error message turning in a loop:
[+] Trying pin 12345670
[+] Sending EAPOL START request
[+] Sending identity response
[+] Received M1 message
[+] Sending M2 message
[+] Received WSC NACK
[+] Sending WSC NACK
[!] WPS transaction failed (code: 0x4), re-trying last pin
If you have advice or something i can do to help further debugging please let me
know
```
Original issue reported on code.google.com by `sheepdes...@gmail.com` on 17 Jan 2012 at 10:53 | 1.0 | iwlwifi driver : WPS transaction failed (code: 0x4) - ```
i didn't find information for my card with this driver,
i am on fedora 16 64 bit, kernel 3.2.1, driver iwlwifi for Intel 6300N
in previous kernels the old driver iwlagn was used with this card, not anymore
and i have yet to see other reports with this driver.
injection works, tested with aircrack-ng suite.
My problem is this error message turning in a loop:
[+] Trying pin 12345670
[+] Sending EAPOL START request
[+] Sending identity response
[+] Received M1 message
[+] Sending M2 message
[+] Received WSC NACK
[+] Sending WSC NACK
[!] WPS transaction failed (code: 0x4), re-trying last pin
If you have advice or something i can do to help further debugging please let me
know
```
Original issue reported on code.google.com by `sheepdes...@gmail.com` on 17 Jan 2012 at 10:53 | non_priority | iwlwifi driver wps transaction failed code i didn t find information for my card with this driver i am on fedora bit kernel driver iwlwifi for intel in previous kernels the old driver iwlagn was used with this card not anymore and i have yet to see oher reports with this driver injection works tested with aircrack ng suite my probem is this error message turning in loop trying pin sending eapol start request sending identity response received message sending message received wsc nack sending wsc nack wps transaction failed code re trying last pin if you have advice or something i can do to help further debuging please let me know original issue reported on code google com by sheepdes gmail com on jan at | 0 |
319,522 | 9,745,383,883 | IssuesEvent | 2019-06-03 09:29:47 | McStasMcXtrace/ifitlab | https://api.github.com/repos/McStasMcXtrace/ifitlab | closed | "Different arrow" for e.g. rmint method | Priority 0 ui | Suggestion: Horizontal line rather than bent arrow (indicating this is a 'method' rather than a 'function') | 1.0 | "Different arrow" for e.g. rmint method - Suggestion: Horizontal line rather than bent arrow (indicating this is a 'method' rather than a 'function') | priority | different arrow for e g rmint method suggestion horizontal line rather than bent arrow indicating this is a method rather than a function | 1 |
93,508 | 19,254,682,526 | IssuesEvent | 2021-12-09 10:01:07 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | [0.15.9.0] Abandoned Outpost - Hostages will not follow players that have diving suits on | Bug Code | - [x] I have searched the issue tracker to check if the issue has already been reported.
**Description**
Was in Multiplayer. Hostages will not follow players that have diving suits on in an abandoned outpost. Likely they believe they need a suit themselves.
**Version**
0.15.9.0 | 1.0 | [0.15.9.0] Abandoned Outpost - Hostages will not follow players that have diving suits on - - [x] I have searched the issue tracker to check if the issue has already been reported.
**Description**
Was in Multiplayer. Hostages will not follow players that have diving suits on in an abandoned outpost. Likely they believe they need a suit themselves.
**Version**
0.15.9.0 | non_priority | abandoned outpost hostages will not follow players that has diving suits on i have searched the issue tracker to check if the issue has already been reported description was in multiplayer hostages will not follow players that has diving suits on in an abandoned outpost likely that they believe they need a suit themselves version | 0 |
768,743 | 26,978,609,184 | IssuesEvent | 2023-02-09 11:23:33 | strusoft/femdesign-api | https://api.github.com/repos/strusoft/femdesign-api | closed | Design parameters | priority:later type:scope | # Design parameters
## Goals
* It should be possible to setup general and element settings for the most common code-checks and design calculations.
## Background
General and element settings for code-check and design calculations were extended to `fdscript` in FEM-Design 21. This feature still needs to be implemented in this project.
## Research
## Requirements
## Questions
## Not doing | 1.0 | Design parameters - # Design parameters
## Goals
* It should be possible to setup general and element settings for the most common code-checks and design calculations.
## Background
General and element settings for code-check and design calculations were extended to `fdscript` in FEM-Design 21. This feature still needs to be implemented in this project.
## Research
## Requirements
## Questions
## Not doing | priority | design parameters design parameters goals it should be possible to setup general and element settings for the most common code checks and design calculations background general and element settings for code check and design calculations were extended to fdscript in fem design this feature still needs to be implemented to this project research requirements questions not doing | 1 |
50,467 | 7,605,224,679 | IssuesEvent | 2018-04-30 07:56:44 | RaRe-Technologies/gensim | https://api.github.com/repos/RaRe-Technologies/gensim | opened | Documentation fixes | documentation | This issue collects PRs related to improving the Gensim documentation.
Merged
--------
https://github.com/RaRe-Technologies/gensim/pull/1633
https://github.com/RaRe-Technologies/gensim/pull/1625
https://github.com/RaRe-Technologies/gensim/pull/1640
https://github.com/RaRe-Technologies/gensim/pull/1702
https://github.com/RaRe-Technologies/gensim/pull/1684
https://github.com/RaRe-Technologies/gensim/pull/1709
https://github.com/RaRe-Technologies/gensim/pull/1739
https://github.com/RaRe-Technologies/gensim/pull/1681
https://github.com/RaRe-Technologies/gensim/pull/1806
https://github.com/RaRe-Technologies/gensim/pull/1802
https://github.com/RaRe-Technologies/gensim/pull/1797
https://github.com/RaRe-Technologies/gensim/pull/1804
https://github.com/RaRe-Technologies/gensim/pull/1803
https://github.com/RaRe-Technologies/gensim/pull/1805
https://github.com/RaRe-Technologies/gensim/pull/1714
https://github.com/RaRe-Technologies/gensim/pull/1814
https://github.com/RaRe-Technologies/gensim/pull/1729
https://github.com/RaRe-Technologies/gensim/pull/1793
https://github.com/RaRe-Technologies/gensim/pull/1801
https://github.com/RaRe-Technologies/gensim/pull/1913
https://github.com/RaRe-Technologies/gensim/pull/1859
https://github.com/RaRe-Technologies/gensim/pull/1835
https://github.com/RaRe-Technologies/gensim/pull/1792
https://github.com/RaRe-Technologies/gensim/pull/1904
https://github.com/RaRe-Technologies/gensim/pull/1910
https://github.com/RaRe-Technologies/gensim/pull/1919
https://github.com/RaRe-Technologies/gensim/pull/1892
https://github.com/RaRe-Technologies/gensim/pull/1880
https://github.com/RaRe-Technologies/gensim/pull/1861
https://github.com/RaRe-Technologies/gensim/pull/1876
WIP
----
https://github.com/RaRe-Technologies/gensim/pull/2026
https://github.com/RaRe-Technologies/gensim/pull/1809
https://github.com/RaRe-Technologies/gensim/pull/1944
| 1.0 | Documentation fixes - This issue collects PRs related to improving the Gensim documentation.
Merged
--------
https://github.com/RaRe-Technologies/gensim/pull/1633
https://github.com/RaRe-Technologies/gensim/pull/1625
https://github.com/RaRe-Technologies/gensim/pull/1640
https://github.com/RaRe-Technologies/gensim/pull/1702
https://github.com/RaRe-Technologies/gensim/pull/1684
https://github.com/RaRe-Technologies/gensim/pull/1709
https://github.com/RaRe-Technologies/gensim/pull/1739
https://github.com/RaRe-Technologies/gensim/pull/1681
https://github.com/RaRe-Technologies/gensim/pull/1806
https://github.com/RaRe-Technologies/gensim/pull/1802
https://github.com/RaRe-Technologies/gensim/pull/1797
https://github.com/RaRe-Technologies/gensim/pull/1804
https://github.com/RaRe-Technologies/gensim/pull/1803
https://github.com/RaRe-Technologies/gensim/pull/1805
https://github.com/RaRe-Technologies/gensim/pull/1714
https://github.com/RaRe-Technologies/gensim/pull/1814
https://github.com/RaRe-Technologies/gensim/pull/1729
https://github.com/RaRe-Technologies/gensim/pull/1793
https://github.com/RaRe-Technologies/gensim/pull/1801
https://github.com/RaRe-Technologies/gensim/pull/1913
https://github.com/RaRe-Technologies/gensim/pull/1859
https://github.com/RaRe-Technologies/gensim/pull/1835
https://github.com/RaRe-Technologies/gensim/pull/1792
https://github.com/RaRe-Technologies/gensim/pull/1904
https://github.com/RaRe-Technologies/gensim/pull/1910
https://github.com/RaRe-Technologies/gensim/pull/1919
https://github.com/RaRe-Technologies/gensim/pull/1892
https://github.com/RaRe-Technologies/gensim/pull/1880
https://github.com/RaRe-Technologies/gensim/pull/1861
https://github.com/RaRe-Technologies/gensim/pull/1876
WIP
----
https://github.com/RaRe-Technologies/gensim/pull/2026
https://github.com/RaRe-Technologies/gensim/pull/1809
https://github.com/RaRe-Technologies/gensim/pull/1944
| non_priority | documentation fixes this issue collects prs related to improving the gensim documentation merged wip | 0 |
828,638 | 31,836,752,913 | IssuesEvent | 2023-09-14 13:57:26 | infor-design/enterprise | https://api.github.com/repos/infor-design/enterprise | closed | Breadcrumb: flex-toolbar icon cut off | type: bug :bug: [1] focus: mobile priority: minor stale | <!-- Please be aware that this is a publicly visible bug report. Do not post any credentials, screenshots with proprietary information, or anything you think shouldn't be visible to the world. If reporting a security issue such as a xss vulnerability. Please use the [security advisories feature](https://github.com/infor-design/enterprise/security/advisories). If private information is required to be shared for a quality bug report, please email one of the [code owners](https://github.com/infor-design/enterprise/blob/main/.github/CODEOWNERS) directly. -->
**Describe the bug**
btn-icon is cut off in mobile when using `new theme`
**To Reproduce**
<!-- Please spend a little time to make an accurate reduced test case for the issue. The more code you include, the less likely it is that the issue can be fixed quickly (or at all). This is a good article about reduced test cases if you're unfamiliar: https://css-tricks.com/reduced-test-cases/. -->
Steps to reproduce the behavior:
1. Go to https://main-enterprise.demo.design.infor.com/components/breadcrumb/test-flex-toolbar.html
2. See error
**Expected behavior**
should not be cut off
**Version**
<!-- You can find this by inspecting the document html tag or sohoxi.js script header -->
- ids-enterprise: v4.68.0-dev
**Screenshots**
<img width="627" alt="image" src="https://user-images.githubusercontent.com/56722184/191231653-1cf0146e-09ce-4569-9b6d-ebb2d120ee43.png">
**Platform**
mobile device android and iOS
**Additional context**
Add any other context about the problem here.
| 1.0 | Breadcrumb: flex-toolbar icon cut off - <!-- Please be aware that this is a publicly visible bug report. Do not post any credentials, screenshots with proprietary information, or anything you think shouldn't be visible to the world. If reporting a security issue such as a xss vulnerability. Please use the [security advisories feature](https://github.com/infor-design/enterprise/security/advisories). If private information is required to be shared for a quality bug report, please email one of the [code owners](https://github.com/infor-design/enterprise/blob/main/.github/CODEOWNERS) directly. -->
**Describe the bug**
btn-icon is cut off in mobile when using `new theme`
**To Reproduce**
<!-- Please spend a little time to make an accurate reduced test case for the issue. The more code you include, the less likely it is that the issue can be fixed quickly (or at all). This is a good article about reduced test cases if you're unfamiliar: https://css-tricks.com/reduced-test-cases/. -->
Steps to reproduce the behavior:
1. Go to https://main-enterprise.demo.design.infor.com/components/breadcrumb/test-flex-toolbar.html
2. See error
**Expected behavior**
should not be cut off
**Version**
<!-- You can find this by inspecting the document html tag or sohoxi.js script header -->
- ids-enterprise: v4.68.0-dev
**Screenshots**
<img width="627" alt="image" src="https://user-images.githubusercontent.com/56722184/191231653-1cf0146e-09ce-4569-9b6d-ebb2d120ee43.png">
**Platform**
mobile device android and iOS
**Additional context**
Add any other context about the problem here.
| priority | breadcrumb flex toolbar icon cut off describe the bug btn icon is cut off in mobile when using new theme to reproduce steps to reproduce the behavior go to see error expected behavior should not be cut off version ids enterprise dev screenshots img width alt image src platform mobile device android and ios additional context add any other context about the problem here | 1 |
332,899 | 29,497,875,243 | IssuesEvent | 2023-06-02 18:39:54 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | opened | DISABLED test_cond_side_effects_dynamic_shapes_static_default (__main__.StaticDefaultDynamicShapesMiscTests) | triaged module: flaky-tests skipped module: dynamo | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cond_side_effects_dynamic_shapes_static_default&suite=StaticDefaultDynamicShapesMiscTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/undefined).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cond_side_effects_dynamic_shapes_static_default`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py` or `dynamo/test_dynamic_shapes.py` | 1.0 | DISABLED test_cond_side_effects_dynamic_shapes_static_default (__main__.StaticDefaultDynamicShapesMiscTests) - Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cond_side_effects_dynamic_shapes_static_default&suite=StaticDefaultDynamicShapesMiscTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/undefined).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cond_side_effects_dynamic_shapes_static_default`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py` or `dynamo/test_dynamic_shapes.py` | non_priority | disabled test cond side effects dynamic shapes static default main staticdefaultdynamicshapesmisctests platforms linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not assume things are okay if the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test cond side effects dynamic shapes static default there should be several instances run as flaky tests are rerun in ci from which you can study the logs test file path dynamo test dynamic shapes py or dynamo test dynamic shapes py | 0 |
180,777 | 30,566,644,415 | IssuesEvent | 2023-07-20 18:18:58 | EasyCorp/EasyAdminBundle | https://api.github.com/repos/EasyCorp/EasyAdminBundle | closed | submenu items are barely visible | bug design | **Describe the bug**
submenu items are barely visible in dark mode

**To Reproduce**
1. Create submenu items
2. set theme to dark mode
| 1.0 | submenu items are barely visible - **Describe the bug**
submenu items are barely visible in dark mode

**To Reproduce**
1. Create submenu items
2. set theme to dark mode
| non_priority | submenu items are barely visible describe the bug submenu items are barely visible in dark mode to reproduce create submenu items set theme to dark mode | 0 |
488,899 | 14,099,094,136 | IssuesEvent | 2020-11-06 00:29:23 | drashland/website | https://api.github.com/repos/drashland/website | closed | Write required documentation for deno-drash issue #427 (after_resource middleware hook) | Priority: Medium Remark: Deploy To Production Type: Chore | ## Summary
The following issue requires documentation before it can be closed:
https://github.com/drashland/deno-drash/issues/427
The following pull request is associated with the above issue:
https://github.com/drashland/deno-drash/issues/428 | 1.0 | Write required documentation for deno-drash issue #427 (after_resource middleware hook) - ## Summary
The following issue requires documentation before it can be closed:
https://github.com/drashland/deno-drash/issues/427
The following pull request is associated with the above issue:
https://github.com/drashland/deno-drash/issues/428 | priority | write required documentation for deno drash issue after resource middleware hook summary the following issue requires documentation before it can be closed the following pull request is associated with the above issue | 1 |
391,526 | 26,896,663,212 | IssuesEvent | 2023-02-06 12:57:22 | cloudflare/cloudflare-docs | https://api.github.com/repos/cloudflare/cloudflare-docs | opened | Add guidance when creating Domain lists via API | documentation content:edit | ### Which Cloudflare product does this pertain to?
Zero Trust
### Existing documentation URL(s)
https://developers.cloudflare.com/cloudflare-one/policies/filtering/lists/
### Section that requires update
[Create a list from a CSV file](https://developers.cloudflare.com/cloudflare-one/policies/filtering/lists/#create-a-list-from-a-csv-file)
### What needs to change?
The conditions don't detail what happens when using duplicate hostnames for a "Domain" list.
The API would simply return 409 with the message "A resource with this identifier already exists." and code 1204.
This is potentially misleading, as it's unclear whether the conflict is with the List name or a List item.
### How should it change?
Mention that all hostnames in a list are:
- converted from IDN to Punycode
- checked against existing duplicate hostname in Punycode format.
### Additional information
For example, passing a list with these two entries will fail with a 1204 error because they're duplicates (though that's not obvious until you convert to Punycode):
www.españa.com
www.xn--espaa-rta.com | 1.0 | Add guidance when creating Domain lists via API - ### Which Cloudflare product does this pertain to?
Zero Trust
### Existing documentation URL(s)
https://developers.cloudflare.com/cloudflare-one/policies/filtering/lists/
### Section that requires update
[Create a list from a CSV file](https://developers.cloudflare.com/cloudflare-one/policies/filtering/lists/#create-a-list-from-a-csv-file)
### What needs to change?
The conditions don't detail what happens when using duplicate hostnames for a "Domain" list.
The API would simply return 409 with the message "A resource with this identifier already exists." and code 1204.
This is potentially misleading, as it's unclear whether the conflict is with the List name or a List item.
### How should it change?
Mention that all hostnames in a list are:
- converted from IDN to Punycode
- checked against existing duplicate hostname in Punycode format.
### Additional information
For example, passing a list with these two entries will fail with a 1204 error because they're duplicates (though that's not obvious until you convert to Punycode):
www.españa.com
www.xn--espaa-rta.com | non_priority | add guidance when creating domain lists via api which cloudflare product does this pertain to zero trust existing documentation url s section that requires update create a list from a csv file what needs to change the conditions doesn t detail what happens when using duplicate hostnames for a domain list the api would simply return with the message a resource with this identifier already exists and code this is potentially misleading if it s the list name or a list item how should it change mention that all hostnames in a list are converted from idn to punycode checked against existing duplicate hostname in punycode format additional information for e g passing a list with these two entries will fail with error because they re duplicate though not obvious until you convert to punycode | 0 |
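The duplicate pair quoted in the Cloudflare issue above can be reproduced with Python's built-in IDNA codec — a minimal sketch of the described normalization (Python's codec implements IDNA 2003; Cloudflare's exact conversion rules are an assumption here):

```python
def to_punycode(hostname: str) -> str:
    """IDNA-encode each DNS label; already-ASCII labels pass through unchanged."""
    return hostname.encode("idna").decode("ascii")

# The two entries from the issue collapse to the same Punycode hostname,
# which is why the API reports them as duplicates (error code 1204).
entries = ["www.españa.com", "www.xn--espaa-rta.com"]
unique = {to_punycode(h) for h in entries}
print(unique)  # {'www.xn--espaa-rta.com'}
```

Deduplicating on the encoded form before upload, as sketched here, avoids the misleading 409/1204 response.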
567,318 | 16,854,921,736 | IssuesEvent | 2021-06-21 04:35:38 | ballerina-platform/ballerina-standard-library | https://api.github.com/repos/ballerina-platform/ballerina-standard-library | closed | GraphQL Responses with Errors Should Return BAD_REQUEST Status Code | Priority/High Team/PCP Type/Improvement module/graphql | **Description:**
When a GraphQL request fails in the validation phase, currently the response status code is set to `200`. But it should be `400`. | 1.0 | GraphQL Responses with Errors Should Return BAD_REQUEST Status Code - **Description:**
When a GraphQL request fails in the validation phase, currently the response status code is set to `200`. But it should be `400`. | priority | graphql responses with errors should return bad request status code when a graphql request is failed in the validation phase currently the response status code is set to but it should be | 1 |
40,871 | 8,870,550,742 | IssuesEvent | 2019-01-11 09:51:03 | Jigar3/Wall-Street | https://api.github.com/repos/Jigar3/Wall-Street | opened | Set up this project on your local machine | OpenCode'19 Rookie(10 Points) | Share a screenshot/GIF here. 10 points each for the frontend as well as backend setup | 1.0 | Set up this project on your local machine - Share a screenshot/GIF here. 10 points each for the frontend as well as backend setup | non_priority | set up this project on your local machine share a screenshot gif here points each for the frontend as well as backend setup | 0 |
135,388 | 5,247,424,657 | IssuesEvent | 2017-02-01 12:59:41 | moodlepeers/moodle-mod_groupformation | https://api.github.com/repos/moodlepeers/moodle-mod_groupformation | opened | layout issues with clean theme or beuth03 theme | bug FE (frontend) Priority medium | 1. Inside the Group Formation page, the Move block and Actions in every block will disappear, the icons will not appear appropriately!
2. On the beuth03 Theme (the official theme for Beuth Hochschule) the navigation header will not appear appropriately inside the page of the groupformation, the header will navigate to the right.
3. Width of the input type="text": At the beginning of creating a new group formation, the group formation name text (under General) is very narrow, the letters will not appear in the appropriate size! | 1.0 | layout issues with clean theme or beuth03 theme - 1. Inside the Group Formation page, the Move block and Actions in every block will disappear, the icons will not appear appropriately!
2. On the beuth03 Theme (the official theme for Beuth Hochschule) the navigation header will not appear appropriately inside the page of the groupformation, the header will navigate to the right.
3. Width of the input type="text": At the beginning of creating a new group formation, the group formation name text (under General) is very narrow, the letters will not appear in the appropriate size! | priority | layout issues with clean theme or theme inside the group formation page the move block and actions in every block will disappear the icons will not appear appropriatly on the theme the offical theme for beuth hochschule the navigation header will not appear appropriatly inside the page of the groupformation the header will navigate to the right width of the input type text at the beginning of creating a new group formation the group formation name text under general is very narrow the letters will not appear in the appropriate size | 1 |
170,181 | 13,177,100,557 | IssuesEvent | 2020-08-12 06:39:15 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | About normal flush action | >test-failure | In SyncedFlushService.onShardInactive() method,
It notes,
"// A normal flush has the same effect as a synced flush if all nodes are on 7.6 or later."
I use 7.8.1, so it will always execute the NormalFlush (not the SyncedFlush). We know the SyncedFlush will write the sync id on all shards,
but I find the NormalFlush doesn't do it!
Let's see the flush(boolean, boolean) method of InternalEngine,
...
commitIndexWriter(indexWriter, translog, null);
...
the 3rd parameter is "null",
...
if (syncId != null) {
commitData.put(Engine.SYNC_COMMIT_ID, syncId);
}
...
it will never add(or update) the sync id,is it okay? | 1.0 | About normal flush action - In SyncedFlushService.onShardInactive() method,
It notes,
"// A normal flush has the same effect as a synced flush if all nodes are on 7.6 or later."
I use 7.8.1, so it will always execute the NormalFlush (not the SyncedFlush). We know the SyncedFlush will write the sync id on all shards,
but I find the NormalFlush doesn't do it!
Let's see the flush(boolean, boolean) method of InternalEngine,
...
commitIndexWriter(indexWriter, translog, null);
...
the 3rd parameter is "null",
...
if (syncId != null) {
commitData.put(Engine.SYNC_COMMIT_ID, syncId);
}
...
it will never add(or update) the sync id,is it okay? | non_priority | about normal flush action in syncedflushservice onshardinactive method it notes a normal flush has the same effect as a synced flush if all nodes are on or later i use so it always will execute the normalflush not the syncedflush we know the syncedflush will write the sync id on all shards but i find the normalflush don t do it let s see the flush boolean boolean method of internalengine commitindexwriter indexwriter translog null the parameter is null if syncid null commitdata put engine sync commit id syncid it will never add or update the sync id is it okay | 0 |
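The behaviour this Elasticsearch issue describes can be paraphrased in a tiny sketch (a Python rendering of the quoted Java, not the actual Elasticsearch code — `SYNC_COMMIT_ID` and the null third argument are taken from the snippets above):

```python
SYNC_COMMIT_ID = "sync_id"

def commit_index_writer(sync_id):
    """Mimics the quoted commitIndexWriter logic: the sync id is only
    written into the commit user data when a non-null id is passed."""
    commit_data = {}
    if sync_id is not None:
        commit_data[SYNC_COMMIT_ID] = sync_id
    return commit_data

# A normal flush calls commitIndexWriter(..., null), so no sync id is
# committed — which is exactly what the issue observes.
print(commit_index_writer(None))       # {}
print(commit_index_writer("abc123"))   # {'sync_id': 'abc123'}
```

Per the code comment quoted at the top of the row, this appears to be intended on 7.6+: a normal flush is treated as equivalent to a synced flush without ever writing the sync id.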
383,609 | 26,558,146,814 | IssuesEvent | 2023-01-20 13:48:30 | PennLINC/xcp_d | https://api.github.com/repos/PennLINC/xcp_d | closed | Offload advanced xcp-d usage examples to separate repository? | documentation question | ## Summary
There are a couple of xcp-d use-cases that require some preparation that we may want to document, including incorporating tedana components (#324) or non-RETROICOR physio regressors (#455) as custom confounds. We can easily write documentation or example code for these scenarios, but I think we would probably want to actually run the examples online so that we know the code works. I was thinking of writing these examples in a separate repository that we could link to from the main documentation.
| 1.0 | Offload advanced xcp-d usage examples to separate repository? - ## Summary
There are a couple of xcp-d use-cases that require some preparation that we may want to document, including incorporating tedana components (#324) or non-RETROICOR physio regressors (#455) as custom confounds. We can easily write documentation or example code for these scenarios, but I think we would probably want to actually run the examples online so that we know the code works. I was thinking of writing these examples in a separate repository that we could link to from the main documentation.
| non_priority | offload advanced xcp d usage examples to separate repository summary there are a couple of xcp d use cases that require some preparation that we may want to document including incorporating tedana components or non retroicor physio regressors as custom confounds we can easily write documentation or example code for these scenarios but i think we would probably want to actually run the examples online so that we know the code works i was thinking of writing these examples in a separate repository that we could link to from the main documentation | 0 |
44,520 | 5,632,193,469 | IssuesEvent | 2017-04-05 16:02:34 | OAButton/discussion | https://api.github.com/repos/OAButton/discussion | closed | Testing citation search | Blocked: Development Blocked: Test enhancement JISC | @svmelton hopefully it should be relatively clear how to go about this (and builds on your work on other similar issues).
Assuming title search testing is happening #111
@markmacgillivray I don't think this is 100% required to ship in a week, if it's buggy we may be better placed to release it with care later. | 1.0 | Testing citation search - @svmelton hopefully it should be relatively clear how to go about this (and builds on your work on other similar issues).
Assuming title search testing is happening #111
@markmacgillivray I don't think this is 100% required to ship in a week, if it's buggy we may be better placed to release it with care later. | non_priority | testing citation search svmelton hopefully it should be relatively clear how to go about this and builds on your work on other similar issues assuming title search testing is happening markmacgillivray i don t think this is required to ship in a week if it s buggy we may be better placed to release it with care later | 0 |
783,444 | 27,531,040,303 | IssuesEvent | 2023-03-06 22:08:37 | dotCMS/core | https://api.github.com/repos/dotCMS/core | reopened | Allow greater configuration of S3Client to target additional endpoints beyond AWS | Type : Enhancement QA : Approved Doc : Needs Doc Merged QA : Passed Internal LTS: Excluded Team : Falcon Next LTS Release Release : 23.02 OKR : Customer Success Priority : 2 High OKR : Customer Support | **Is your feature request related to a problem? Please describe.**
There are multiple different S3 Object stores available beyond AWS. These all still utilize the S3 SDK from AWS, so we should be able to allow configuration of our S3Client to target endpoints that are not in AWS.
Related Ticket: https://dotcms.zendesk.com/agent/tickets/107260
**Describe the solution you'd like**
Allow configuration of the endpoint similar to how we allow configuration of bucket names, region etc.
We should be able to change the targeted endpoint here, in AWSS3Storage:
```
private AWSS3Storage(DotAmazonS3Client s3client) {
this.s3client = s3client;
this.s3client.setEndpoint("xxx");
}
```
While I think we should still only officially support AWS S3 buckets, allowing endpoint configuration would give a lot more options for pushing to various S3 Object stores in dotCMS.
| 1.0 | Allow greater configuration of S3Client to target additional endpoints beyond AWS - **Is your feature request related to a problem? Please describe.**
There are multiple different S3 Object stores available beyond AWS. These all still utilize the S3 SDK from AWS, so we should be able to allow configuration of our S3Client to target endpoints that are not in AWS.
Related Ticket: https://dotcms.zendesk.com/agent/tickets/107260
**Describe the solution you'd like**
Allow configuration of the endpoint similar to how we allow configuration of bucket names, region etc.
We should be able to change the targeted endpoint here, in AWSS3Storage:
```
private AWSS3Storage(DotAmazonS3Client s3client) {
this.s3client = s3client;
this.s3client.setEndpoint("xxx");
```
While I think we should still only officially support AWS S3 buckets, allowing endpoint configuration would give a lot more options for pushing to various S3 Object stores in dotCMS.
| priority | allow greater configuration of to target additional endpoints beyond aws is your feature request related to a problem please describe there are multiple different object stores available beyond aws these all still utilize the sdk from aws so we should be able to allow configuration of our to target endpoints that are not in aws related ticket describe the solution you d like allow configuration of the endpoint similar to how we allow configuration of bucket names region etc we should be able to change the targeted endpoint here in private this this setendpoint xxx while i think we should still only officially support aws buckets allowing endpoint configuration would give a lot more options for pushing to various object stores in dotcms | 1 |
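The dotCMS record above asks for an endpoint override alongside the existing bucket/region settings, defaulting to AWS when unset. A minimal Python sketch of that configuration pattern — `S3Config`, its field names, and the default endpoint string are all hypothetical illustrations, not dotCMS's or the AWS SDK's actual API:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical config object mirroring the request in the issue: keep the
# bucket/region knobs, add an optional endpoint override that falls back
# to the stock AWS endpoint when left unset.
@dataclass
class S3Config:
    bucket: str
    region: str = "us-east-1"
    endpoint: Optional[str] = None  # None -> use the default AWS endpoint

    def resolved_endpoint(self) -> str:
        if self.endpoint:  # explicit override, e.g. MinIO or another S3-compatible store
            return self.endpoint
        return "https://s3.{}.amazonaws.com".format(self.region)

aws = S3Config(bucket="assets")
minio = S3Config(bucket="assets", endpoint="http://localhost:9000")
assert aws.resolved_endpoint() == "https://s3.us-east-1.amazonaws.com"
assert minio.resolved_endpoint() == "http://localhost:9000"
```

Because every S3-compatible store speaks the same wire protocol, a single optional endpoint field like this is usually all that separates "AWS only" from "any S3 object store".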
566,805 | 16,831,084,074 | IssuesEvent | 2021-06-18 05:01:12 | DeFiCh/jellyfish | https://api.github.com/repos/DeFiCh/jellyfish | closed | flaky test in `accountHistoryCount` | area/jellyfish-api-core kind/bug priority/urgent-now triage/accepted | <!--
Please use this template while reporting a bug and provide as much info as possible.
If the matter is security related, please disclose it privately via security@defichain.com
-->
#### What happened:
https://github.com/DeFiCh/jellyfish/pull/337/checks?check_run_id=2732807319
```txt
Summary of all failing tests
FAIL packages/jellyfish-api-core/__tests__/category/account/accountHistoryCount.test.ts (41.84 s)
● Account › should get accountHistoryCount with token option
expect(received).toStrictEqual(expected) // deep equality
Expected: 5
Received: 7
98 | expect(typeof countWithDBTC).toBe('number')
99 | expect(typeof countWithDETH).toBe('number')
> 100 | expect(countWithDBTC).toStrictEqual(5)
| ^
101 | expect(countWithDETH).toStrictEqual(3)
102 | })
103 | })
at __tests__/category/account/accountHistoryCount.test.ts:100:29
at fulfilled (__tests__/category/account/accountHistoryCount.test.ts:5:58)
at runMicrotasks (<anonymous>)
```
#### What you expected to happen:
Not flaky.
#### How to reproduce it (as minimally and precisely as possible):
See test.
| 1.0 | flaky test in `accountHistoryCount` - <!--
Please use this template while reporting a bug and provide as much info as possible.
If the matter is security related, please disclose it privately via security@defichain.com
-->
#### What happened:
https://github.com/DeFiCh/jellyfish/pull/337/checks?check_run_id=2732807319
```txt
Summary of all failing tests
FAIL packages/jellyfish-api-core/__tests__/category/account/accountHistoryCount.test.ts (41.84 s)
● Account › should get accountHistoryCount with token option
expect(received).toStrictEqual(expected) // deep equality
Expected: 5
Received: 7
98 | expect(typeof countWithDBTC).toBe('number')
99 | expect(typeof countWithDETH).toBe('number')
> 100 | expect(countWithDBTC).toStrictEqual(5)
| ^
101 | expect(countWithDETH).toStrictEqual(3)
102 | })
103 | })
at __tests__/category/account/accountHistoryCount.test.ts:100:29
at fulfilled (__tests__/category/account/accountHistoryCount.test.ts:5:58)
at runMicrotasks (<anonymous>)
```
#### What you expected to happen:
Not flaky.
#### How to reproduce it (as minimally and precisely as possible):
See test.
| priority | flaky test in accounthistorycount please use this template while reporting a bug and provide as much info as possible if the matter is security related please disclose it privately via security defichain com what happened txt summary of all failing tests fail packages jellyfish api core tests category account accounthistorycount test ts s ● account › should get accounthistorycount with token option expect received tostrictequal expected deep equality expected received expect typeof countwithdbtc tobe number expect typeof countwithdeth tobe number expect countwithdbtc tostrictequal expect countwithdeth tostrictequal at tests category account accounthistorycount test ts at fulfilled tests category account accounthistorycount test ts at runmicrotasks what you expected to happen not flaky how to reproduce it as minimally and precisely as possible see test | 1 |
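Flaky count assertions like the one in the jellyfish record above (expected 5, received 7) typically race an eventually-consistent backend. A generic poll-until-stable helper is one common remedy — this is an illustrative sketch, not the jellyfish test code:

```python
import time

def poll_until(predicate, timeout=5.0, interval=0.05):
    """Re-evaluate `predicate` until it returns True or `timeout` elapses.

    Returns True on success, False on timeout -- letting the caller make one
    final, well-reported assertion instead of racing the system under test.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()

# Simulated eventually-consistent counter: reports 7 on the first two
# reads, then settles at 5 -- the shape of failure shown in the record.
state = {"calls": 0}
def history_count():
    state["calls"] += 1
    return 5 if state["calls"] >= 3 else 7

assert poll_until(lambda: history_count() == 5)
```

The trade-off is a slower worst case on genuine failures (the full timeout elapses), in exchange for removing the timing dependence that makes the green path flaky.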
235,575 | 25,955,213,571 | IssuesEvent | 2022-12-18 05:34:02 | Dima2022/JS-Demo | https://api.github.com/repos/Dima2022/JS-Demo | closed | CVE-2017-15010 (High) detected in tough-cookie-2.3.1.tgz - autoclosed | security vulnerability | ## CVE-2017-15010 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tough-cookie-2.3.1.tgz</b></p></summary>
<p>RFC6265 Cookies and Cookie Jar for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.3.1.tgz">https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.3.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/npm/node_modules/request/node_modules/tough-cookie/package.json</p>
<p>
Dependency Hierarchy:
- grunt-npm-install-0.3.1.tgz (Root Library)
- npm-3.10.10.tgz
- request-2.75.0.tgz
- :x: **tough-cookie-2.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/JS-Demo/commit/4edf46ace164b01728ef7066c6a8e7464b89143a">4edf46ace164b01728ef7066c6a8e7464b89143a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A ReDoS (regular expression denial of service) flaw was found in the tough-cookie module before 2.3.3 for Node.js. An attacker that is able to make an HTTP request using a specially crafted cookie may cause the application to consume an excessive amount of CPU.
<p>Publish Date: 2017-10-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-15010>CVE-2017-15010</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-15010">https://nvd.nist.gov/vuln/detail/CVE-2017-15010</a></p>
<p>Release Date: 2017-10-04</p>
<p>Fix Resolution: 2.3.3</p>
</p>
</details>
<p></p>
| True | CVE-2017-15010 (High) detected in tough-cookie-2.3.1.tgz - autoclosed - ## CVE-2017-15010 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tough-cookie-2.3.1.tgz</b></p></summary>
<p>RFC6265 Cookies and Cookie Jar for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.3.1.tgz">https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.3.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/npm/node_modules/request/node_modules/tough-cookie/package.json</p>
<p>
Dependency Hierarchy:
- grunt-npm-install-0.3.1.tgz (Root Library)
- npm-3.10.10.tgz
- request-2.75.0.tgz
- :x: **tough-cookie-2.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/JS-Demo/commit/4edf46ace164b01728ef7066c6a8e7464b89143a">4edf46ace164b01728ef7066c6a8e7464b89143a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A ReDoS (regular expression denial of service) flaw was found in the tough-cookie module before 2.3.3 for Node.js. An attacker that is able to make an HTTP request using a specially crafted cookie may cause the application to consume an excessive amount of CPU.
<p>Publish Date: 2017-10-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-15010>CVE-2017-15010</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-15010">https://nvd.nist.gov/vuln/detail/CVE-2017-15010</a></p>
<p>Release Date: 2017-10-04</p>
<p>Fix Resolution: 2.3.3</p>
</p>
</details>
<p></p>
| non_priority | cve high detected in tough cookie tgz autoclosed cve high severity vulnerability vulnerable library tough cookie tgz cookies and cookie jar for node js library home page a href path to dependency file package json path to vulnerable library node modules npm node modules request node modules tough cookie package json dependency hierarchy grunt npm install tgz root library npm tgz request tgz x tough cookie tgz vulnerable library found in head commit a href found in base branch master vulnerability details a redos regular expression denial of service flaw was found in the tough cookie module before for node js an attacker that is able to make an http request using a specially crafted cookie may cause the application to consume an excessive amount of cpu publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
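The CVE record above is a ReDoS in cookie parsing; the core hazard is a nested quantifier whose backtracking grows super-linearly on near-miss inputs. An illustrative Python comparison — the pattern here is a generic textbook example, not tough-cookie's actual regex:

```python
import re

# Nested quantifier: (a+)+ backtracks exponentially on a near-miss input,
# exploring roughly 2**n ways to split a run of n 'a's between the groups.
vulnerable = re.compile(r"^(a+)+$")
# Same language without the nesting: backtracking stays linear.
safe = re.compile(r"^a+$")

ok = "a" * 10
bad = "a" * 10 + "!"  # near-miss kept deliberately short (~2**10 paths);
                      # a crafted long input is what turns this into a DoS

# Both patterns accept the same strings...
assert bool(vulnerable.match(ok)) and bool(safe.match(ok))
# ...and both reject the near-miss -- but the nested one does far more work.
assert vulnerable.match(bad) is None
assert safe.match(bad) is None
```

The fix shipped in tough-cookie 2.3.3 follows the same principle: reshape the expression so no input forces the engine to revisit the same characters under multiple groupings.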
18,655 | 11,031,774,470 | IssuesEvent | 2019-12-06 18:33:37 | bee-travels/bee-travels | https://api.github.com/repos/bee-travels/bee-travels | opened | Checkout/Cart/Payment Services DevOps Story | Cart Service Checkout Service Payment Service v3 | Create a DevOps Story around the Checkout/Cart/Payment Services | 3.0 | Checkout/Cart/Payment Services DevOps Story - Create a DevOps Story around the Checkout/Cart/Payment Services | non_priority | checkout cart payment services devops story create a devops story around the checkout cart payment services | 0 |
298,784 | 22,572,315,010 | IssuesEvent | 2022-06-28 02:14:37 | dipeshrai123/react-ui-animate-docs | https://api.github.com/repos/dipeshrai123/react-ui-animate-docs | closed | Some changes in `installation` in getting started page | documentation | Issue:

Expected Output:
- Change `you` to `your`, there is a typo in you.
- and change react-ui-animate@next to react-ui-animate in bash command only for v2.0.0 | 1.0 | Some changes in `installation` in getting started page - Issue:

Expected Output:
- Change `you` to `your`, there is a typo in you.
- and change react-ui-animate@next to react-ui-animate in bash command only for v2.0.0 | non_priority | some changes in installation in getting started page issue expected output change you to your there is a typo in you and change react ui animate next to react ui animate in bash command only for | 0 |
311,178 | 9,529,857,280 | IssuesEvent | 2019-04-29 12:31:18 | JuSpa/Trumpf | https://api.github.com/repos/JuSpa/Trumpf | opened | Feedback LM | Priority: High Status: ToDo Type: Task | ## Beschreibung:
Nach den CS steht noch ein Feedback zum Ländermanagement im Allgemeinen aus.
## Tasks:
- [ ] Vor welchen Herausforderungen steht das Ländermanagement?
## Ausarbeitung:

| 1.0 | Feedback LM - ## Beschreibung:
Nach den CS steht noch ein Feedback zum Ländermanagement im Allgemeinen aus.
## Tasks:
- [ ] Vor welchen Herausforderungen steht das Ländermanagement?
## Ausarbeitung:

| priority | feedback lm beschreibung nach den cs steht noch ein feedback zum ländermanagement im allgemeinen aus tasks vor welchen herausforderungen steht das ländermanagement ausarbeitung | 1 |
341,445 | 30,586,764,892 | IssuesEvent | 2023-07-21 13:57:05 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | closed | Fix paddle_tensor.test_paddle_instance_ceil | Sub Task Failing Test Paddle Frontend | | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5601666711"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5601705840"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5598583637"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5601252436"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5600275980"><img src=https://img.shields.io/badge/-success-success></a>
| 1.0 | Fix paddle_tensor.test_paddle_instance_ceil - | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5601666711"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5601705840"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5598583637"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5601252436"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5600275980"><img src=https://img.shields.io/badge/-success-success></a>
| non_priority | fix paddle tensor test paddle instance ceil numpy a href src jax a href src tensorflow a href src torch a href src paddle a href src | 0 |
785,672 | 27,622,209,292 | IssuesEvent | 2023-03-10 01:45:30 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] yb_enable_expression_pushdown for GIN index scan can yield incorrect results | kind/bug area/ysql priority/medium | Jira Link: [DB-5677](https://yugabyte.atlassian.net/browse/DB-5677)
### Description
yb_enable_expression_pushdown does not seem to work correctly. Refer to the slack thread - https://yugabyte.slack.com/archives/CAR5BCH29/p1677518458483219
The test case to reproduce the error is below:
```
drop table demo;
CREATE TABLE demo(
demo_id varchar(255) not null,
guid varchar(255) not null unique,
status varchar(255),
json_content jsonb not null,
primary key (demo_id)
);
CREATE INDEX ref_idx ON demo USING ybgin (json_content jsonb_path_ops) ;
insert into demo select x::text, x::text, x::text, ('{"externalReferences": [{"val":"'||x||'"}]}')::jsonb
from generate_series (1, 10) x;
-- wrong answer
set yb_enable_expression_pushdown=on;
select * from demo where json_content @> '{"externalReferences": [{"val":"9"}]}' and demo_id <> '9';
demo_id | guid | status | json_content
---------+------+--------+----------------------------------------
9 | 9 | 9 | {"externalReferences": [{"val": "9"}]}
-- correct answer
set yb_enable_expression_pushdown=off;
select * from demo where json_content @> '{"externalReferences": [{"val":"9"}]}' and demo_id <> '9';
yugabyte=# demo_id | guid | status | json_content
---------+------+--------+--------------
(0 rows)
```
[DB-5677]: https://yugabyte.atlassian.net/browse/DB-5677?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [YSQL] yb_enable_expression_pushdown for GIN index scan can yield incorrect results - Jira Link: [DB-5677](https://yugabyte.atlassian.net/browse/DB-5677)
### Description
yb_enable_expression_pushdown does not seem to work correctly. Refer to the slack thread - https://yugabyte.slack.com/archives/CAR5BCH29/p1677518458483219
The test case to reproduce the error is below:
```
drop table demo;
CREATE TABLE demo(
demo_id varchar(255) not null,
guid varchar(255) not null unique,
status varchar(255),
json_content jsonb not null,
primary key (demo_id)
);
CREATE INDEX ref_idx ON demo USING ybgin (json_content jsonb_path_ops) ;
insert into demo select x::text, x::text, x::text, ('{"externalReferences": [{"val":"'||x||'"}]}')::jsonb
from generate_series (1, 10) x;
-- wrong answer
set yb_enable_expression_pushdown=on;
select * from demo where json_content @> '{"externalReferences": [{"val":"9"}]}' and demo_id <> '9';
demo_id | guid | status | json_content
---------+------+--------+----------------------------------------
9 | 9 | 9 | {"externalReferences": [{"val": "9"}]}
-- correct answer
set yb_enable_expression_pushdown=off;
select * from demo where json_content @> '{"externalReferences": [{"val":"9"}]}' and demo_id <> '9';
yugabyte=# demo_id | guid | status | json_content
---------+------+--------+--------------
(0 rows)
```
[DB-5677]: https://yugabyte.atlassian.net/browse/DB-5677?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | yb enable expression pushdown for gin index scan can yield incorrect results jira link description yb enable expression pushdown does not seem to work correctly refer to the slack thread the test case to reproduce the error is below drop table demo create table demo demo id varchar not null guid varchar not null unique status varchar json content jsonb not null primary key demo id create index ref idx on demo using ybgin json content jsonb path ops insert into demo select x text x text x text externalreferences jsonb from generate series x wrong answer set yb enable expression pushdown on select from demo where json content externalreferences and demo id demo id guid status json content externalreferences correct answer set yb enable expression pushdown off select from demo where json content externalreferences and demo id yugabyte demo id guid status json content rows | 1 |
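The wrong answer in the YugabyteDB record above behaves as if the pushed-down plan dropped the `demo_id <> '9'` conjunct; under correct evaluation both predicates must hold. A small Python model of jsonb `@>` containment combined with the second predicate — a semantic sketch of the expected result, not YugabyteDB's or PostgreSQL's implementation:

```python
def jsonb_contains(doc, pattern):
    """Approximate PostgreSQL's jsonb @> containment operator.

    Objects: every key/value pair in `pattern` must be contained in `doc`.
    Arrays:  every element of `pattern` must be contained in SOME element.
    Scalars: must compare equal.
    """
    if isinstance(pattern, dict):
        return (isinstance(doc, dict)
                and all(k in doc and jsonb_contains(doc[k], v)
                        for k, v in pattern.items()))
    if isinstance(pattern, list):
        return (isinstance(doc, list)
                and all(any(jsonb_contains(d, p) for d in doc)
                        for p in pattern))
    return doc == pattern

# The ten rows inserted by the reproduction script in the issue.
rows = [{"demo_id": str(x),
         "json_content": {"externalReferences": [{"val": str(x)}]}}
        for x in range(1, 11)]

pat = {"externalReferences": [{"val": "9"}]}
# Correct plan: BOTH predicates apply, so the containment hit on row 9 is
# filtered out by demo_id <> '9' -- matching the pushdown=off result.
hits = [r for r in rows
        if jsonb_contains(r["json_content"], pat) and r["demo_id"] != "9"]
assert hits == []
```

The buggy pushdown=on plan corresponds to evaluating only the containment term, which would leave row 9 in the result — exactly the one-row answer shown in the report.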
221,255 | 7,375,115,500 | IssuesEvent | 2018-03-13 22:47:44 | Polymer/lit-html | https://api.github.com/repos/Polymer/lit-html | closed | Syntax Highlighting for vim | Priority: Low Status: Available Type: Question | Hello i found syntax highlighters for visual studio code and atom but non for vim does anyone know of any projects? | 1.0 | Syntax Highlighting for vim - Hello i found syntax highlighters for visual studio code and atom but non for vim does anyone know of any projects? | priority | syntax highlighting for vim hello i found syntax highlighters for visual studio code and atom but non for vim does anyone know of any projects | 1 |
264,777 | 8,319,379,672 | IssuesEvent | 2018-09-25 17:03:38 | Coow/cows-hacknslash | https://api.github.com/repos/Coow/cows-hacknslash | opened | SteamID to put into DiscordCtrl | Low Priority | Assigned to: Unassigned
Requires the game to first get into Steam | 1.0 | SteamID to put into DiscordCtrl - Assigned to: Unassigned
Requires the game to first get into Steam | priority | steamid to put into discordctrl assigned to unassigned requires the game to first get into steam | 1 |
691,429 | 23,696,723,171 | IssuesEvent | 2022-08-29 15:13:01 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | app.clipchamp.com - site is not usable | browser-firefox priority-normal severity-critical type-unsupported engine-gecko | <!-- @browser: Firefox 104.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:104.0) Gecko/20100101 Firefox/104.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/109811 -->
**URL**: https://app.clipchamp.com/signup
**Browser / Version**: Firefox 104.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
App wont load due to browser compatibility check
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | app.clipchamp.com - site is not usable - <!-- @browser: Firefox 104.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:104.0) Gecko/20100101 Firefox/104.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/109811 -->
**URL**: https://app.clipchamp.com/signup
**Browser / Version**: Firefox 104.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
App wont load due to browser compatibility check
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | app clipchamp com site is not usable url browser version firefox operating system windows tested another browser yes chrome problem type site is not usable description browser unsupported steps to reproduce app wont load due to browser compatibility check browser configuration none from with ❤️ | 1 |
140,168 | 12,888,549,375 | IssuesEvent | 2020-07-13 13:12:53 | WizBhoo/OCR_P08_ToDoList | https://api.github.com/repos/WizBhoo/OCR_P08_ToDoList | opened | Lint : phpcs rules definition | documentation | Estimate time duration : 0,10 day.
Purpose : to define more precisely the rules to follow by contributors regarding quality code lint through phpcs.
Bonus : to implement phpcs in the Codacy report with symfony's rules.
Rules to follow : PSR-1 / PSR-2 / PSR-4 - Symfony | 1.0 | Lint : phpcs rules definition - Estimate time duration : 0,10 day.
Purpose : to define more precisely the rules to follow by contributors regarding quality code lint through phpcs.
Bonus : to implement phpcs in the Codacy report with symfony's rules.
Rules to follow : PSR-1 / PSR-2 / PSR-4 - Symfony | non_priority | lint phpcs rules definition estimate time duration day purpose to define more precisely the rules to follow by contributors regarding quality code lint through phpcs bonus to implement phpcs in the codacy report with symfony s rules rules to follow psr psr psr symfony | 0 |
235,533 | 19,377,313,969 | IssuesEvent | 2021-12-17 00:22:29 | kubernetes/test-infra | https://api.github.com/repos/kubernetes/test-infra | closed | crier: invalid memory address or nil pointer dereference on posting a GitHub comment in summary mode | kind/bug sig/testing area/prow/crier | **What happened**:
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x68 pc=0x1c6025b]
goroutine 1500 [running]:
k8s.io/test-infra/prow/github/report.createEntry(0x1cc0523, 0x7, 0xc0455af152, 0xe, 0xc03217ff20, 0x24, 0x0, 0x0, 0xc05858763a, 0x2, ...)
prow/github/report/report.go:299 +0x1db
k8s.io/test-infra/prow/github/report.parseIssueComments(0xc097480000, 0x2e, 0x4a, 0xc06afa4e60, 0xc00ee59500, 0x4, 0x4, 0xd5, 0xc00ee59500, 0x4, ...)
prow/github/report/report.go:274 +0x85b
k8s.io/test-infra/prow/github/report.ReportComment(0x2594d58, 0xc0137c6600, 0x7f142bf44cb0, 0xc0260c83c0, 0xc000f599c0, 0xc097480000, 0x2e, 0x4a, 0xc000172040, 0x2, ...)
prow/github/report/report.go:187 +0x445
k8s.io/test-infra/prow/crier/reporters/github.(*Client).Report(0xc025bb7e80, 0x2594d58, 0xc0137c6600, 0xc024e19960, 0xc02ff2cb00, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
prow/crier/reporters/github/reporter.go:207 +0x6b9
k8s.io/test-infra/prow/crier.(*reconciler).reconcile(0xc0260c86c0, 0x2594d90, 0xc016221bc0, 0xc024e197a0, 0xc033fd90ca, 0x2, 0xc033fa9e00, 0x24, 0x0, 0x0, ...)
prow/crier/controller.go:198 +0x764
k8s.io/test-infra/prow/crier.(*reconciler).Reconcile(0xc0260c86c0, 0x2594d90, 0xc02f1fb920, 0xc033fd90ca, 0x2, 0xc033fa9e00, 0x24, 0xc02f1fb920, 0xc02f1fb8f0, 0xc06a87ddb0, ...)
prow/crier/controller.go:148 +0x405
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0xc024dadcc0, 0x2594d90, 0xc02f1fb8f0, 0xc033fd90ca, 0x2, 0xc033fa9e00, 0x24, 0xc02f1fb800, 0x0, 0x0, ...)
external/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:114 +0x247
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc024dadcc0, 0x2594ce8, 0xc025c60340, 0x1fba4e0, 0xc06e98f120)
external/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:311 +0x305
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc024dadcc0, 0x2594ce8, 0xc025c60340, 0x0)
external/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:266 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2(0xc01b45c960, 0xc024dadcc0, 0x2594ce8, 0xc025c60340)
external/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:227 +0x6b
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
external/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:223 +0x4d2
```
**What you expected to happen**:
No crash
**How to reproduce it (as minimally and precisely as possible)**:
The null dereference is the following line which means the `Refs` are nil, nothing else is a pointer there. So this likely means a periodic entered this code somehow.
```go
return strings.Join([]string{
pj.Spec.Context,
pj.Spec.Refs.Pulls[0].SHA,
^^^^^^^^^^^^^^^^^^^^^^^^^^^ line 299
fmt.Sprintf("[link](%s)", pj.Status.URL),
required,
fmt.Sprintf("`%s`", pj.Spec.RerunCommand),
}, " | ")
```
**Anything else we need to know?**:
I'm reporting this for completeness, I'm also submitting a PR to fix the problem (at least partially).
/area prow/crier | 1.0 | crier: invalid memory address or nil pointer dereference on posting a GitHub comment in summary mode - **What happened**:
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x68 pc=0x1c6025b]
goroutine 1500 [running]:
k8s.io/test-infra/prow/github/report.createEntry(0x1cc0523, 0x7, 0xc0455af152, 0xe, 0xc03217ff20, 0x24, 0x0, 0x0, 0xc05858763a, 0x2, ...)
prow/github/report/report.go:299 +0x1db
k8s.io/test-infra/prow/github/report.parseIssueComments(0xc097480000, 0x2e, 0x4a, 0xc06afa4e60, 0xc00ee59500, 0x4, 0x4, 0xd5, 0xc00ee59500, 0x4, ...)
prow/github/report/report.go:274 +0x85b
k8s.io/test-infra/prow/github/report.ReportComment(0x2594d58, 0xc0137c6600, 0x7f142bf44cb0, 0xc0260c83c0, 0xc000f599c0, 0xc097480000, 0x2e, 0x4a, 0xc000172040, 0x2, ...)
prow/github/report/report.go:187 +0x445
k8s.io/test-infra/prow/crier/reporters/github.(*Client).Report(0xc025bb7e80, 0x2594d58, 0xc0137c6600, 0xc024e19960, 0xc02ff2cb00, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
prow/crier/reporters/github/reporter.go:207 +0x6b9
k8s.io/test-infra/prow/crier.(*reconciler).reconcile(0xc0260c86c0, 0x2594d90, 0xc016221bc0, 0xc024e197a0, 0xc033fd90ca, 0x2, 0xc033fa9e00, 0x24, 0x0, 0x0, ...)
prow/crier/controller.go:198 +0x764
k8s.io/test-infra/prow/crier.(*reconciler).Reconcile(0xc0260c86c0, 0x2594d90, 0xc02f1fb920, 0xc033fd90ca, 0x2, 0xc033fa9e00, 0x24, 0xc02f1fb920, 0xc02f1fb8f0, 0xc06a87ddb0, ...)
prow/crier/controller.go:148 +0x405
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0xc024dadcc0, 0x2594d90, 0xc02f1fb8f0, 0xc033fd90ca, 0x2, 0xc033fa9e00, 0x24, 0xc02f1fb800, 0x0, 0x0, ...)
external/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:114 +0x247
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc024dadcc0, 0x2594ce8, 0xc025c60340, 0x1fba4e0, 0xc06e98f120)
external/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:311 +0x305
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc024dadcc0, 0x2594ce8, 0xc025c60340, 0x0)
external/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:266 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2(0xc01b45c960, 0xc024dadcc0, 0x2594ce8, 0xc025c60340)
external/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:227 +0x6b
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
external/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:223 +0x4d2
```
**What you expected to happen**:
No crash
**How to reproduce it (as minimally and precisely as possible)**:
The null dereference is the following line which means the `Refs` are nil, nothing else is a pointer there. So this likely means a periodic entered this code somehow.
```go
return strings.Join([]string{
pj.Spec.Context,
pj.Spec.Refs.Pulls[0].SHA,
^^^^^^^^^^^^^^^^^^^^^^^^^^^ line 299
fmt.Sprintf("[link](%s)", pj.Status.URL),
required,
fmt.Sprintf("`%s`", pj.Spec.RerunCommand),
}, " | ")
```
**Anything else we need to know?**:
I'm reporting this for completeness, I'm also submitting a PR to fix the problem (at least partially).
/area prow/crier | non_priority | crier invalid memory address or nil pointer dereference on posting a github comment in summary mode what happened panic runtime error invalid memory address or nil pointer dereference goroutine io test infra prow github report createentry prow github report report go io test infra prow github report parseissuecomments prow github report report go io test infra prow github report reportcomment prow github report report go io test infra prow crier reporters github client report prow crier reporters github reporter go io test infra prow crier reconciler reconcile prow crier controller go io test infra prow crier reconciler reconcile prow crier controller go sigs io controller runtime pkg internal controller controller reconcile external io sigs controller runtime pkg internal controller controller go sigs io controller runtime pkg internal controller controller reconcilehandler external io sigs controller runtime pkg internal controller controller go sigs io controller runtime pkg internal controller controller processnextworkitem external io sigs controller runtime pkg internal controller controller go sigs io controller runtime pkg internal controller controller start external io sigs controller runtime pkg internal controller controller go created by sigs io controller runtime pkg internal controller controller start external io sigs controller runtime pkg internal controller controller go what you expected to happen no crash how to reproduce it as minimally and precisely as possible the null dereference is the following line which means the refs are nil nothing else is a pointer there so this likely means a periodic entered this code somehow go return strings join string pj spec context pj spec refs pulls sha line fmt sprintf s pj status url required fmt sprintf s pj spec reruncommand anything else we need to know i m reporting this for completeness i m also submitting a pr to fix the problem at least partially area prow crier | 0 
|
466,934 | 13,437,491,614 | IssuesEvent | 2020-09-07 16:01:29 | php-censor/php-censor | https://api.github.com/repos/php-censor/php-censor | closed | [Localization] Improve Spanish localization (lang.es.php) | component:localization other:help-wanted priority:minor type:enhancement | See [lang.es.php](https://github.com/php-censor/php-censor/blob/master/src/Languages/lang.es.php):
**Not present strings:** 'per_page', 'default', 'login', 'remember_me', 'environment_x', 'project_groups', 'build_now_debug', 'delete_old_builds', 'delete_all_builds', 'projects_with_build_errors', 'no_build_errors', 'failed_allowed', 'error', 'skipped', 'trace', 'default_branch_only', 'overwrite_build_config', 'environments_label', 'all', 'date', 'environment', 'build_source', 'source_unknown', 'source_manual_web', 'source_manual_console', 'source_periodical', 'source_webhook_push', 'source_webhook_pull_request_created', 'source_webhook_pull_request_updated', 'source_webhook_pull_request_approved', 'source_webhook_pull_request_merged', 'group_projects', 'project_group', 'group_count', 'group_edit', 'group_delete', 'group_add', 'group_add_edit', 'group_title', 'group_save', 'errors', 'information', 'is_new', 'new', 'rebuild_now_debug', 'all_errors', 'only_new', 'only_old', 'new_errors', 'total_errors', 'classes', 'methods', 'coverage', 'phan_warnings', 'php_cs_fixer_warnings', 'php_cpd_warnings', 'php_tal_lint_warnings', 'php_tal_lint_errors', 'behat_warnings', 'sensiolabs_insight_warnings', 'technical_debt_warnings', 'merged_branches', 'codeception_feature', 'codeception_suite', 'codeception_time', 'codeception_synopsis', 'test_message', 'test_no_message', 'test_success', 'test_fail', 'test_skipped', 'test_error', 'test_todo', 'test_total', 'build-summary', 'stage', 'duration', 'seconds', 'plugin', 'stage_setup', 'stage_test', 'stage_deploy', 'stage_complete', 'stage_success', 'stage_failure', 'stage_broken', 'stage_fixed', 'severity', 'all_plugins', 'all_severities', 'filters', 'errors_selected', 'build_details', 'commit_details', 'committer', 'commit_message', 'timing', 'created', 'started', 'finished', 'add_to_queue_failed', 'critical', 'high', 'normal', 'low', 'confirm_message', 'confirm_title', 'confirm_ok', 'confirm_cancel', 'confirm_success', 'confirm_failed', 'public_status_title', 'public_status_image', 'public_status_page'.
**Strings same than English:** 'plugins', 'builds', 'commit', 'webhooks', 'build_n', 'commit_id_x', 'build', 'suite', 'test', 'ok', 'no', 'application_secret', 'none'. | 1.0 | [Localization] Improve Spanish localization (lang.es.php) - See [lang.es.php](https://github.com/php-censor/php-censor/blob/master/src/Languages/lang.es.php):
**Not present strings:** 'per_page', 'default', 'login', 'remember_me', 'environment_x', 'project_groups', 'build_now_debug', 'delete_old_builds', 'delete_all_builds', 'projects_with_build_errors', 'no_build_errors', 'failed_allowed', 'error', 'skipped', 'trace', 'default_branch_only', 'overwrite_build_config', 'environments_label', 'all', 'date', 'environment', 'build_source', 'source_unknown', 'source_manual_web', 'source_manual_console', 'source_periodical', 'source_webhook_push', 'source_webhook_pull_request_created', 'source_webhook_pull_request_updated', 'source_webhook_pull_request_approved', 'source_webhook_pull_request_merged', 'group_projects', 'project_group', 'group_count', 'group_edit', 'group_delete', 'group_add', 'group_add_edit', 'group_title', 'group_save', 'errors', 'information', 'is_new', 'new', 'rebuild_now_debug', 'all_errors', 'only_new', 'only_old', 'new_errors', 'total_errors', 'classes', 'methods', 'coverage', 'phan_warnings', 'php_cs_fixer_warnings', 'php_cpd_warnings', 'php_tal_lint_warnings', 'php_tal_lint_errors', 'behat_warnings', 'sensiolabs_insight_warnings', 'technical_debt_warnings', 'merged_branches', 'codeception_feature', 'codeception_suite', 'codeception_time', 'codeception_synopsis', 'test_message', 'test_no_message', 'test_success', 'test_fail', 'test_skipped', 'test_error', 'test_todo', 'test_total', 'build-summary', 'stage', 'duration', 'seconds', 'plugin', 'stage_setup', 'stage_test', 'stage_deploy', 'stage_complete', 'stage_success', 'stage_failure', 'stage_broken', 'stage_fixed', 'severity', 'all_plugins', 'all_severities', 'filters', 'errors_selected', 'build_details', 'commit_details', 'committer', 'commit_message', 'timing', 'created', 'started', 'finished', 'add_to_queue_failed', 'critical', 'high', 'normal', 'low', 'confirm_message', 'confirm_title', 'confirm_ok', 'confirm_cancel', 'confirm_success', 'confirm_failed', 'public_status_title', 'public_status_image', 'public_status_page'.
**Strings same than English:** 'plugins', 'builds', 'commit', 'webhooks', 'build_n', 'commit_id_x', 'build', 'suite', 'test', 'ok', 'no', 'application_secret', 'none'. | priority | improve spanish localization lang es php see not present strings per page default login remember me environment x project groups build now debug delete old builds delete all builds projects with build errors no build errors failed allowed error skipped trace default branch only overwrite build config environments label all date environment build source source unknown source manual web source manual console source periodical source webhook push source webhook pull request created source webhook pull request updated source webhook pull request approved source webhook pull request merged group projects project group group count group edit group delete group add group add edit group title group save errors information is new new rebuild now debug all errors only new only old new errors total errors classes methods coverage phan warnings php cs fixer warnings php cpd warnings php tal lint warnings php tal lint errors behat warnings sensiolabs insight warnings technical debt warnings merged branches codeception feature codeception suite codeception time codeception synopsis test message test no message test success test fail test skipped test error test todo test total build summary stage duration seconds plugin stage setup stage test stage deploy stage complete stage success stage failure stage broken stage fixed severity all plugins all severities filters errors selected build details commit details committer commit message timing created started finished add to queue failed critical high normal low confirm message confirm title confirm ok confirm cancel confirm success confirm failed public status title public status image public status page strings same than english plugins builds commit webhooks build n commit id x build suite test ok no application secret none | 1 |
422,423 | 28,437,384,890 | IssuesEvent | 2023-04-15 13:32:25 | ros2-dotnet/ros2_dotnet | https://api.github.com/repos/ros2-dotnet/ros2_dotnet | closed | Build overlaying Galactic | documentation Galactic | I an trying my luck with building it overlaying galactic and i encounter an error in file path like below
```
Failed <<< rosidl_typesupport_introspection_c [7.89s, exited with code 1]
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\VC\VCTargets\Microsoft.CppBuild.targets(321,5): error MSB3491: Could not write lines to file "x64\Release\ament_cmake_python_symlink_rosidl_typesupport_introspection_c_setup\ament_cm.81E1E29E.tlog\ament_cmake_python_symlink_rosidl_typesupport_introspection_c_setup.lastbuildstate". The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters. [D:\MyGit\ros2-dotnet\ros2_dotnet_ws\build\rosidl_typesupport_introspection_c\ament_cmake_python_symlink_rosidl_typesupport_introspection_c_setup.vcxproj]
```
Any hint how i can solve this? | 1.0 | Build overlaying Galactic - I an trying my luck with building it overlaying galactic and i encounter an error in file path like below
```
Failed <<< rosidl_typesupport_introspection_c [7.89s, exited with code 1]
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\VC\VCTargets\Microsoft.CppBuild.targets(321,5): error MSB3491: Could not write lines to file "x64\Release\ament_cmake_python_symlink_rosidl_typesupport_introspection_c_setup\ament_cm.81E1E29E.tlog\ament_cmake_python_symlink_rosidl_typesupport_introspection_c_setup.lastbuildstate". The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters. [D:\MyGit\ros2-dotnet\ros2_dotnet_ws\build\rosidl_typesupport_introspection_c\ament_cmake_python_symlink_rosidl_typesupport_introspection_c_setup.vcxproj]
```
Any hint how i can solve this? | non_priority | build overlaying galactic i an trying my luck with building it overlaying galactic and i encounter an error in file path like below failed rosidl typesupport introspection c c program files microsoft visual studio community ide vc vctargets microsoft cppbuild targets error could not write lines to file release ament cmake python symlink rosidl typesupport introspection c setup ament cm tlog ament cmake python symlink rosidl typesupport introspection c setup lastbuildstate the specified path file name or both are too long the fully qualified file name must be less than characters and the directory name must be less than characters any hint how i can solve this | 0 |
488,013 | 14,073,515,823 | IssuesEvent | 2020-11-04 05:05:00 | MBFVSolutionsLLC/ChiEpsilon | https://api.github.com/repos/MBFVSolutionsLLC/ChiEpsilon | closed | I need to switch out a couple of DC | HIGH PRIORITY bug | We've had two changes of district councilors.
The Great Lakes DC is Gian A. Rassati (GRN: 109838) and the Southern DC is David A. Chin (GRN: 89299)
I got this when I tried to change out Robbie Barns in Southern.

Is this not the place where we make these changes?
| 1.0 | I need to switch out a couple of DC - We've had two changes of district councilors.
The Great Lakes DC is Gian A. Rassati (GRN: 109838) and the Southern DC is David A. Chin (GRN: 89299)
I got this when I tried to change out Robbie Barns in Southern.

Is this not the place where we make these changes?
| priority | i need to switch out a couple of dc we ve had two changes of district councilors the great lakes dc is gian a rassati grn and the southern dc is david a chin grn i got this when i tried to change out robbie barns in southern is this not the place where we make these changes | 1 |
733,051 | 25,285,901,624 | IssuesEvent | 2022-11-16 19:16:27 | svthalia/concrexit | https://api.github.com/repos/svthalia/concrexit | opened | Set storage class for different file types on S3 | priority: low feature | ### Is your feature request related to a problem? Please describe.
Currently, all files on S3 have the same standard storage class.
### Describe the solution you'd like
Be able to set the storage class for files
### Motivation
Money
### Describe alternatives you've considered
### Additional context
| 1.0 | Set storage class for different file types on S3 - ### Is your feature request related to a problem? Please describe.
Currently, all files on S3 have the same standard storage class.
### Describe the solution you'd like
Be able to set the storage class for files
### Motivation
Money
### Describe alternatives you've considered
### Additional context
| priority | set storage class for different file types on is your feature request related to a problem please describe currently all files on have the same standard storage class describe the solution you d like be able to set the storage class for files motivation money describe alternatives you ve considered additional context | 1 |
13,229 | 3,317,712,003 | IssuesEvent | 2015-11-06 23:13:13 | UCHIC/ODM2Sensor | https://api.github.com/repos/UCHIC/ODM2Sensor | closed | Vocabularies Management | django version feature ready for testing working on | Need to add a 'New' and 'Edit' forms for the Site Type and the Equipment Type vocabularies.
- [ ] Add new Site Type
- [ ] Edit Site Type
- [ ] Add new Equipment Type
- [ ] Edit Equipment Type | 1.0 | Vocabularies Management - Need to add a 'New' and 'Edit' forms for the Site Type and the Equipment Type vocabularies.
- [ ] Add new Site Type
- [ ] Edit Site Type
- [ ] Add new Equipment Type
- [ ] Edit Equipment Type | non_priority | vocabularies management need to add a new and edit forms for the site type and the equipment type vocabularies add new site type edit site type add new equipment type edit equipment type | 0 |
364,304 | 10,761,971,150 | IssuesEvent | 2019-10-31 22:07:49 | JuezUN/INGInious | https://api.github.com/repos/JuezUN/INGInious | closed | Deleting the tasks files is a difficult and tedious work | Feature request Medium Priority | This feature request consists of adding selectors and a general delete button for all the files. | 1.0 | Deleting the tasks files is a difficult and tedious work - This feature request consists of adding selectors and a general delete button for all the files. | priority | deleting the tasks files is a difficult and tedious work this feature request consists of adding selectors and a general delete button for all the files | 1 |
544,450 | 15,893,707,107 | IssuesEvent | 2021-04-11 07:12:17 | open-wa/wa-automate-nodejs | https://api.github.com/repos/open-wa/wa-automate-nodejs | closed | enhancements: popup | PRIORITY | - [x] detect if the server is behind a reverse proxy
- [x] send a basic message with the sessionId when connected | 1.0 | enhancements: popup - - [x] detect if the server is behind a reverse proxy
- [x] send a basic message with the sessionId when connected | priority | enhancements popup detect if the server is behind a reverse proxy send a basic message with the sessionid when connected | 1 |
107,104 | 16,751,630,619 | IssuesEvent | 2021-06-12 01:33:01 | gms-ws-demo/nibrs | https://api.github.com/repos/gms-ws-demo/nibrs | opened | CVE-2019-17531 (High) detected in multiple libraries | security vulnerability | ## CVE-2019-17531 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.6.jar</b>, <b>jackson-databind-2.9.5.jar</b>, <b>jackson-databind-2.8.10.jar</b>, <b>jackson-databind-2.8.0.jar</b>, <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-staging-data/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-flatfile/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.10.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.10/jackson-databind-2.8.10.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/jackson-databind-2.8.10.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.10.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.0/jackson-databind-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.8.0.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.5.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/nibrs/commit/9fb1c19bd26c2113d1961640de126a33eacdc946">9fb1c19bd26c2113d1961640de126a33eacdc946</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload.
<p>Publish Date: 2019-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531>CVE-2019-17531</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531</a></p>
<p>Release Date: 2019-10-12</p>
<p>Fix Resolution: 2.10</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/tools/nibrs-staging-data/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-route/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-xmlfile/pom.xml","/tools/nibrs-validation/pom.xml","/web/nibrs-web/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.5","packageFilePaths":["/tools/nibrs-flatfile/pom.xml","/tools/nibrs-validate-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.9.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.10","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.0","packageFilePaths":["/tools/nibrs-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.8.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.s
pringframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-json:2.1.5.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-17531","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-17531 (High) detected in multiple libraries - ## CVE-2019-17531 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.6.jar</b>, <b>jackson-databind-2.9.5.jar</b>, <b>jackson-databind-2.8.10.jar</b>, <b>jackson-databind-2.8.0.jar</b>, <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-staging-data/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-flatfile/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.10.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.10/jackson-databind-2.8.10.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/jackson-databind-2.8.10.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.10.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.0/jackson-databind-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.8.0.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.5.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/nibrs/commit/9fb1c19bd26c2113d1961640de126a33eacdc946">9fb1c19bd26c2113d1961640de126a33eacdc946</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload.
<p>Publish Date: 2019-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531>CVE-2019-17531</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531</a></p>
<p>Release Date: 2019-10-12</p>
<p>Fix Resolution: 2.10</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/tools/nibrs-staging-data/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-route/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-xmlfile/pom.xml","/tools/nibrs-validation/pom.xml","/web/nibrs-web/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.5","packageFilePaths":["/tools/nibrs-flatfile/pom.xml","/tools/nibrs-validate-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.9.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.10","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.0","packageFilePaths":["/tools/nibrs-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.8.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.s
pringframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-json:2.1.5.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-17531","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_priority | cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs staging data pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar nibrs web nibrs web target nibrs web web inf lib jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository 
com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs flatfile pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy tika parsers jar root library x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs fbi service pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar nibrs tools nibrs fbi service target nibrs fbi service web inf lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs common pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy tika parsers jar root library x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs summary report common pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable 
library found in head commit a href found in base branch master vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the apache extra version x jar in the classpath and an attacker can provide a jndi service to access it is possible to make the service execute a malicious payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true dependencytree org apache tika tika parsers com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true dependencytree org apache tika tika parsers com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion 
packagefilepaths istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework boot spring boot starter json release com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the apache extra version x jar in the classpath and an attacker can provide a jndi service to access it is possible to make the service execute a malicious payload vulnerabilityurl | 0 |
101,465 | 16,512,277,707 | IssuesEvent | 2021-05-26 06:27:07 | valtech-ch/microservice-kubernetes-cluster | https://api.github.com/repos/valtech-ch/microservice-kubernetes-cluster | opened | CVE-2019-12086 (High) detected in jackson-databind-2.9.8.jar | security vulnerability | ## CVE-2019-12086 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: microservice-kubernetes-cluster/functions/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-function-adapter-azure-3.1.2.jar (Root Library)
- spring-cloud-function-context-3.1.2.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/eb274179a823f7d17154880d5a503973bae259a0">eb274179a823f7d17154880d5a503973bae259a0</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation.
<p>Publish Date: 2019-05-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12086>CVE-2019-12086</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086</a></p>
<p>Release Date: 2019-05-17</p>
<p>Fix Resolution: 2.9.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-12086 (High) detected in jackson-databind-2.9.8.jar - ## CVE-2019-12086 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: microservice-kubernetes-cluster/functions/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-function-adapter-azure-3.1.2.jar (Root Library)
- spring-cloud-function-context-3.1.2.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/eb274179a823f7d17154880d5a503973bae259a0">eb274179a823f7d17154880d5a503973bae259a0</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation.
<p>Publish Date: 2019-05-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12086>CVE-2019-12086</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086</a></p>
<p>Release Date: 2019-05-17</p>
<p>Fix Resolution: 2.9.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file microservice kubernetes cluster functions build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring cloud function adapter azure jar root library spring cloud function context jar x jackson databind jar vulnerable library found in head commit a href found in base branch develop vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind x before when default typing is enabled either globally or for a specific property for an externally exposed json endpoint the service has the mysql connector java jar or earlier in the classpath and an attacker can host a crafted mysql server reachable by the victim an attacker can send a crafted json message that allows them to read arbitrary local files on the server this occurs because of missing com mysql cj jdbc admin miniadmin validation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
66,583 | 3,256,049,459 | IssuesEvent | 2015-10-20 11:59:03 | remkos/rads | https://api.github.com/repos/remkos/rads | opened | Add ALES retracker output | enhancement Priority-Medium | The ALES retracker output for range, sigma0 and SWH is available at PODAAC.
This will cover a 55-km swath along all global coastlines (5 km inland, 50 km off shore).
Requesting just the retracker output (instead of SGDRs) from NOCS. | 1.0 | Add ALES retracker output - The ALES retracker output for range, sigma0 and SWH is available at PODAAC.
This will cover a 55-km swath along all global coastlines (5 km inland, 50 km off shore).
Requesting just the retracker output (instead of SGDRs) from NOCS. | priority | add ales retracker output the ales retracker output for range and swh is available at podaac this will cover a km swath along all global coastlines km inland km off shore requesting just the retracker output instead of sgdrs from nocs | 1 |
225,024 | 7,476,806,297 | IssuesEvent | 2018-04-04 05:34:22 | CS2103JAN2018-W15-B4/main | https://api.github.com/repos/CS2103JAN2018-W15-B4/main | closed | As an Exco member who created a poll, I want to view results of the poll | enhancement priority.high type.UI type.story | So that I can understand the other members' opinions | 1.0 | As an Exco member who created a poll, I want to view results of the poll - So that I can understand the other members' opinions | priority | as an exco member who created a poll i want to view results of the poll so that i can understand the other members opinions | 1 |
422,148 | 12,266,997,745 | IssuesEvent | 2020-05-07 09:54:14 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | accounts.google.com - see bug description | browser-firefox engine-gecko ml-needsdiagnosis-false os-mac priority-critical | <!-- @browser: Firefox 78.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:78.0) Gecko/20100101 Firefox/78.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/52569 -->
**URL**: https://accounts.google.com/signin/v2/identifier?hl=en&passive=true&continue=https%3A%2F%2Fwww.google.com%2F&flowName=GlifWebSignIn&flowEntry=ServiceLogin
**Browser / Version**: Firefox 78.0
**Operating System**: Mac OS X 10.11
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: Wont Log In
**Steps to Reproduce**:
I Tried To Log In To Google, But It Wont Let Me.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200506093716</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/5/55d97e37-9e3e-4807-b278-03bbf88eebb1)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | accounts.google.com - see bug description - <!-- @browser: Firefox 78.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:78.0) Gecko/20100101 Firefox/78.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/52569 -->
**URL**: https://accounts.google.com/signin/v2/identifier?hl=en&passive=true&continue=https%3A%2F%2Fwww.google.com%2F&flowName=GlifWebSignIn&flowEntry=ServiceLogin
**Browser / Version**: Firefox 78.0
**Operating System**: Mac OS X 10.11
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: Wont Log In
**Steps to Reproduce**:
I Tried To Log In To Google, But It Wont Let Me.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200506093716</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/5/55d97e37-9e3e-4807-b278-03bbf88eebb1)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | accounts google com see bug description url browser version firefox operating system mac os x tested another browser yes chrome problem type something else description wont log in steps to reproduce i tried to log in to google but it wont let me browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 1 |
24,140 | 4,059,257,746 | IssuesEvent | 2016-05-25 08:56:10 | difi/move-integrasjonspunkt | https://api.github.com/repos/difi/move-integrasjonspunkt | reopened | Leveringskvittering sendes for tidlig | test | Leveringskvittering sendes før sjekk gjøres på om innkommende melding er en kvittering. Dette forårsaker meldingsloop | 1.0 | Leveringskvittering sendes for tidlig - Leveringskvittering sendes før sjekk gjøres på om innkommende melding er en kvittering. Dette forårsaker meldingsloop | non_priority | leveringskvittering sendes for tidlig leveringskvittering sendes før sjekk gjøres på om innkommende melding er en kvittering dette forårsaker meldingsloop | 0 |
426,888 | 12,389,691,183 | IssuesEvent | 2020-05-20 09:25:39 | nativescript-vue/nativescript-vue | https://api.github.com/repos/nativescript-vue/nativescript-vue | closed | OnTouch error trying to Get X and Y | priority:normal | ### Version
2.6.1
### Reproduction link
[https://play.nativescript.org/?template=play-vue&id=Xg7jdt](https://play.nativescript.org/?template=play-vue&id=Xg7jdt)
### Platform and OS info
Android 9 - MIUI 11.0.3, Nativescript-Vue 2.6.10, Windows 10
### Steps to reproduce
I have added a @touch event listener on Label element like below:
<Label class="placeholder" :text="message" @touch="onTouch"></Label>
and it is linked to the following function
methods: { onTouch(arg) { console.log(arg); } }
When running the application without the 'args' argument e.x. onTouch() { } it works fine. But when i enter the args in order to get the coordinates of the touched location i receive the following error.
LOG from device Redmi johny: [Vue warn]: Error in v-on handler: "Error: java.lang.NoSuchFieldError: no "I" field "CLASSIFICATION_AMBIGUOUS_GESTURE" in class "Landroid/view/MotionEvent;" or its superclasses"
LOG from device Redmi johny: An uncaught Exception occurred on "main" thread.
Calling js method onTouch failed
Error: java.lang.NoSuchFieldError: no "I" field "CLASSIFICATION_AMBIGUOUS_GESTURE" in class "Landroid/view/MotionEvent;" or its superclasses
More info can be fund when running the example in the playground.
### What is expected?
I am trying to get the X and Y coordinates from a touch event
### What is actually happening?
An error occurs described above
<!-- generated by nativescript-vue-issue-helper. DO NOT REMOVE --> | 1.0 | OnTouch error trying to Get X and Y - ### Version
2.6.1
### Reproduction link
[https://play.nativescript.org/?template=play-vue&id=Xg7jdt](https://play.nativescript.org/?template=play-vue&id=Xg7jdt)
### Platform and OS info
Android 9 - MIUI 11.0.3, Nativescript-Vue 2.6.10, Windows 10
### Steps to reproduce
I have added a @touch event listener on Label element like below:
<Label class="placeholder" :text="message" @touch="onTouch"></Label>
and it is linked to the following function
methods: { onTouch(arg) { console.log(arg); } }
When running the application without the 'args' argument e.x. onTouch() { } it works fine. But when i enter the args in order to get the coordinates of the touched location i receive the following error.
LOG from device Redmi johny: [Vue warn]: Error in v-on handler: "Error: java.lang.NoSuchFieldError: no "I" field "CLASSIFICATION_AMBIGUOUS_GESTURE" in class "Landroid/view/MotionEvent;" or its superclasses"
LOG from device Redmi johny: An uncaught Exception occurred on "main" thread.
Calling js method onTouch failed
Error: java.lang.NoSuchFieldError: no "I" field "CLASSIFICATION_AMBIGUOUS_GESTURE" in class "Landroid/view/MotionEvent;" or its superclasses
More info can be fund when running the example in the playground.
### What is expected?
I am trying to get the X and Y coordinates from a touch event
### What is actually happening?
An error occurs described above
<!-- generated by nativescript-vue-issue-helper. DO NOT REMOVE --> | priority | ontouch error trying to get x and y version reproduction link platform and os info android miui nativescript vue windows steps to reproduce i have added a touch event listener on label element like below and it is linked to the following function methods ontouch arg console log arg when running the application without the args argument e x ontouch it works fine but when i enter the args in order to get the coordinates of the touched location i receive the following error log from device redmi johny error in v on handler error java lang nosuchfielderror no i field classification ambiguous gesture in class landroid view motionevent or its superclasses log from device redmi johny an uncaught exception occurred on main thread calling js method ontouch failed error java lang nosuchfielderror no i field classification ambiguous gesture in class landroid view motionevent or its superclasses more info can be fund when running the example in the playground what is expected i am trying to get the x and y coordinates from a touch event what is actually happening an error occurs described above | 1 |
144,162 | 11,596,388,994 | IssuesEvent | 2020-02-24 18:51:33 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | cli: TestCLITimeout failed | C-test-failure O-robot branch-master | [(cli).TestCLITimeout failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1763778&tab=buildLog) on [master@6d541881b9fc71c36175814fb206487d46b87f1a](https://github.com/cockroachdb/cockroach/commits/6d541881b9fc71c36175814fb206487d46b87f1a):
```
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/db.go:695 +0xd2
github.com/cockroachdb/cockroach/pkg/ts.(*DB).storeKvs(0xc000fea930, 0x6aacc00, 0xc000f56750, 0xc001e38000, 0x2a4, 0x400, 0xc00157e2a0, 0x32)
/go/src/github.com/cockroachdb/cockroach/pkg/ts/db.go:307 +0x2cc
github.com/cockroachdb/cockroach/pkg/ts.(*DB).tryStoreData(0xc000fea930, 0x6aacc00, 0xc000f56750, 0x1, 0xc001d44000, 0x2a4, 0x492, 0x7, 0x6aacc00)
/go/src/github.com/cockroachdb/cockroach/pkg/ts/db.go:245 +0x3a3
github.com/cockroachdb/cockroach/pkg/ts.(*DB).StoreData(0xc000fea930, 0x6aacc00, 0xc000f56750, 0x1, 0xc001d44000, 0x2a4, 0x492, 0x6b25460, 0xc000386c80)
/go/src/github.com/cockroachdb/cockroach/pkg/ts/db.go:210 +0x139
github.com/cockroachdb/cockroach/pkg/ts.(*poller).poll.func1(0x6aacc00, 0xc000f56450)
/go/src/github.com/cockroachdb/cockroach/pkg/ts/db.go:193 +0x1d4
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunTask(0xc001ac2280, 0x6aacc00, 0xc000f56450, 0x5cf77c5, 0xf, 0xc001215dd0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:284 +0x151
github.com/cockroachdb/cockroach/pkg/ts.(*poller).poll(0xc000b062a0)
/go/src/github.com/cockroachdb/cockroach/pkg/ts/db.go:184 +0x18b
github.com/cockroachdb/cockroach/pkg/ts.(*poller).start.func1(0x6aacc00, 0xc000f566c0)
/go/src/github.com/cockroachdb/cockroach/pkg/ts/db.go:162 +0x65
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc000302e30, 0xc001ac2280, 0xc000302e00)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x1ba
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:190 +0xc4
goroutine 1136 [select]:
github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChangeManager).Start.func1(0x6aacc00, 0xc000f081e0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:2148 +0xaea
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc000302e50, 0xc001ac2280, 0xc001c2e340)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x1ba
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:190 +0xc4
rax 0x0
rbx 0x14d95bf7d000
rcx 0x14d95a88b428
rdx 0x6
rdi 0x6d21
rsi 0x6d23
rbp 0x6c540e3
rsp 0x14d957debe98
r8 0xfffffffffffffcf0
r9 0xfeff092d63646b68
r10 0x8
r11 0x206
r12 0x175
r13 0x6c5402e
r14 0xc0001165b0
r15 0x10
rip 0x14d95a88b428
rflags 0x206
cs 0x33
fs 0x0
gs 0x0
FAIL github.com/cockroachdb/cockroach/pkg/cli 1.747s
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestCLITimeout PKG=./pkg/cli TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestCLITimeout.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 1.0 | cli: TestCLITimeout failed - [(cli).TestCLITimeout failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1763778&tab=buildLog) on [master@6d541881b9fc71c36175814fb206487d46b87f1a](https://github.com/cockroachdb/cockroach/commits/6d541881b9fc71c36175814fb206487d46b87f1a):
```
/go/src/github.com/cockroachdb/cockroach/pkg/internal/client/db.go:695 +0xd2
github.com/cockroachdb/cockroach/pkg/ts.(*DB).storeKvs(0xc000fea930, 0x6aacc00, 0xc000f56750, 0xc001e38000, 0x2a4, 0x400, 0xc00157e2a0, 0x32)
/go/src/github.com/cockroachdb/cockroach/pkg/ts/db.go:307 +0x2cc
github.com/cockroachdb/cockroach/pkg/ts.(*DB).tryStoreData(0xc000fea930, 0x6aacc00, 0xc000f56750, 0x1, 0xc001d44000, 0x2a4, 0x492, 0x7, 0x6aacc00)
/go/src/github.com/cockroachdb/cockroach/pkg/ts/db.go:245 +0x3a3
github.com/cockroachdb/cockroach/pkg/ts.(*DB).StoreData(0xc000fea930, 0x6aacc00, 0xc000f56750, 0x1, 0xc001d44000, 0x2a4, 0x492, 0x6b25460, 0xc000386c80)
/go/src/github.com/cockroachdb/cockroach/pkg/ts/db.go:210 +0x139
github.com/cockroachdb/cockroach/pkg/ts.(*poller).poll.func1(0x6aacc00, 0xc000f56450)
/go/src/github.com/cockroachdb/cockroach/pkg/ts/db.go:193 +0x1d4
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunTask(0xc001ac2280, 0x6aacc00, 0xc000f56450, 0x5cf77c5, 0xf, 0xc001215dd0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:284 +0x151
github.com/cockroachdb/cockroach/pkg/ts.(*poller).poll(0xc000b062a0)
/go/src/github.com/cockroachdb/cockroach/pkg/ts/db.go:184 +0x18b
github.com/cockroachdb/cockroach/pkg/ts.(*poller).start.func1(0x6aacc00, 0xc000f566c0)
/go/src/github.com/cockroachdb/cockroach/pkg/ts/db.go:162 +0x65
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc000302e30, 0xc001ac2280, 0xc000302e00)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x1ba
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:190 +0xc4
goroutine 1136 [select]:
github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChangeManager).Start.func1(0x6aacc00, 0xc000f081e0)
/go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:2148 +0xaea
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc000302e50, 0xc001ac2280, 0xc001c2e340)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x1ba
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:190 +0xc4
rax 0x0
rbx 0x14d95bf7d000
rcx 0x14d95a88b428
rdx 0x6
rdi 0x6d21
rsi 0x6d23
rbp 0x6c540e3
rsp 0x14d957debe98
r8 0xfffffffffffffcf0
r9 0xfeff092d63646b68
r10 0x8
r11 0x206
r12 0x175
r13 0x6c5402e
r14 0xc0001165b0
r15 0x10
rip 0x14d95a88b428
rflags 0x206
cs 0x33
fs 0x0
gs 0x0
FAIL github.com/cockroachdb/cockroach/pkg/cli 1.747s
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestCLITimeout PKG=./pkg/cli TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestCLITimeout.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| non_priority | cli testclitimeout failed on go src github com cockroachdb cockroach pkg internal client db go github com cockroachdb cockroach pkg ts db storekvs go src github com cockroachdb cockroach pkg ts db go github com cockroachdb cockroach pkg ts db trystoredata go src github com cockroachdb cockroach pkg ts db go github com cockroachdb cockroach pkg ts db storedata go src github com cockroachdb cockroach pkg ts db go github com cockroachdb cockroach pkg ts poller poll go src github com cockroachdb cockroach pkg ts db go github com cockroachdb cockroach pkg util stop stopper runtask go src github com cockroachdb cockroach pkg util stop stopper go github com cockroachdb cockroach pkg ts poller poll go src github com cockroachdb cockroach pkg ts db go github com cockroachdb cockroach pkg ts poller start go src github com cockroachdb cockroach pkg ts db go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go goroutine github com cockroachdb cockroach pkg sql schemachangemanager start go src github com cockroachdb cockroach pkg sql schema changer go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go rax rbx rcx rdx rdi rsi rbp rsp rip rflags cs fs gs fail github com cockroachdb cockroach pkg cli more parameters goflags json make stressrace tests testclitimeout pkg pkg cli testtimeout stressflags timeout powered by | 0 |
337,777 | 10,220,160,932 | IssuesEvent | 2019-08-15 20:33:27 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | hanime.tv - video or audio doesn't play | browser-focus-geckoview engine-gecko priority-normal | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://hanime.tv/hentai-videos/hakoiri-shoujo-virgin-territory-1
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Video or audio doesn't play
**Description**: video doesn't play
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | hanime.tv - video or audio doesn't play - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://hanime.tv/hentai-videos/hakoiri-shoujo-virgin-territory-1
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Video or audio doesn't play
**Description**: video doesn't play
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | hanime tv video or audio doesn t play url browser version firefox mobile operating system android tested another browser yes problem type video or audio doesn t play description video doesn t play steps to reproduce browser configuration none from with ❤️ | 1 |
21,392 | 3,506,160,065 | IssuesEvent | 2016-01-08 04:08:39 | isushao/sundyandroid | https://api.github.com/repos/isushao/sundyandroid | closed | The download resources were found on VeryCD, thanks. | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. The course ordering is baffling
2. The explanations are logically clear
3. The code and demos are great, keep it up!!
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `lixue...@gmail.com` on 17 Jul 2012 at 8:39 | 1.0 | The download resources were found on VeryCD, thanks. - ```
What steps will reproduce the problem?
1. The course ordering is baffling
2. The explanations are logically clear
3. The code and demos are great, keep it up!!
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `lixue...@gmail.com` on 17 Jul 2012 at 8:39 | non_priority | the download resources were found on verycd thanks what steps will reproduce the problem the course ordering is baffling the explanations are logically clear the code and demos are great keep it up what is the expected output what do you see instead what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by lixue gmail com on jul at | 0 |
63,183 | 3,194,268,744 | IssuesEvent | 2015-09-30 11:06:03 | fusioninventory/fusioninventory-for-glpi | https://api.github.com/repos/fusioninventory/fusioninventory-for-glpi | closed | PHP Fatal error: Call to undefined function logDebug() in .../fusinvdeploy/hook.php on line 154 | Category: Deploy Component: For junior contributor Component: Found in version Priority: Normal Status: Closed Tracker: Bug | ---
Author Name: **Mathieu Parent** (Mathieu Parent)
Original Redmine Issue: 1752, http://forge.fusioninventory.org/issues/1752
Original Date: 2012-08-02
Original Assignee: David Durieux
---
While deploying a package, I get a truncated page, and I see the above message in the Apache error log.
It seems that the logDebug function is not defined anywhere. There is also an uncommented call of it in .../fusinvdeploy/ajax/package.import.php line 49, and in ./webservices/inc/methodinventaire.class.php
line 515.
Found in version 0.83_1.0-RC3.
| 1.0 | PHP Fatal error: Call to undefined function logDebug() in .../fusinvdeploy/hook.php on line 154 - ---
Author Name: **Mathieu Parent** (Mathieu Parent)
Original Redmine Issue: 1752, http://forge.fusioninventory.org/issues/1752
Original Date: 2012-08-02
Original Assignee: David Durieux
---
While deploying a package, I get a truncated page, and I see the above message in the Apache error log.
It seems that the logDebug function is not defined anywhere. There is also an uncommented call of it in .../fusinvdeploy/ajax/package.import.php line 49, and in ./webservices/inc/methodinventaire.class.php
line 515.
Found in version 0.83_1.0-RC3.
| priority | php fatal error call to undefined function logdebug in fusinvdeploy hook php on line author name mathieu parent mathieu parent original redmine issue original date original assignee david durieux while deploying a package i get a truncated page and i see the above message in the apache error log it seems that the logdebug function is not defined anywhere there is also an uncommented call of it in fusinvdeploy ajax package import php line and in webservices inc methodinventaire class php line found in version | 1 |
263,009 | 8,272,954,736 | IssuesEvent | 2018-09-17 01:58:50 | javaee/glassfish | https://api.github.com/repos/javaee/glassfish | closed | EAR deployment fails when OSGi bundle is deployed | Component: OSGi Component: OSGi-JavaEE ERR: Assignee Priority: Critical Type: Bug | We have a JEE application packaged and deployed as EAR. Now we started to develop some OSGi EJB Application Bundles. Both will be deployed in the same Glassfish instance.
The OSGi EJB bundle includes some classes which are packaged in the EAR too. For example the package com.macd.foo is included in the bundle. The package com.macd.bar is not included in the bundle but added to the the Ignore-Package entry of bundles manifest.
Deploying the EAR fails saying that classes from com.macd.bar package could not be found. But the package com.macd.bar is packaged in the EAR file. If the OSGi bundle is removed deployment of EAR works.
So my questions are:
* Why does the OSGi bundle affects deployment of EAR file?
* How can I deploy OSGi bundles containing classes which are available in the EAR file too?
#### Affected Versions
[3.1.2_dev, 3.1.2] | 1.0 | EAR deployment fails when OSGi bundle is deployed - We have a JEE application packaged and deployed as EAR. Now we started to develop some OSGi EJB Application Bundles. Both will be deployed in the same Glassfish instance.
The OSGi EJB bundle includes some classes which are packaged in the EAR too. For example the package com.macd.foo is included in the bundle. The package com.macd.bar is not included in the bundle but added to the the Ignore-Package entry of bundles manifest.
Deploying the EAR fails saying that classes from com.macd.bar package could not be found. But the package com.macd.bar is packaged in the EAR file. If the OSGi bundle is removed deployment of EAR works.
So my questions are:
* Why does the OSGi bundle affects deployment of EAR file?
* How can I deploy OSGi bundles containing classes which are available in the EAR file too?
#### Affected Versions
[3.1.2_dev, 3.1.2] | priority | ear deployment fails when osgi bundle is deployed we have a jee application packaged and deployed as ear now we started to develop some osgi ejb application bundles both will be deployed in the same glassfish instance the osgi ejb bundle includes some classes which are packaged in the ear too for example the package com macd foo is included in the bundle the package com macd bar is not included in the bundle but added to the the ignore package entry of bundles manifest deploying the ear fails saying that classes from com macd bar package could not be found but the package com macd bar is packaged in the ear file if the osgi bundle is removed deployment of ear works so my questions are why does the osgi bundle affects deployment of ear file how can i deploy osgi bundles containing classes which are available in the ear file too affected versions | 1 |
749,836 | 26,181,042,487 | IssuesEvent | 2023-01-02 15:37:22 | projectdiscovery/retryabledns | https://api.github.com/repos/projectdiscovery/retryabledns | closed | Faulty rotate condition in client.queryMultiple | Priority: High Type: Bug | ## Description
The DNS server rotation logic in client.queryMultiple does not rotate the server correctly: it picks one at the beginning (if the variable is nil) and sticks to it for all retries.
Ref: https://github.com/projectdiscovery/retryabledns/blob/32c28e9a7cd396d50dc15886f4ca14595bef5b57/client.go#L276 | 1.0 | Faulty rotate condition in client.queryMultiple - ## Description
The DNS server rotation logic in client.queryMultiple does not rotate the server correctly: it picks one at the beginning (if the variable is nil) and sticks to it for all retries.
Ref: https://github.com/projectdiscovery/retryabledns/blob/32c28e9a7cd396d50dc15886f4ca14595bef5b57/client.go#L276 | priority | faulty rotate condition in client querymultiple description the dns server rotation logic in the client querymultiple does not rotate the server correctly picking one at the beginning if the variable is nil and sticking to it for all retries ref | 1 |
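The fix amounts to advancing the resolver on every attempt instead of latching onto the first pick. Below is a minimal, illustrative Python sketch of per-attempt rotation (the actual library is Go; `query_with_rotation` and its arguments are hypothetical names, not retryabledns API):

```python
from itertools import cycle

def query_with_rotation(resolvers, query, max_retries=3):
    """Try a DNS query, rotating to the next resolver on every retry.

    `query` is a callable taking a resolver address and returning a
    response or raising on failure. Rotation happens per attempt, so a
    single bad resolver cannot absorb every retry.
    """
    rotation = cycle(resolvers)
    last_error = None
    for _ in range(max_retries):
        resolver = next(rotation)  # pick a fresh server each attempt
        try:
            return query(resolver)
        except Exception as err:
            last_error = err
    raise RuntimeError(f"all {max_retries} attempts failed") from last_error
```

With two resolvers and three retries, the attempt sequence is server 1, server 2, server 1, rather than three attempts against whichever server happened to be chosen first.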
227,978 | 7,544,823,947 | IssuesEvent | 2018-04-17 19:38:52 | WordImpress/Give-Snippet-Library | https://api.github.com/repos/WordImpress/Give-Snippet-Library | closed | feat(donation): allow for splitting donations across multiple causes | 5-reported high-priority | ## Issue overview
This is a much-requested feature. Donors are far more likely to give once than they are 4 or 5 times back-to-back, so giving them a way to support multiple causes in one transaction would be preferable.
The best way to describe it is with this example:
### Use Case: churches
A church has 4 causes, all with separate forms for accounting to be able to keep the data separate.
1. General Fund
2. Missionary A
3. Missionary B
4. Homeless ministry
If a donor wants to give in one donation to all 4 causes, that's not currently possible. This is extremely common in church giving, where the memo line of the check denotes a portion of the giving to each fund.
## Proposed solution:
This will likely require the data restructuring that is coming in 2.0, but some sort of process that sits in between the submission of the donation from the front end and the processing of it on the back end, that splits the donation out to multiple forms, with options to enable receipts for each individual form, or just a receipt for the parent form.
In this case, we would be creating not even a "form" in the way we think of them, but a sort of "pre-form" that routes donations through forms.
#### Considerations:
It's likely that sending individual donations through to the gateway back-to-back will cause fraud warnings to go up (either at the gateway, or at the donor's credit card company), so sending money all at once would be preferable (the donor should only see one receipt from the payment gateway, after all. To them, it was one transaction.)
This would create a discrepancy between the gateway logs and the give logs, and would be especially problematic for recurring donations.
| 1.0 | feat(donation): allow for splitting donations across multiple causes - ## Issue overview
This is a much-requested feature. Donors are far more likely to give once than they are 4 or 5 times back-to-back, so giving them a way to support multiple causes in one transaction would be preferable.
The best way to describe it is with this example:
### Use Case: churches
A church has 4 causes, all with separate forms for accounting to be able to keep the data separate.
1. General Fund
2. Missionary A
3. Missionary B
4. Homeless ministry
If a donor wants to give in one donation to all 4 causes, that's not currently possible. This is extremely common in church giving, where the memo line of the check denotes a portion of the giving to each fund.
## Proposed solution:
This will likely require the data restructuring that is coming in 2.0, but some sort of process that sits in between the submission of the donation from the front end and the processing of it on the back end, that splits the donation out to multiple forms, with options to enable receipts for each individual form, or just a receipt for the parent form.
In this case, we would be creating not even a "form" in the way we think of them, but a sort of "pre-form" that routes donations through forms.
#### Considerations:
It's likely that sending individual donations through to the gateway back-to-back will cause fraud warnings to go up (either at the gateway, or at the donor's credit card company), so sending money all at once would be preferable (the donor should only see one receipt from the payment gateway, after all. To them, it was one transaction.)
This would create a discrepancy between the gateway logs and the give logs, and would be especially problematic for recurring donations.
| priority | feat donation allow for splitting donations across multiple causes issue overview this is a much requested feature donors are far more likely to give once than they are or times back to back so giving them a way to support multiple causes in one transaction would be preferable the best way to describe it is with this example use case churches a church as causes all with separate forms for accounting to be able to keep the data separate general fund missionary a missionary b homeless ministry if a donor wants to give in one donation to all causes that s not currently possible this is extremely common in church giving where the memo line of the check denotes a portion of the giving to each fund proposed solution this will likely require the data restructuring that is coming in but some sort of process that sits in between the submission of the donation from the front end and the processing of it on the back end that splits the donation out to multiple forms with options to enable receipts for each individual form or just a receipt for the parent form in this case we would be creating not even a form in the way we think of them but a sort of pre form that routes donations through forms considerations it s likely that sending individual donations through to the gateway back to back will cause fraud warnings to go up either at the gateway or at the donor s credit card company so sending money all at once would be preferable the donor should only see one receipt from the payment gateway after all to them it was one transaction this would create a discrepancy between the gateway logs and the give logs and would be especially problematic for recurring donations | 1 |
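One way to implement the "pre-form" routing described in that issue is to split a single charged amount across the child funds server-side, so the gateway still sees exactly one transaction. A hypothetical Python sketch, not Give's actual API; the fund names and fractional-weight scheme are assumptions:

```python
def split_donation(total_cents, allocations):
    """Split one donation across funds without losing cents.

    `allocations` maps fund name -> fractional weight (weights sum to 1.0).
    Each fund gets its floor share; leftover cents go to the first fund,
    so the parts always sum back to the original total.
    """
    shares = {fund: int(total_cents * w) for fund, w in allocations.items()}
    remainder = total_cents - sum(shares.values())
    first = next(iter(shares))  # dicts preserve insertion order in Python 3.7+
    shares[first] += remainder
    return shares
```

Working in integer cents and pushing the rounding remainder into one fund keeps the ledger consistent with the gateway's single charge, which matters for the gateway-vs-Give log reconciliation the issue raises.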
132,974 | 5,194,527,064 | IssuesEvent | 2017-01-23 04:21:14 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | Router should test config before reloading | area/reliability component/routing kind/enhancement priority/P2 | In order to prevent a completely broken router the router should be able to:
1. test config if the implementation supports it to ensure validity before reloading
2. if there isn't a way to ensure validity it should be able to revert to a previous config if it cannot restart with the new config
In the event that an invalid configuration is generated the router should send an event and log any message received from validation or command execution.
@ramr @rajatchopra
| 1.0 | Router should test config before reloading - In order to prevent a completely broken router the router should be able to:
1. test config if the implementation supports it to ensure validity before reloading
2. if there isn't a way to ensure validity it should be able to revert to a previous config if it cannot restart with the new config
In the event that an invalid configuration is generated the router should send an event and log any message received from validation or command execution.
@ramr @rajatchopra
| priority | router should test config before reloading in order to prevent a completely broken router the router should be able to test config if the implementation supports it to ensure validity before reloading if there isn t a way to ensure validity it should be able to revert to a previous config if it cannot restart with the new config in the event that an invalid configuration is generated the router should send an event and log any message received from validation or command execution ramr rajatchopra | 1 |
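The validate-then-reload behaviour requested above can be sketched as a small wrapper that only promotes a candidate config once a validity check passes, otherwise keeping the previous one. This is a hedged Python sketch: `haproxy -c -f` is HAProxy's real syntax-check invocation, but the surrounding wiring and names are hypothetical:

```python
import subprocess

def reload_with_validation(candidate, active, check=None):
    """Promote `candidate` config only if it validates; otherwise keep `active`.

    `check` returns True for a valid config file path. The default shells
    out to HAProxy's built-in syntax check. Returns (config_in_use, reloaded).
    """
    if check is None:
        def check(path):
            # `haproxy -c -f <path>` exits 0 when the config parses cleanly
            return subprocess.run(["haproxy", "-c", "-f", path]).returncode == 0
    if not check(candidate):
        # invalid config: keep serving with the previous one; the caller
        # would emit an event and log the validation output here
        return active, False
    return candidate, True
```

Making the checker injectable also covers the issue's second case: implementations without a pre-flight check can pass a `check` that attempts a restart and reports failure, triggering the same revert path.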
611,363 | 18,953,295,733 | IssuesEvent | 2021-11-18 17:14:44 | internetarchive/openlibrary | https://api.github.com/repos/internetarchive/openlibrary | closed | Data Dumps not auto-generating | Type: Bug Priority: 2 Affects: Data Lead: @cdrini | Despite #5263 being resolved, it looks like the data dumps weren't uploaded on July 1st :/
### Relevant URL?
* https://github.com/internetarchive/openlibrary/wiki/Generating-Data-Dumps
* https://archive.org/details/ol_exports?sort=-publicdate
Related issues and pull requests:
* #3989
* #4621
* #4671
* #4723
* #5546
* #5673
* #5719
Related files:
* [`docker-compose.production.yml`](../blob/master/docker-compose.production.yml#L90) defines `cron-jobs` Docker container.
* [`docker/ol-cron-start.sh`](../blob/master/docker/ol-cron-start.sh) sets up the cron tasks.
* [olsystem: `/etc/cron.d/openlibrary.ol_home0`](https://github.com/internetarchive/olsystem/blob/master/etc/cron.d/openlibrary.ol_home0#L11) defines the actual job
* modify and then to reactivate do: `crontab /etc/cron.d/openlibrary.ol_home0` Also: https://cron.help
* [ ] internetarchive/olsystem#140
* [`scripts/oldump.sh`](../blob/master/scripts/oldump.sh) is the script that gets run.
* [x] #5860
### Proposal & Constraints
- Run manually for now
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
@mekarpeles @jimman2003
| 1.0 | Data Dumps not auto-generating - Despite #5263 being resolved, it looks like the data dumps weren't uploaded on July 1st :/
### Relevant URL?
* https://github.com/internetarchive/openlibrary/wiki/Generating-Data-Dumps
* https://archive.org/details/ol_exports?sort=-publicdate
Related issues and pull requests:
* #3989
* #4621
* #4671
* #4723
* #5546
* #5673
* #5719
Related files:
* [`docker-compose.production.yml`](../blob/master/docker-compose.production.yml#L90) defines `cron-jobs` Docker container.
* [`docker/ol-cron-start.sh`](../blob/master/docker/ol-cron-start.sh) sets up the cron tasks.
* [olsystem: `/etc/cron.d/openlibrary.ol_home0`](https://github.com/internetarchive/olsystem/blob/master/etc/cron.d/openlibrary.ol_home0#L11) defines the actual job
* modify and then to reactivate do: `crontab /etc/cron.d/openlibrary.ol_home0` Also: https://cron.help
* [ ] internetarchive/olsystem#140
* [`scripts/oldump.sh`](../blob/master/scripts/oldump.sh) is the script that gets run.
* [x] #5860
### Proposal & Constraints
- Run manually for now
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
@mekarpeles @jimman2003
| priority | data dumps not auto generating despite being resolved it looks like the data dumps weren t uploaded on july relevant url related issues and pull requests related files blob master docker compose production yml defines cron jobs docker container blob master docker ol cron start sh sets up the cron tasks defines the actual job modify and then to reactivate do crontab etc cron d openlibrary ol also internetarchive olsystem blob master scripts oldump sh is the script that gets run proposal constraints run manually for now related files stakeholders mekarpeles | 1 |
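A lightweight monitor could have caught the missed July 1st run: given the date of the most recent dump item (e.g. from the archive.org listing), flag when the current month's dump has not appeared past a grace window. A hypothetical sketch; the first-of-month schedule matches the cron job above, but the grace period is an assumption:

```python
from datetime import date

def dump_is_overdue(last_dump, today, grace_days=3):
    """True if no dump has landed for the current month past a grace window.

    Dumps are expected on the 1st of each month (per the cron job); only
    alarm once `grace_days` have passed, to let the job finish uploading.
    """
    month_start = today.replace(day=1)
    past_grace = (today - month_start).days >= grace_days
    return past_grace and last_dump < month_start
```

Run daily (e.g. as another cron entry), this turns a silently skipped dump into an alert within a few days instead of a bug report weeks later.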
34,007 | 7,320,580,294 | IssuesEvent | 2018-03-02 08:03:06 | line/armeria | https://api.github.com/repos/line/armeria | closed | `MeterIdPrefixFunction.ofDefault` doesn't collect service level metrics for unframed request | defect | Please refer to https://gist.github.com/openaphid/63ee584a4b7bb4c646137ff5eea92dea for a reproducible case.
A custom `MeterIdPrefixFunction` could extract the method name from `ServiceRequestContext` correctly, but I think there are some execution order issues in `UnframedGrpcService` and `GrpcService`.
For unframed requests, `requestContent` is not attached to context log:
https://github.com/line/armeria/blob/d3334ce5c25e9ecb45b0c94c54b9b99f2fb0e9ea/grpc/src/main/java/com/linecorp/armeria/server/grpc/GrpcService.java#L143 | 1.0 | `MeterIdPrefixFunction.ofDefault` doesn't collect service level metrics for unframed request - Please refer to https://gist.github.com/openaphid/63ee584a4b7bb4c646137ff5eea92dea for a reproducible case.
A custom `MeterIdPrefixFunction` could extract the method name from `ServiceRequestContext` correctly, but I think there are some execution order issues in `UnframedGrpcService` and `GrpcService`.
For unframed requests, `requestContent` is not attached to context log:
https://github.com/line/armeria/blob/d3334ce5c25e9ecb45b0c94c54b9b99f2fb0e9ea/grpc/src/main/java/com/linecorp/armeria/server/grpc/GrpcService.java#L143 | non_priority | meteridprefixfunction ofdefault doesn t collect service level metrics for unframed request please refer to for a reproducible case a custom meteridprefixfunction could extract method name from servicerequestcontext correctly but i think there are some execution order issues in unframedgrpcservice and grpcservice for unframed requests requestcontent is not attached to context log | 0 |
104,537 | 8,974,705,002 | IssuesEvent | 2019-01-30 01:35:34 | onecodex/onecodex | https://api.github.com/repos/onecodex/onecodex | closed | Improve testing of plots produced by viz | test | Presently, tests succeed if the plotters don't raise. Obviously, it would be better if we check that the plots produced contain the correct information. Since we're using Vega, this is fairly simple as we can check the JSON returned by Altair.
Also, tests in `viz`, `taxonomy`, and `distance` should all be given a once-over to make sure redundant tests are minimized. Presently, they are thorough but slow. | 1.0 | Improve testing of plots produced by viz - Presently, tests succeed if the plotters don't raise. Obviously, it would be better if we check that the plots produced contain the correct information. Since we're using Vega, this is fairly simple as we can check the JSON returned by Altair.
Also, tests in `viz`, `taxonomy`, and `distance` should all be given a once-over to make sure redundant tests are minimized. Presently, they are thorough but slow. | non_priority | improve testing of plots produced by viz presently tests succeed if the plotters don t raise obviously it would be better if we check that the plots produced contain the correct information since we re using vega this is fairly simple as we can check the json returned by altair also tests in viz taxonomy and distance should all be given a once over to make sure redundant tests are minimized presently they are thorough but slow | 0 |
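Since Altair charts serialize to a Vega-Lite dict via `chart.to_dict()`, tests can assert on the spec's contents instead of merely checking that the plotter didn't raise. A minimal helper over plain dicts (dependency-free; the specific channels and field names below are hypothetical examples, not One Codex's actual schema):

```python
def assert_encodes(spec, mark, fields):
    """Check that a Vega-Lite spec (e.g. from `chart.to_dict()`) plots
    the expected data.

    `spec` is the JSON-like dict Altair emits. Asserts the mark type and
    that each encoding channel maps to the expected dataframe column,
    which is far stronger than only checking for the absence of errors.
    """
    assert spec.get("mark") == mark, f"expected mark {mark!r}"
    enc = spec.get("encoding", {})
    for channel, field in fields.items():
        got = enc.get(channel, {}).get("field")
        assert got == field, f"{channel} encodes {got!r}, wanted {field!r}"
```

Because it inspects JSON rather than rendering anything, this kind of check is also fast, which helps with the thorough-but-slow test suites mentioned above.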
255,439 | 8,123,830,427 | IssuesEvent | 2018-08-16 15:38:08 | RenegadeLLC/renegade-dev | https://api.github.com/repos/RenegadeLLC/renegade-dev | opened | Homepage hero text needs to wrap | bug priority: high | Platform: All mobile phones
Text on homepage hero section spills past the screen on all phones when in portrait mode | 1.0 | Homepage hero text needs to wrap - Platform: All mobile phones
Text on homepage hero section spills past the screen on all phones when in portrait mode | priority | homepage hero text needs to wrap platform all mobile phones text on homepage hero section spills past the screen on all phones when in portrait mode | 1 |
687,568 | 23,531,882,852 | IssuesEvent | 2022-08-19 16:07:35 | nv-morpheus/Morpheus | https://api.github.com/repos/nv-morpheus/Morpheus | closed | [FEA] GCP Deployment Support | feature request Priority 0 | Confirm that a generic Helm-based installation of Morpheus (22.06) works properly in an GCP GPU VM. | 1.0 | [FEA] GCP Deployment Support - Confirm that a generic Helm-based installation of Morpheus (22.06) works properly in an GCP GPU VM. | priority | gcp deployment support confirm that a generic helm based installation of morpheus works properly in an gcp gpu vm | 1 |
375,104 | 11,099,921,212 | IssuesEvent | 2019-12-16 18:02:09 | SparkDevNetwork/Rock | https://api.github.com/repos/SparkDevNetwork/Rock | closed | FA Icons calendar and calendar-alt are swapped | Fixed in v10.2 Priority: Low Status: Confirmed Topic: UI Type: Bug | ### Prerequisites
* [x] Put an X between the brackets on this line if you have done all of the following:
* Can you reproduce the problem on a fresh install or the [demo site](http://rock.rocksolidchurchdemo.com/)?
* Did you include your Rock version number and [client culture](https://github.com/SparkDevNetwork/Rock/wiki/Environment-and-Diagnostics-Information) setting?
* Did you [perform a cursory search](https://github.com/issues?q=is%3Aissue+user%3ASparkDevNetwork+-repo%3ASparkDevNetwork%2FSlack) to see if your bug or enhancement is already reported?
### Description
As can be seen here https://github.com/SparkDevNetwork/Rock/blob/develop/RockWeb/Styles/FontAwesome/_rock-upgrade-map-classes.less#L757 the Font Awesome classes `fa-calendar` and `fa-calendar-alt` are swapped. I don't know if this was done on purpose or was a mistake, but it means that what you see (on the fontawesome website) is not what you get (on Rock).
### Steps to Reproduce
1. Go look at Fontawesome.com
2. Click Icons
3. Search for "calendar-alt"
4. Go "oohhh yea, that's the sexy calendar icon I want!"
5. Go to Rock and set the Icon Css Class of something to `fa fa-calendar-alt`
6. Be disappointed.
**Expected behavior:**
Icon classes should match what people get on every other font-awesome enabled website - or if it's intentional, a comment should probably be put in the less file.
**Actual behavior:**
You don't get the icon you want.
### Versions
* **Rock Version:** 9.4
* **Client Culture Setting:** en-US
| 1.0 | FA Icons calendar and calendar-alt are swapped - ### Prerequisites
* [x] Put an X between the brackets on this line if you have done all of the following:
* Can you reproduce the problem on a fresh install or the [demo site](http://rock.rocksolidchurchdemo.com/)?
* Did you include your Rock version number and [client culture](https://github.com/SparkDevNetwork/Rock/wiki/Environment-and-Diagnostics-Information) setting?
* Did you [perform a cursory search](https://github.com/issues?q=is%3Aissue+user%3ASparkDevNetwork+-repo%3ASparkDevNetwork%2FSlack) to see if your bug or enhancement is already reported?
### Description
As can be seen here https://github.com/SparkDevNetwork/Rock/blob/develop/RockWeb/Styles/FontAwesome/_rock-upgrade-map-classes.less#L757 the Font Awesome classes `fa-calendar` and `fa-calendar-alt` are swapped. I don't know if this was done on purpose or was a mistake, but it means that what you see (on the fontawesome website) is not what you get (on Rock).
### Steps to Reproduce
1. Go look at Fontawesome.com
2. Click Icons
3. Search for "calendar-alt"
4. Go "oohhh yea, that's the sexy calendar icon I want!"
5. Go to Rock and set the Icon Css Class of something to `fa fa-calendar-alt`
6. Be disappointed.
**Expected behavior:**
Icon classes should match what people get on every other font-awesome enabled website - or if it's intentional, a comment should probably be put in the less file.
**Actual behavior:**
You don't get the icon you want.
### Versions
* **Rock Version:** 9.4
* **Client Culture Setting:** en-US
| priority | fa icons calendar and calendar alt are swapped prerequisites put an x between the brackets on this line if you have done all of the following can you reproduce the problem on a fresh install or the did you include your rock version number and setting did you to see if your bug or enhancement is already reported description as can be seen here the font awesome classes fa calendar and fa calendar alt are swapped i don t know if this was done on purpose or was a mistake but it means that what you see on fontawesome website is not what you get on rock steps to reproduce go look at fontawesome com click icons search for calendar alt go oohhh yea that s the sexy calendar icon i want go to rock and set the icon css class of something to fa fa calendar alt be disappointed expected behavior icon classes should match what people get on every other font awesome enabled website or if it s intentional a comment should probably be put in the less file actual behavior you don t get the icon you want versions rock version client culture setting en us | 1 |
695,537 | 23,862,350,051 | IssuesEvent | 2022-09-07 08:11:23 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | rollsroyce.wd3.myworkdayjobs.com - site is not usable | browser-firefox priority-normal engine-gecko | <!-- @browser: Firefox 72.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/110355 -->
**URL**: https://rollsroyce.wd3.myworkdayjobs.com/professional
**Browser / Version**: Firefox 72.0
**Operating System**: Windows 7
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
I went from this: https://careers.rolls-royce.com/search-and-apply
to this: https://rollsroyce.wd3.myworkdayjobs.com/professional
and the page is completely blank. If I look at the source however, there seems to be a lot of code.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | rollsroyce.wd3.myworkdayjobs.com - site is not usable - <!-- @browser: Firefox 72.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:72.0) Gecko/20100101 Firefox/72.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/110355 -->
**URL**: https://rollsroyce.wd3.myworkdayjobs.com/professional
**Browser / Version**: Firefox 72.0
**Operating System**: Windows 7
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
I went from this: https://careers.rolls-royce.com/search-and-apply
to this: https://rollsroyce.wd3.myworkdayjobs.com/professional
and the page is completely blank. If I look at the source however, there seems to be a lot of code.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | rollsroyce myworkdayjobs com site is not usable url browser version firefox operating system windows tested another browser no problem type site is not usable description page not loading correctly steps to reproduce i went from this to this and the page is completely blank if i look at the source however there seems to be a lot of code browser configuration none from with ❤️ | 1 |
11,267 | 29,494,615,983 | IssuesEvent | 2023-06-02 15:53:03 | huggingface/datasets-server | https://api.github.com/repos/huggingface/datasets-server | closed | Change nomenclature of processing steps | refactoring / architecture | Until now, every public endpoint (ie /splits) was associated with one job and one type of cache entry. But that’s no more the case:
- the response can be computed using two concurrent methods (streaming vs normal, for example), and the first successful response is returned
- the response can be computed at config level (/split-names) then aggregated at dataset level
This is a good occasion to rethink the names of the steps, job types and cache kinds.
For now, every processing step has only one associated job type and cache kind, so we use the same for these three. But it’s no more related to the public API endpoint.
- First change: remove the initial slash (`/step` → `step`)
- Second change: to make it easier to think about, we can include the “level” at which the response is computed in the name of the step (`step` → `dataset-step`)
With these rules, after renaming and implementing some pending issues (https://github.com/huggingface/datasets-server/issues/735) we would end with the following set of processing steps and public endpoints:
Public endpoint | Dataset-level steps | Config-level steps | Split-level steps
-- | -- | -- | --
| `dataset-config-names` | |
`/splits` | `dataset-split-names` | `config-split-names-from-streaming`, `config-split-names-from-dataset-info` | |
`/first-rows` | | | `split-first-rows-from-streaming`, `split-first-rows-from-parquet` | | |
| | `config-parquet-and-dataset-info` |
`/parquet` | `dataset-parquet` | `config-parquet` |
`/dataset-info` | `dataset-dataset-info` | `config-dataset-info` |
`/sizes` | `dataset-sizes` | `config-sizes` |
<p>Also: every cached response would be accessible through: <code>/admin/cache/[step name]</code>, e.g. <code>/admin/cache/config-parquet-and-dataset-info?dataset=x&config=y</code></p>
| 1.0 | Change nomenclature of processing steps - Until now, every public endpoint (ie /splits) was associated with one job and one type of cache entry. But that’s no more the case:
- the response can be computed using two concurrent methods (streaming vs normal, for example), and the first successful response is returned
- the response can be computed at config level (/split-names) then aggregated at dataset level
It’s the occasion to rethink a bit the name of the steps, job types and cache kinds.
For now, every processing step has only one associated job type and cache kind, so we use the same for these three. But it’s no more related to the public API endpoint.
- First change: remove the initial slash (`/step` → `step`)
- Second change: to make it easier to think about, we can include the “level” at which the response is computed in the name of the step (`step` → `dataset-step`)
With these rules, after renaming and implementing some pending issues (https://github.com/huggingface/datasets-server/issues/735) we would end with the following set of processing steps and public endpoints:
Public endpoint | Dataset-level steps | Config-level steps | Split-level steps
-- | -- | -- | --
| `dataset-config-names` | |
`/splits` | `dataset-split-names` | `config-split-names-from-streaming`, `config-split-names-from-dataset-info` | |
`/first-rows` | | | `split-first-rows-from-streaming`, `split-first-rows-from-parquet` | | |
| | `config-parquet-and-dataset-info` |
`/parquet` | `dataset-parquet` | `config-parquet` |
`/dataset-info` | `dataset-dataset-info` | `config-dataset-info` |
`/sizes` | `dataset-sizes` | `config-sizes` |
<p>Also: every cached response would be accessible through: <code>/admin/cache/[step name]</code>, e.g. <code>/admin/cache/config-parquet-and-dataset-info?dataset=x&config=y</code></p>
| non_priority | change nomenclature of processing steps until now every public endpoint ie splits was associated with one job and one type of cache entry but that’s no more the case the response can be computed using two concurrent methods streaming vs normal for example and the first successful response is returned the response can be computed at config level split names then aggregated at dataset level it’s the occasion to rethink a bit the name of the steps job types and cache kinds for now every processing step has only one associated job type and cache kind so we use the same for these three but it’s no more related to the public api endpoint first change remove the initial slash step → step second change to make it easier to think about we can include the “level” at which the response is computed in the name of the step step → dataset step with these rules after renaming and implementing some pending issues we would end with the following set of processing steps and public endpoints public endpoint dataset level steps config level steps split level steps dataset config names splits dataset split names config split names from streaming config split names from dataset info first rows split first rows from streaming split first rows from parquet config parquet and dataset info parquet dataset parquet config parquet dataset info dataset dataset info config dataset info sizes dataset sizes config sizes also every cached response would be accessible through admin cache e g admin cache config parquet and dataset info dataset x amp config y | 0 |
822,722 | 30,882,647,460 | IssuesEvent | 2023-08-03 18:53:18 | ramp4-pcar4/ramp4-pcar4 | https://api.github.com/repos/ramp4-pcar4/ramp4-pcar4 | closed | Notify User on Tile Schema Mismatch | flavour: feature priority: low | To reproduce
1. Fire up an R4 sample.
2. Make sure basemap is Mercator (usually is; Satellite tile is Mercator)
3. Open Wizard
4. Add [this url](https://maps-cartes.ec.gc.ca/arcgis/rest/services/Overlays/Provinces/MapServer) as a `Tile Layer` type. This tile is in Lambert schema
5. Layer loads and RAMP seems content, but layer does not draw due to schema conflict.
Trying the same on [RAMP2](https://fgpv-vpgf.github.io/fgpv-vpgf/develop/samples/index-fgp-en.html) gives you a ⚠️ icon in the legend. We don't have [mini icons](https://github.com/ramp4-pcar4/ramp4-pcar4/discussions/913) in R4 yet, but could use the Notification API in the meantime.
| 1.0 | Notify User on Tile Schema Mismatch - To reproduce
1. Fire up an R4 sample.
2. Make sure basemap is Mercator (usually is; Satellite tile is Mercator)
3. Open Wizard
4. Add [this url](https://maps-cartes.ec.gc.ca/arcgis/rest/services/Overlays/Provinces/MapServer) as a `Tile Layer` type. This tile is in Lambert schema
5. Layer loads and RAMP seems content, but layer does not draw due to schema conflict.
Trying the same on [RAMP2](https://fgpv-vpgf.github.io/fgpv-vpgf/develop/samples/index-fgp-en.html) gives you a ⚠️ icon in the legend. We don't have [mini icons](https://github.com/ramp4-pcar4/ramp4-pcar4/discussions/913) in R4 yet, but could use the Notification API in the meantime.
| priority | notify user on tile schema mismatch to reproduce fire up an sample make sure basemap is mercator usually is satellite tile is mercator open wizard add as a tile layer type this tile is in lambert schema layer loads and ramp seems content but layer does not draw due to schema conflict trying the same on gives you a ⚠️ icon in the legend we don t have in yet but could use the notification api in the meantime | 1 |
135,565 | 18,714,902,477 | IssuesEvent | 2021-11-03 02:18:11 | ChoeMinji/react | https://api.github.com/repos/ChoeMinji/react | opened | WS-2019-0307 (Medium) detected in mem-1.1.0.tgz | security vulnerability | ## WS-2019-0307 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mem-1.1.0.tgz</b></p></summary>
<p>Memoize functions - An optimization used to speed up consecutive function calls by caching the result of calls with identical input</p>
<p>Library home page: <a href="https://registry.npmjs.org/mem/-/mem-1.1.0.tgz">https://registry.npmjs.org/mem/-/mem-1.1.0.tgz</a></p>
<p>Path to dependency file: react/fixtures/dom/package.json</p>
<p>Path to vulnerable library: react/fixtures/dom/node_modules/mem/package.json,react/fixtures/concurrent/time-slicing/node_modules/mem/package.json,react/fixtures/expiration/node_modules/mem/package.json,react/fixtures/attribute-behavior/node_modules/mem/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.17.tgz (Root Library)
- webpack-3.8.1.tgz
- yargs-8.0.2.tgz
- os-locale-2.1.0.tgz
- :x: **mem-1.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/react/commit/cfdac8a3b655e30ad4724d1e0f6910d3ca3c2b5e">cfdac8a3b655e30ad4724d1e0f6910d3ca3c2b5e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In 'mem' before v4.0.0 there is a Denial of Service (DoS) vulnerability as a result of a failure in removal old values from the cache.
<p>Publish Date: 2018-08-27
<p>URL: <a href=https://github.com/sindresorhus/mem/commit/da4e4398cb27b602de3bd55f746efa9b4a31702b>WS-2019-0307</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1084">https://www.npmjs.com/advisories/1084</a></p>
<p>Release Date: 2019-12-01</p>
<p>Fix Resolution: mem - 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2019-0307 (Medium) detected in mem-1.1.0.tgz - ## WS-2019-0307 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mem-1.1.0.tgz</b></p></summary>
<p>Memoize functions - An optimization used to speed up consecutive function calls by caching the result of calls with identical input</p>
<p>Library home page: <a href="https://registry.npmjs.org/mem/-/mem-1.1.0.tgz">https://registry.npmjs.org/mem/-/mem-1.1.0.tgz</a></p>
<p>Path to dependency file: react/fixtures/dom/package.json</p>
<p>Path to vulnerable library: react/fixtures/dom/node_modules/mem/package.json,react/fixtures/concurrent/time-slicing/node_modules/mem/package.json,react/fixtures/expiration/node_modules/mem/package.json,react/fixtures/attribute-behavior/node_modules/mem/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.17.tgz (Root Library)
- webpack-3.8.1.tgz
- yargs-8.0.2.tgz
- os-locale-2.1.0.tgz
- :x: **mem-1.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/react/commit/cfdac8a3b655e30ad4724d1e0f6910d3ca3c2b5e">cfdac8a3b655e30ad4724d1e0f6910d3ca3c2b5e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In 'mem' before v4.0.0 there is a Denial of Service (DoS) vulnerability as a result of a failure in removal old values from the cache.
<p>Publish Date: 2018-08-27
<p>URL: <a href=https://github.com/sindresorhus/mem/commit/da4e4398cb27b602de3bd55f746efa9b4a31702b>WS-2019-0307</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1084">https://www.npmjs.com/advisories/1084</a></p>
<p>Release Date: 2019-12-01</p>
<p>Fix Resolution: mem - 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | ws medium detected in mem tgz ws medium severity vulnerability vulnerable library mem tgz memoize functions an optimization used to speed up consecutive function calls by caching the result of calls with identical input library home page a href path to dependency file react fixtures dom package json path to vulnerable library react fixtures dom node modules mem package json react fixtures concurrent time slicing node modules mem package json react fixtures expiration node modules mem package json react fixtures attribute behavior node modules mem package json dependency hierarchy react scripts tgz root library webpack tgz yargs tgz os locale tgz x mem tgz vulnerable library found in head commit a href found in base branch main vulnerability details in mem before there is a denial of service dos vulnerability as a result of a failure in removal old values from the cache publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution mem step up your open source security game with whitesource | 0 |
477,583 | 13,764,919,241 | IssuesEvent | 2020-10-07 12:45:31 | grpc/grpc | https://api.github.com/repos/grpc/grpc | closed | Cannot run interop for cpp in 1.32.0 | kind/bug lang/c++ priority/P1 | Looks like the docker build for cxx is broken so not able to find bins/opt/interop_client in the docker image.
$ tools/interop_matrix/run_interop_matrix_tests.py -l cxx --release v1.32.0
PASSED: pull_image_gcr.io/grpc-testing/grpc_interop_cxx:v1.32.0 [time=18.5sec, retries=0:0]
bash: bins/opt/interop_client: No such file or directory
FAILED: cxx__cxx_v1.32.0:cxx:grpc-test:large_unary [ret=127, pid=245082, time=0.8sec]
@markdroth | 1.0 | Cannot run interop for cpp in 1.32.0 - Looks like the docker build for cxx is broken so not able to find bins/opt/interop_client in the docker image.
$ tools/interop_matrix/run_interop_matrix_tests.py -l cxx --release v1.32.0
PASSED: pull_image_gcr.io/grpc-testing/grpc_interop_cxx:v1.32.0 [time=18.5sec, retries=0:0]
bash: bins/opt/interop_client: No such file or directory
FAILED: cxx__cxx_v1.32.0:cxx:grpc-test:large_unary [ret=127, pid=245082, time=0.8sec]
@markdroth | priority | cannot run interop for cpp in looks like the docker build for cxx is broken so not able to find bins opt interop client in the docker image tools interop matrix run interop matrix tests py l cxx release passed pull image gcr io grpc testing grpc interop cxx bash bins opt interop client no such file or directory failed cxx cxx cxx grpc test large unary markdroth | 1 |
407,674 | 27,624,830,947 | IssuesEvent | 2023-03-10 05:25:14 | fidildev/fettle | https://api.github.com/repos/fidildev/fettle | closed | Add conventional commits ADR | documentation | We want to use conventional commits in this repo since multiple people will be working on it.
| 1.0 | Add conventional commits ADR - We want to use conventional commits in this repo since multiple people will be working on it.
| non_priority | add conventional commits adr we want to use conventional commits in this repo since multiple people will be working on it | 0 |
42,472 | 11,061,995,995 | IssuesEvent | 2019-12-11 08:37:30 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | Subtle change of behaviour of ExecutionCallback during shutdown | Module: Invocation System Source: Internal Team: Core Type: Defect | In Hazelcast 3.x, `IMap#submitToKey(K, EntryProcessor, ExecutionCallback` returns immediately. The supplied `ExecutionCallback` is executed after `EntryProcessor` is done, invoking its `onResponse/onFailure` methods respectively. If execution of `ExecutionCallback` fails due to the async executor rejecting execution (eg during Hazelcast shutdown), user is notified by invoking its onFailure method with the `RejectedExecutionException` as argument.
In Hazelcast 4.0, execution callback is submitted via `whenCompleteAsync`, [adapted to a `BiConsumer`](https://github.com/hazelcast/hazelcast/blob/83d814eefb9864187b3a43736d29422a1581f0aa/hazelcast/src/main/java/com/hazelcast/map/impl/proxy/MapProxySupport.java#L1262). If the `BiConsumer` is not executed due to `RejectedExecutionException`, then `ExecutionCallback.onFailure` is not executed -> user is not notified of the failure to execute their callback.
Same issue also applies for other `IExecutorService` methods which accept `ExecutionCallback` as argument. | 1.0 | Subtle change of behaviour of ExecutionCallback during shutdown - In Hazelcast 3.x, `IMap#submitToKey(K, EntryProcessor, ExecutionCallback` returns immediately. The supplied `ExecutionCallback` is executed after `EntryProcessor` is done, invoking its `onResponse/onFailure` methods respectively. If execution of `ExecutionCallback` fails due to the async executor rejecting execution (eg during Hazelcast shutdown), user is notified by invoking its onFailure method with the `RejectedExecutionException` as argument.
In Hazelcast 4.0, execution callback is submitted via `whenCompleteAsync`, [adapted to a `BiConsumer`](https://github.com/hazelcast/hazelcast/blob/83d814eefb9864187b3a43736d29422a1581f0aa/hazelcast/src/main/java/com/hazelcast/map/impl/proxy/MapProxySupport.java#L1262). If the `BiConsumer` is not executed due to `RejectedExecutionException`, then `ExecutionCallback.onFailure` is not executed -> user is not notified of the failure to execute their callback.
Same issue also applies for other `IExecutorService` methods which accept `ExecutionCallback` as argument. | non_priority | subtle change of behaviour of executioncallback during shutdown in hazelcast x imap submittokey k entryprocessor executioncallback returns immediately the supplied executioncallback is executed after entryprocessor is done invoking its onresponse onfailure methods respectively if execution of executioncallback fails due to the async executor rejecting execution eg during hazelcast shutdown user is notified by invoking its onfailure method with the rejectedexecutionexception as argument in hazelcast execution callback is submitted via whencompleteasync if the biconsumer is not executed due to rejectedexecutionexception then executioncallback onfailure is not executed user is not notified of the failure to execute their callback same issue also applies for other iexecutorservice methods which accept executioncallback as argument | 0 |
9,100 | 3,254,873,810 | IssuesEvent | 2015-10-20 03:58:34 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | Help with Email polling | documentation question | I have access to an Exchange server and created a "Zulip Stream" email account with email address "zulip_stream@mydomain.org". So now I want to get the email polling working with Zulip. In /etc/zulip/settings.py I have set EMAIL_GATEWAY_PATTERN = "zulip_stream+%s@mydomain.org".
I then tried to test with the default "engineering" stream and see it's email address is zulip_stream+engineering+0c594a88ffb7af661555904d429e5fca@mydomain.org. I then went and created a secondary email address on the "Zulip Stream" account - I added the email address for the engineering stream. Lastly, as root, I created a new cron job with this as the entry....
```
* * * * * zulip cd /home/zulip/deployments/current && python manage.py email-mirror
```
Has anyone gotten the email polling to work with Zulip and Exchange? If so, please share some tips and let me know if I am totally misunderstanding how to set it up.
TIA,
Chris | 1.0 | Help with Email polling - I have access to an Exchange server and created a "Zulip Stream" email account with email address "zulip_stream@mydomain.org". So now I want to get the email polling working with Zulip. In /etc/zulip/settings.py I have set EMAIL_GATEWAY_PATTERN = "zulip_stream+%s@mydomain.org".
I then tried to test with the default "engineering" stream and see it's email address is zulip_stream+engineering+0c594a88ffb7af661555904d429e5fca@mydomain.org. I then went and created a secondary email address on the "Zulip Stream" account - I added the email address for the engineering stream. Lastly, as root, I created a new cron job with this as the entry....
```
* * * * * zulip cd /home/zulip/deployments/current && python manage.py email-mirror
```
Has anyone gotten the email polling to work with Zulip and Exchange? If so, please share some tips and let me know if I am totally misunderstanding how to set it up.
TIA,
Chris | non_priority | help with email polling i have access to an exchange server and created a zulip stream email account with email address zulip stream mydomain org so now i want to get the email polling working with zulip in etc zulip settings py i have set email gateway pattern zulip stream s mydomain org i then tried to test with the default engineering stream and see it s email address is zulip stream engineering mydomain org i then went and created a secondary email address on the zulip stream account i added the email address for the engineering stream lastly as root i created a new cron job with this as the entry zulip cd home zulip deployments current python manage py email mirror has anyone gotten the email polling to work with zulip and exchange if so please share some tips and let me know if i am totally misunderstanding how to set it up tia chris | 0 |
119,741 | 17,629,043,579 | IssuesEvent | 2021-08-19 04:35:50 | turkdevops/gitea | https://api.github.com/repos/turkdevops/gitea | closed | CVE-2021-35513 (Medium) detected in mermaid-8.10.1.tgz - autoclosed | security vulnerability | ## CVE-2021-35513 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mermaid-8.10.1.tgz</b></p></summary>
<p>Markdownish syntax for generating flowcharts, sequence diagrams, class diagrams, gantt charts and git graphs.</p>
<p>Library home page: <a href="https://registry.npmjs.org/mermaid/-/mermaid-8.10.1.tgz">https://registry.npmjs.org/mermaid/-/mermaid-8.10.1.tgz</a></p>
<p>Path to dependency file: gitea/package.json</p>
<p>Path to vulnerable library: /node_modules/mermaid/package.json</p>
<p>
Dependency Hierarchy:
- :x: **mermaid-8.10.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/gitea/commit/5a07ad71010693de12293f5ff1fadc890259b5e0">5a07ad71010693de12293f5ff1fadc890259b5e0</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Mermaid before 8.11.0 allows XSS when the antiscript feature is used.
<p>Publish Date: 2021-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35513>CVE-2021-35513</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-35513">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-35513</a></p>
<p>Release Date: 2021-06-27</p>
<p>Fix Resolution: 8.11.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-35513 (Medium) detected in mermaid-8.10.1.tgz - autoclosed - ## CVE-2021-35513 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mermaid-8.10.1.tgz</b></p></summary>
<p>Markdownish syntax for generating flowcharts, sequence diagrams, class diagrams, gantt charts and git graphs.</p>
<p>Library home page: <a href="https://registry.npmjs.org/mermaid/-/mermaid-8.10.1.tgz">https://registry.npmjs.org/mermaid/-/mermaid-8.10.1.tgz</a></p>
<p>Path to dependency file: gitea/package.json</p>
<p>Path to vulnerable library: /node_modules/mermaid/package.json</p>
<p>
Dependency Hierarchy:
- :x: **mermaid-8.10.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/gitea/commit/5a07ad71010693de12293f5ff1fadc890259b5e0">5a07ad71010693de12293f5ff1fadc890259b5e0</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Mermaid before 8.11.0 allows XSS when the antiscript feature is used.
<p>Publish Date: 2021-06-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35513>CVE-2021-35513</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-35513">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-35513</a></p>
<p>Release Date: 2021-06-27</p>
<p>Fix Resolution: 8.11.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in mermaid tgz autoclosed cve medium severity vulnerability vulnerable library mermaid tgz markdownish syntax for generating flowcharts sequence diagrams class diagrams gantt charts and git graphs library home page a href path to dependency file gitea package json path to vulnerable library node modules mermaid package json dependency hierarchy x mermaid tgz vulnerable library found in head commit a href found in base branch main vulnerability details mermaid before allows xss when the antiscript feature is used publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
101,753 | 31,551,713,538 | IssuesEvent | 2023-09-02 06:05:54 | microsoft/azure-pipelines-tasks | https://api.github.com/repos/microsoft/azure-pipelines-tasks | closed | Release Pipeline: `This task must run in a build to publish artifacts to Azure Pipelines.` | Area: ABTT stale Task: PublishBuildArtifacts | ## Required Information
Entering this information will route you directly to the right team and expedite traction.
**Question, Bug, or Feature?**
*Type*: `Question`
**Enter Task Name**: `PublishBuildArtifacts`
## Environment
- Server - Azure Release Pipeline
- If using Azure Pipelines, provide the account name, team project name, build definition name/build number:
Account Name: `Denys Ashikhin`
Team Project Name: `UniTwin_InSiteTracker`
- Agent - Hosted or Private:
- If using Hosted agent, provide agent queue name: `Hosted Windows 2019 with VS2019`
## Issue Description
I am following the general steps from the following: https://youtu.be/_sUf0wqJYXo?t=363
To create the artifact in the release pipeline instead of a build pipeline. The goal here is to have a release control that can be quickly used to reference different modules for a specific release.
The following images the current setup with 4 `artifacts` (modules) being archived into the stagingDirectory where I attempt to publish .zips as in artifact.

But get the following error:
`This task must run in a build to publish artifacts to Azure Pipelines.`
Is there a way to make it work in releases? Or is there a way to access the zips after the deployment is completed another way?
### Task logs
[Enable debug logging and please provide the zip file containing all the logs for a speedy resolution]
## Troubleshooting
Checkout how to troubleshoot failures and collect debug logs: https://docs.microsoft.com/en-us/vsts/build-release/actions/troubleshooting
### Error logs
[Insert error from the logs here for a quick overview]
[ReleaseLogs_9.zip](https://github.com/microsoft/azure-pipelines-tasks/files/10853611/ReleaseLogs_9.zip)
| 1.0 | Release Pipeline: `This task must run in a build to publish artifacts to Azure Pipelines.` - ## Required Information
Entering this information will route you directly to the right team and expedite traction.
**Question, Bug, or Feature?**
*Type*: `Question`
**Enter Task Name**: `PublishBuildArtifacts`
## Environment
- Server - Azure Release Pipeline
- If using Azure Pipelines, provide the account name, team project name, build definition name/build number:
Account Name: `Denys Ashikhin`
Team Project Name: `UniTwin_InSiteTracker`
- Agent - Hosted or Private:
- If using Hosted agent, provide agent queue name: `Hosted Windows 2019 with VS2019`
## Issue Description
I am following the general steps from the following: https://youtu.be/_sUf0wqJYXo?t=363
To create the artifact in the release pipeline instead of a build pipeline. The goal here is to have a release control that can be quickly used to reference different modules for a specific release.
The following images the current setup with 4 `artifacts` (modules) being archived into the stagingDirectory where I attempt to publish .zips as in artifact.

But get the following error:
`This task must run in a build to publish artifacts to Azure Pipelines.`
Is there a way to make it work in releases? Or is there a way to access the zips after the deployment is completed another way?
### Task logs
[Enable debug logging and please provide the zip file containing all the logs for a speedy resolution]
## Troubleshooting
Checkout how to troubleshoot failures and collect debug logs: https://docs.microsoft.com/en-us/vsts/build-release/actions/troubleshooting
### Error logs
[Insert error from the logs here for a quick overview]
[ReleaseLogs_9.zip](https://github.com/microsoft/azure-pipelines-tasks/files/10853611/ReleaseLogs_9.zip)
| non_priority | release pipeline this task must run in a build to publish artifacts to azure pipelines required information entering this information will route you directly to the right team and expedite traction question bug or feature type question enter task name publishbuildartifacts environment server azure release pipeline if using azure pipelines provide the account name team project name build definition name build number account name denys ashikhin team project name unitwin insitetracker agent hosted or private if using hosted agent provide agent queue name hosted windows with issue description i am following the general steps from the following to create the artifact in the release pipeline instead of a build pipeline the goal here is to have a release control that can be quickly used to reference different modules for a specific release the following images the current setup with artifacts modules being archived into the stagingdirectory where i attempt to publish zips as in artifact but get the following error this task must run in a build to publish artifacts to azure pipelines is there a way to make it work in releases or is there a way to access the zips after the deployment is completed another way task logs troubleshooting checkout how to troubleshoot failures and collect debug logs error logs | 0 |
115,069 | 9,780,526,250 | IssuesEvent | 2019-06-07 17:09:45 | brave/browser-android-tabs | https://api.github.com/repos/brave/browser-android-tabs | closed | Settings > Site settings issues | QA/Test-plan-specified QA/Yes feature/settings regression | <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description <!-- Provide a brief description of the issue -->
Gaps display in Settings > Site Settings. Also, `Desktop Mode` option is missing.
## Steps to reproduce <!-- Please add a series of steps to reproduce the issue -->
1. Navigate to Settings > Site Settings.
2. Compare available Settings to those from 1.0.95.
3.
## Actual result <!-- Please add screenshots if needed -->
Gaps in list, but only `Desktop Mode` appears to be missing.


## Expected result
No gaps in the list, all options from 1.0.95 should be available in 1.0.97.
This is what Settings > Site Settings looks like in 1.0.95:


## Issue reproduces how often <!-- [Easily reproduced/Intermittent issue/No steps to reproduce] -->
Reproduces in 1.0.96 and 1.0.97.
## Issue happens on <!-- Mention yes or no -->
- Current Play Store version? no, not in 1.0.95
- Beta build?
## Device details
- Install type (ARM, x86): all
- Device (Phone, Tablet, Phablet): all
- Android version: all
## Brave version
1.0.96, 1.0.97
### Website problems only
- Does the issue resolve itself when disabling Brave Shields? n/a
- Is the issue reproducible on the latest version of Chrome? No Gaps on Settings > Site Settings on Chrome 75.0.3770.75. However, Desktop Mode is not an option on either Chrome 74 or 75. Chrome 75 does have options in Settings > Site Settings that Chrome 74 does not have.
### Additional information
cc @brave/legacy_qa
| 1.0 | Settings > Site settings issues - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description <!-- Provide a brief description of the issue -->
Gaps display in Settings > Site Settings. Also, `Desktop Mode` option is missing.
## Steps to reproduce <!-- Please add a series of steps to reproduce the issue -->
1. Navigate to Settings > Site Settings.
2. Compare available Settings to those from 1.0.95.
3.
## Actual result <!-- Please add screenshots if needed -->
Gaps in list, but only `Desktop Mode` appears to be missing.


## Expected result
No gaps in the list, all options from 1.0.95 should be available in 1.0.97.
This is what Settings > Site Settings looks like in 1.0.95:


## Issue reproduces how often <!-- [Easily reproduced/Intermittent issue/No steps to reproduce] -->
Reproduces in 1.0.96 and 1.0.97.
## Issue happens on <!-- Mention yes or no -->
- Current Play Store version? no, not in 1.0.95
- Beta build?
## Device details
- Install type (ARM, x86): all
- Device (Phone, Tablet, Phablet): all
- Android version: all
## Brave version
1.0.96, 1.0.97
### Website problems only
- Does the issue resolve itself when disabling Brave Shields? n/a
- Is the issue reproducible on the latest version of Chrome? No Gaps on Settings > Site Settings on Chrome 75.0.3770.75. However, Desktop Mode is not an option on either Chrome 74 or 75. Chrome 75 does have options in Settings > Site Settings that Chrome 74 does not have.
### Additional information
cc @brave/legacy_qa
| non_priority | settings site settings issues have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description gaps display in settings site settings also desktop mode option is missing steps to reproduce navigate to settings site settings compare available settings to those from actual result gaps in list but only desktop mode appears to be missing expected result no gaps in the list all options from should be available in this is what settings site settings looks like in issue reproduces how often reproduces in and issue happens on current play store version no not in beta build device details install type arm all device phone tablet phablet all android version all brave version website problems only does the issue resolve itself when disabling brave shields n a is the issue reproducible on the latest version of chrome no gaps on settings site settings on chrome however desktop mode is not an option on either chrome or chrome does have options in settings site settings that chrome does not have additional information cc brave legacy qa | 0 |
57,540 | 15,835,081,940 | IssuesEvent | 2021-04-06 17:36:13 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | opened | Low FPW animations on Firefox | defect | **Describe the defect**
Animations have low FPS on Firefox. This happens with all components that have animations
**Reproducer**
1. Open this in Firefox and Chromium or Chrome https://www.primefaces.org/showcase/ui/panel/accordionPanel.xhtml
2. Interact with Accordion elements
3. Growl animation has low FPS https://www.primefaces.org/showcase/ui/message/growl.xhtml
4. https://www.primefaces.org/showcase/ui/multimedia/galleria.xhtml
**Environment:**
- PF Version: 10.0
- JSF + version: not sure
- Affected browsers: Firefox
-
**Expected behavior**
Animations have same FPS as in Chrome or Chromium | 1.0 | Low FPW animations on Firefox - **Describe the defect**
Animations have low FPS on Firefox. This happens with all components that have animations
**Reproducer**
1. Open this in Firefox and Chromium or Chrome https://www.primefaces.org/showcase/ui/panel/accordionPanel.xhtml
2. Interact with Accordion elements
3. Growl animation has low FPS https://www.primefaces.org/showcase/ui/message/growl.xhtml
4. https://www.primefaces.org/showcase/ui/multimedia/galleria.xhtml
**Environment:**
- PF Version: 10.0
- JSF + version: not sure
- Affected browsers: Firefox
-
**Expected behavior**
Animations have same FPS as in Chrome or Chromium | non_priority | low fpw animations on firefox describe the defect animations have low fps on firefox this happens with all components that have animations reproducer open this in firefox and chromium or chrome interact with accordion elements growl animation has low fps environment pf version jsf version not sure affected browsers firefox expected behavior animations have same fps as in chrome or chromium | 0 |
5,646 | 3,258,581,992 | IssuesEvent | 2015-10-20 23:13:44 | catapult-project/catapult | https://api.github.com/repos/catapult-project/catapult | closed | Unify sampling panel back into multi_sample_sub_view.html | Code Health Trace Viewer | originally, sampling panel was separated from multi sample sub view because samplign panel used d8 and thus bloated our code size. But, it has been rewritetn to use tables. So I think we should move all the code from sampling panel out of extras and into the main mssv.html file. | 1.0 | Unify sampling panel back into multi_sample_sub_view.html - originally, sampling panel was separated from multi sample sub view because samplign panel used d8 and thus bloated our code size. But, it has been rewritetn to use tables. So I think we should move all the code from sampling panel out of extras and into the main mssv.html file. | non_priority | unify sampling panel back into multi sample sub view html originally sampling panel was separated from multi sample sub view because samplign panel used and thus bloated our code size but it has been rewritetn to use tables so i think we should move all the code from sampling panel out of extras and into the main mssv html file | 0 |
381,654 | 11,277,832,966 | IssuesEvent | 2020-01-15 04:24:17 | GentenStudios/Phoenix | https://api.github.com/repos/GentenStudios/Phoenix | closed | Implement system to load lua modules | enhancement lua priority-high | ## Story
Our content is added via lua modules. We need to be able to load these modules when the game starts. The modules that need loaded are kept in the save folder so the correct modules load with each save. The folder structure is as follows:
```
phoenix.exe
modules/
mod1
dependencies.txt
init.lua
mod2
dependencies.txt
init.lua
saves/
save1
mods.txt
```
the dependencies.txt file in each mod lists what other mods need to load before this mod loads. The init.lua file is essentially the main function of the mod. The mods.txt inside the save folder is the list of mods that save needs in order to run. | 1.0 | Implement system to load lua modules - ## Story
Our content is added via lua modules. We need to be able to load these modules when the game starts. The modules that need loaded are kept in the save folder so the correct modules load with each save. The folder structure is as follows:
```
phoenix.exe
modules/
mod1
dependencies.txt
init.lua
mod2
dependencies.txt
init.lua
saves/
save1
mods.txt
```
the dependencies.txt file in each mod lists what other mods need to load before this mod loads. The init.lua file is essentially the main function of the mod. The mods.txt inside the save folder is the list of mods that save needs in order to run. | priority | implement system to load lua modules story our content is added via lua modules we need to be able to load these modules when the game starts the modules that need loaded are kept in the save folder so the correct modules load with each save the folder structure is as follows phoenix exe modules dependencies txt init lua dependencies txt init lua saves mods txt the dependencies txt file in each mod lists what other mods need to load before this mod loads the init lua file is essentially the main function of the mod the mods txt inside the save folder is the list of mods that save needs in order to run | 1 |
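The Phoenix row above describes loading mods in dependency order: each mod's dependencies.txt names the mods that must load first, and the save's mods.txt names the mods the save needs. A minimal sketch of that ordering follows; it is not actual Phoenix code (the engine is C++/Lua), and the `load_order` name and the dict-based representation of the dependencies.txt files are assumptions made for illustration.

```python
def load_order(requested, dependencies):
    """Return mod names ordered so every mod's dependencies load first.

    requested    -- mod names listed in the save's mods.txt
    dependencies -- dict mapping a mod name to the list of mods from
                    that mod's dependencies.txt (hypothetical parse)
    """
    order = []          # final load order
    visiting = set()    # mods currently on the recursion stack
    done = set()        # mods already placed in the order

    def visit(mod):
        if mod in done:
            return
        if mod in visiting:
            # dependencies.txt files reference each other in a cycle
            raise ValueError(f"circular dependency involving {mod!r}")
        visiting.add(mod)
        for dep in dependencies.get(mod, []):
            visit(dep)  # place dependencies before the mod itself
        visiting.discard(mod)
        done.add(mod)
        order.append(mod)

    for mod in requested:
        visit(mod)
    return order
```

With the issue's example layout, `load_order(["mod2"], {"mod2": ["mod1"], "mod1": []})` yields `["mod1", "mod2"]`: mod1 runs its init.lua before mod2, as the dependencies.txt contract requires.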