| Unnamed: 0 (int64, 0 to 832k) | id (float64, 2.49B to 32.1B) | type (stringclasses, 1 value) | created_at (stringlengths, 19 to 19) | repo (stringlengths, 7 to 112) | repo_url (stringlengths, 36 to 141) | action (stringclasses, 3 values) | title (stringlengths, 1 to 744) | labels (stringlengths, 4 to 574) | body (stringlengths, 9 to 211k) | index (stringclasses, 10 values) | text_combine (stringlengths, 96 to 211k) | label (stringclasses, 2 values) | text (stringlengths, 96 to 188k) | binary_label (int64, 0 to 1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4,535
| 7,373,325,266
|
IssuesEvent
|
2018-03-13 16:58:00
|
haskell-api-discussions/haskell-api-discussions
|
https://api.github.com/repos/haskell-api-discussions/haskell-api-discussions
|
opened
|
Difficult to run 'foo > /dev/null' and throw an exception on non-zero exit code
|
process
|
With `process`, it's hard to spawn an external process in IO, re-throw its non-zero exit code as a Haskell exception, and also redirect its output. Here's what I came up with:
```haskell
withFile "/dev/null" WriteMode $ \h ->
(_, _, _, ph) <- createProcess_ "" (shell "foo") { std_out = UseHandle h }
code <- waitForProcess ph
when (code /= ExitSuccess) $
throwIO code
```
If I didn't want to redirect output to a handle, this would be a one-liner using:
```haskell
callCommand :: String -> IO ()
```
This makes me think the API is warty, because it has a bunch of convenience functions that sit atop one tediously verbose master run function. Anything slightly out of the ordinary and you're forced to use it.
I wonder if there's a better design for `createProcess` overall. I don't have one off the top of my head; the tedium of accomplishing this simple task with `process` just made me want to complain about it here ;)
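For comparison with the one-liner the author wants, Python's `subprocess.run` bundles output redirection and non-zero-exit checking into a single call; a minimal sketch (the command `foo` is the issue's placeholder):
```python
import subprocess

# Redirect stdout to /dev/null and raise CalledProcessError on a
# non-zero exit code, all in one call.
subprocess.run("foo", shell=True, check=True, stdout=subprocess.DEVNULL)
```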
|
1.0
|
Difficult to run 'foo > /dev/null' and throw an exception on non-zero exit code - With `process`, it's hard to spawn an external process in IO, re-throw its non-zero exit code as a Haskell exception, and also redirect its output. Here's what I came up with:
```haskell
withFile "/dev/null" WriteMode $ \h ->
(_, _, _, ph) <- createProcess_ "" (shell "foo") { std_out = UseHandle h }
code <- waitForProcess ph
when (code /= ExitSuccess) $
throwIO code
```
If I didn't want to redirect output to a handle, this would be a one-liner using:
```haskell
callCommand :: String -> IO ()
```
This makes me think the API is warty, because it has a bunch of convenience functions that sit atop one tediously verbose master run function. Anything slightly out of the ordinary and you're forced to use it.
I wonder if there's a better design for `createProcess` overall. I don't have one off the top of my head; the tedium of accomplishing this simple task with `process` just made me want to complain about it here ;)
|
process
|
difficult to run foo dev null and throw an exception on non zero exit code with process it s hard to spawn an external process in io re throw its non zero exit code as a haskell exception and also redirect its output here s what i came up with haskell withfile dev null writemode h ph createprocess shell foo std out usehandle h code waitforprocess ph when code exitsuccess throwio code if i didn t want to redirect output to a handle this would be a one liner using haskell callcommand string io this makes me think the api is warty because it has a bunch of convenience functions that sit atop one tediously verbose master run function anything slightly out of the ordinary and you re forced to use it i wonder if there s a better design for createprocess overall i don t have one off the top of my head just the tedium of accomplishing this simple task using process made me want to complain about it here
| 1
|
21,945
| 3,768,666,286
|
IssuesEvent
|
2016-03-16 06:40:49
|
hollyjoke/33HU6POQKFJUS6K4M6BXQISV
|
https://api.github.com/repos/hollyjoke/33HU6POQKFJUS6K4M6BXQISV
|
closed
|
FOuPTQaQmLunnfNrw3IxjeYBCCvb4b/3o1OO3bNI/Ec42NECJO3VUtARIZ9gIZDl3JdRQUwcGy1dx/iYH+bRbU01ZOUDjRbqjlk1cKg7seV8h0OSNssiP/uf1k2G9GVwaggLPgn0ikHxRauvTk6fX8jkc2xTvm1CDN0qiAu6G9w=
|
design
|
DCDMcDY05vKF1+fb9v68FsRXICuHuAk6XZTR+RgQr1+m8uF7kLAkIca3o1J2Xot51JlXC9sD/SJ38VtiNl2wqZY962BWwPQEzjhZptT9Jjr949zkktXoLYFAE5kWSS4cKbOy/Eb25KnoM9oYPZ+TGSPQvSYZM2xgWzVDl8VRQnqWM3tBYESI9Ti4EMJIr2ERDiRC8Pt2MuhnE6GJc6Bfcxqm/XRqi3fpUtGb62EX6HNyhHgb872NIPow6wTNnYbByzp5204s/LMGdKrqbNZQjqPwnMFA0dSUmb/DUimhDSUhZqmGy3m1GZYduztYRyxC++m9DVgoHGdQHzi6ucrS58GPbzeY0qHNXkVMLTGcs3ip1gFT2rILYrIUKHAtV8eeqxNFCVGrsTfcWF7GqIs9VTPCYzFMh5d9g3BvWa8K1BL6KGi5Gggn33UWeIuPPVVXN/svxn06AyGxEbPMDLrrsiGwllFIQo3iqAFoFq8u7FrOUHjQ6WCZZoLsOZfbJ11CkBaKdmNiYgGifat822R7orc1N4ugvYaa0GMYtURBVxF7krL28nBFZwV3SPMuMlKqdM4SeX6KPnQa1dI7+HR+SWuD9f/XNPPnWnetAUWWQxcDo0xsgEGUI9Dqwrh8bcXB5pGFKNHQ4klCItW/bn98/gpRT+jH/Njg2YZumAX6UP83yLfL7x7VzJ3/watlNBJbgnFJ7ZdhzDVyn7HQAFx+aiNK7/2PnhPQcozfrP5vrlH8PDOjieeNkDMInfIkcbdUvzZvdPz/Vfk/q9+P2U6QlagdN1up05vv+c4Y4+5cwTcqkvi4+1KDr1gspy6JiJDp6gcJitDEDk34CfJYNgU0ayUwnnsaLpqUvmkN1+giHMG1VxANqZidSgmvik5nJcyexEFKsBE/QUQH2uH1Aw+AT9mtOn73vpYDBut9Cj1eLT99jNtIhpwwbcDl8yAzbPNyPXvyPfNBkmB66Br+t+cbZ2WoHveXuA4qiOBO/IW9GZ0icaEuzACcp1BzS/F7BAummcic1u7Y60u44QRSkJiSOIHlWobIiAQFTpH7QkTh8N5+2tG1rUuMKSM9x6+ZBvKD7qKqbT2OfCbyV2KEtbaorVaWWjz+Wa5yYo46ji/iwf4RR/7pxbjXkDDioEBTLAFbFLb9yVGmSCF4KSvAEMqaSux80stoH68Z099kRv7vpgZ+B8GBUwIymQYTgBHePm/thmW3kx1nocUc2bToROrkneFcJh/5ykJ1unHUr2+6cqNyhoz0GQ1rkjTgrsgUaeEkJMTN4xwlE0+0PTFreTUFPuEegsvbirjFcd8hivrfGjYv1L0DWbP0nciX/zczw0W6hHBX6VFY7WUPpEt1CZfGJLo4pob3jEdtGZjTAjDEwQ5JCuR3xufaxohBAx+AtodRg5Dlh8hk9hrizPbqoH8753SiWmQnOJHOTkji7eWyOXcrSml2Gul9MKsp4wYgH804YWmdaRfwt3xwnDMhGB1BWMih53NK8h0Pp1Kkxphim/fy1kezX7Da17Gs4rS3nEbdNQltQvFqL5iI0p05O4qG5kkqWw8a6JwylrwCrYjcLTb05O8yzJR9jyX1dYlQe5e/Qma3vNR6Y+yOQ0vlOlIxvxdlQaBRnEdXaILQs5JRr5rNFatP1FFrCGsM5cCNMOHPF3Y+rtlmP2t2iTA7dAXFePu67dldKT+NdBjM42asHcnNzJRD71OrkWh34jACX1Pb9tbdqG+MTXI91oqPbJrYUajIuzGSeFuR01tNNqmSsfnfCo9gZBX7cV427zysI2VQIJK1Jv8Im3TYFooYQ+Se5NXOS0MwCVyAW9HtLH0dAx+rE0UJUauxN9xYXsaoiz1VRG6d5gq45pu1+a/zrYD1jwCxVNpAwX3uZdoIKJAPbq4zrOYircynHao6L9cLI4xAuRYyheoDsHRm1oGOZ4HpDNgXxrzYlkuuJFUzAteGDLglCNzzHkJxAMSJlYP7eFY2q4vjbOYAk4SAtxE1FAuC6fRvxWubACGNjPKnii3avHgAUrkZXEUzNl7mEX69MXGfBfRQHB59SNBzGGslh19icKPmthPQ6NBMWoKui/pXx+9VQnDYhvk1Hm7O73N7f+N+6JH9WubYUwiR13jsrQunlU9prwUk1mVdHHBFbU4SNDw6fNoRhP+oetTtTxEfeL+Tq+WKyR5ihWJDe1C0UwSUgzOUfMPoF4btYV6MEYbCDeyGLjDROnL1LioORaygV01QOhC3Uy4zWYM/O7dZhGg3+3LdG17xuzmAnNkmTec1ncmDs4Zvtf9dOXnBdSk9c+DD/EPJ52blruyoKERd6udenba83sZMRVg2Oz5c8FfEXz4LimYaXlguDByKdW5e8y7Ke5oUxltFNePEUL+YgpJMxWGpQyCjUPgYrHqc4SlMRQprEMvmZkG3enM2UY1tdVWYqxNFCVGrsTfcWF7GqIs9Vee9lPZbtDfOOMpkwYxXnt5xnrhlwbSjTS1ZXF+OCnwyLa7ofXZ23KvE8F43zh6Q3GctiHt6FQxMk+dtlCC5KYFqIVKMvvf81kLU8wPFbAEi+4Z5Zv6k5CZRMoB1t8iZFnbcDhhguTnnKrDeH9m94trNzJRD71OrkWh34jACX1PbHqBZJ5WudwPMumiszKQspClYBMbHHfFIj94w4S4oX8p7mhTGW0U148RQv5iCkkzF8Zb/NzOSxcRzjN7PFjd2KayKwKsNcicLqesNG1dr20YIdPvuicuI2+5iNX6pY8T8X1ixR13s5Qf2mpBj0Hz5pTmCfO0s0uiyTy1v2uHjlFW58wwLb6VWqE/YMYQ5VmiN0n++qm70vHVeZVu+L3brZz3E14BICaUl+r/alkCh1IcWDvuk6H6e993LBiOwR3YBqgRKAZGD2GeFHmluCuKQt7O4AVhFssoG8pAlGUKlez+nNXwejgTlHr3m2bcYaRohkiVSGFNRB9bHu2yoOUCHM84oCRaZz/SXBKNODOG08vZUUSJ3Rnd5iqq3lNkuwy5Z13PuI+0mbFeZVPq3oRFLNKam+tdnJyMedDt8usyKtOZwosK9JJRtvm9tTe1o+FIxfvWc0jXeT6cE5ZhrgiuCRRSIWgQ5MlPfIR3otS905WOPRcpBCQwBGL17hkW+bEjO2MVGkTe5XcguwG0VuXxlYsieFFlW/ahDQaxk5Yx9HyWKcPAtk7nsXucvEV82MKVjZtfpHf6H3ozPCGUGlEbyFlicmKFKvYXFJVRmzzvgFXuzmrQYj3AzV4EHWr9JbZETVMl4xRZjmrT1ePgbWhc1aeXPcHpb5xoVhip0nTNFN2PFnR2A4Fmu1uFb4I/uiaCgHM2einE2zOTe3YmL04Isq7wRTXzBsaMmJddPItOdfKuP5KHjquq4pC+kij3ojAVeEYhAuM0H36s4jBzsj/7G8C8z4fNvE2kCHCNICIn+Ghy5QqMyc3jUXShsgD4IUinKoReD6kzeq7ceGMqfYdel+aaTnCCLdQxn433l26GYjtnJhdAlQOG8gUi520Gi5MM8+SWV9S0mkexL8MujtM0ig+QznksoJqcwJ8L
uKxAEUGjcn3Rw3PCyIB70mvZqcRP+gb6lH94Ja30+uAvSYzln8Wze9/6Kj2myYExP8B7VqFx8vvifivu/n5LNm8/VJCcp89eAdhAWydamfDQhnVwFGe+5Q3Sr017ZP9gyVkwMzkcczZ6KcTbM5N7diYvTgiyrcnZOJc+5292/1GWq+6TbyUk39Qie7xmEHWS3MiqTl0zTMh1lIsxybJedM5LbIkm/d/gSV+cwWVObOu+YhM/0H83MlEPvU6uRaHfiMAJfU9voOGO97wkl4sPB2UPoVGoz7pcQ2DnUBmOCEfHwl7VYyDpqpNH3/3QwqBtQr08aktib1yMt+5HfVvNCu8hMhVFUYTaBG1mkaEH5EV2q1Nja1VvB2MpdEOD7FAGe7rSE7TeEcFfpUVjtZQ+kS3UJl8YkZhLdJDIqauqCx4FrUNm2oOeq2PDALGHg4VOlL/klZ2iEcFfpUVjtZQ+kS3UJl8YkQJkT+EYsBm2r9psbAqMQA9Xbxy3X1uDqvMQS6rOvW8erE0UJUauxN9xYXsaoiz1V/EEFHfNXEb5OXfxJYQtuARp9NmR4EH8EIfI9qyyGR6ESCGaS1SMXbocJ2gsWianNKLPySD+iLRrbY1tVmC8xhCvp7qqwk4bY081MXNLZObJ2ZU/2Ii26VZt3Dpp8Z/RJTKxwQY66h/K7RIa3IXYW5OFCrGMJUImsX/7qxQhq+ouBx5hUbZZWvuoqAZjaJaaSoFF6dn/Q+jTpjqZlyZihOnv9Zwim5g9NuRS9f08orUo=
|
1.0
|
FOuPTQaQmLunnfNrw3IxjeYBCCvb4b/3o1OO3bNI/Ec42NECJO3VUtARIZ9gIZDl3JdRQUwcGy1dx/iYH+bRbU01ZOUDjRbqjlk1cKg7seV8h0OSNssiP/uf1k2G9GVwaggLPgn0ikHxRauvTk6fX8jkc2xTvm1CDN0qiAu6G9w= - DCDMcDY05vKF1+fb9v68FsRXICuHuAk6XZTR+RgQr1+m8uF7kLAkIca3o1J2Xot51JlXC9sD/SJ38VtiNl2wqZY962BWwPQEzjhZptT9Jjr949zkktXoLYFAE5kWSS4cKbOy/Eb25KnoM9oYPZ+TGSPQvSYZM2xgWzVDl8VRQnqWM3tBYESI9Ti4EMJIr2ERDiRC8Pt2MuhnE6GJc6Bfcxqm/XRqi3fpUtGb62EX6HNyhHgb872NIPow6wTNnYbByzp5204s/LMGdKrqbNZQjqPwnMFA0dSUmb/DUimhDSUhZqmGy3m1GZYduztYRyxC++m9DVgoHGdQHzi6ucrS58GPbzeY0qHNXkVMLTGcs3ip1gFT2rILYrIUKHAtV8eeqxNFCVGrsTfcWF7GqIs9VTPCYzFMh5d9g3BvWa8K1BL6KGi5Gggn33UWeIuPPVVXN/svxn06AyGxEbPMDLrrsiGwllFIQo3iqAFoFq8u7FrOUHjQ6WCZZoLsOZfbJ11CkBaKdmNiYgGifat822R7orc1N4ugvYaa0GMYtURBVxF7krL28nBFZwV3SPMuMlKqdM4SeX6KPnQa1dI7+HR+SWuD9f/XNPPnWnetAUWWQxcDo0xsgEGUI9Dqwrh8bcXB5pGFKNHQ4klCItW/bn98/gpRT+jH/Njg2YZumAX6UP83yLfL7x7VzJ3/watlNBJbgnFJ7ZdhzDVyn7HQAFx+aiNK7/2PnhPQcozfrP5vrlH8PDOjieeNkDMInfIkcbdUvzZvdPz/Vfk/q9+P2U6QlagdN1up05vv+c4Y4+5cwTcqkvi4+1KDr1gspy6JiJDp6gcJitDEDk34CfJYNgU0ayUwnnsaLpqUvmkN1+giHMG1VxANqZidSgmvik5nJcyexEFKsBE/QUQH2uH1Aw+AT9mtOn73vpYDBut9Cj1eLT99jNtIhpwwbcDl8yAzbPNyPXvyPfNBkmB66Br+t+cbZ2WoHveXuA4qiOBO/IW9GZ0icaEuzACcp1BzS/F7BAummcic1u7Y60u44QRSkJiSOIHlWobIiAQFTpH7QkTh8N5+2tG1rUuMKSM9x6+ZBvKD7qKqbT2OfCbyV2KEtbaorVaWWjz+Wa5yYo46ji/iwf4RR/7pxbjXkDDioEBTLAFbFLb9yVGmSCF4KSvAEMqaSux80stoH68Z099kRv7vpgZ+B8GBUwIymQYTgBHePm/thmW3kx1nocUc2bToROrkneFcJh/5ykJ1unHUr2+6cqNyhoz0GQ1rkjTgrsgUaeEkJMTN4xwlE0+0PTFreTUFPuEegsvbirjFcd8hivrfGjYv1L0DWbP0nciX/zczw0W6hHBX6VFY7WUPpEt1CZfGJLo4pob3jEdtGZjTAjDEwQ5JCuR3xufaxohBAx+AtodRg5Dlh8hk9hrizPbqoH8753SiWmQnOJHOTkji7eWyOXcrSml2Gul9MKsp4wYgH804YWmdaRfwt3xwnDMhGB1BWMih53NK8h0Pp1Kkxphim/fy1kezX7Da17Gs4rS3nEbdNQltQvFqL5iI0p05O4qG5kkqWw8a6JwylrwCrYjcLTb05O8yzJR9jyX1dYlQe5e/Qma3vNR6Y+yOQ0vlOlIxvxdlQaBRnEdXaILQs5JRr5rNFatP1FFrCGsM5cCNMOHPF3Y+rtlmP2t2iTA7dAXFePu67dldKT+NdBjM42asHcnNzJRD71OrkWh34jACX1Pb9tbdqG+MTXI91oqPbJrYUajIuzGSeFuR01tNNqmSsfnfCo9gZBX7cV427zysI2VQIJK1Jv8Im3TYFooYQ+Se5NXOS0MwCVyAW9HtLH0dAx+rE0UJUauxN9xYXsaoiz1VRG6d5gq45pu1+a/zrYD1jwCxVNpAwX3uZdoIKJAPbq4zrOYircynHao6L9cLI4xAuRYyheoDsHRm1oGOZ4HpDNgXxrzYlkuuJFUzAteGDLglCNzzHkJxAMSJlYP7eFY2q4vjbOYAk4SAtxE1FAuC6fRvxWubACGNjPKnii3avHgAUrkZXEUzNl7mEX69MXGfBfRQHB59SNBzGGslh19icKPmthPQ6NBMWoKui/pXx+9VQnDYhvk1Hm7O73N7f+N+6JH9WubYUwiR13jsrQunlU9prwUk1mVdHHBFbU4SNDw6fNoRhP+oetTtTxEfeL+Tq+WKyR5ihWJDe1C0UwSUgzOUfMPoF4btYV6MEYbCDeyGLjDROnL1LioORaygV01QOhC3Uy4zWYM/O7dZhGg3+3LdG17xuzmAnNkmTec1ncmDs4Zvtf9dOXnBdSk9c+DD/EPJ52blruyoKERd6udenba83sZMRVg2Oz5c8FfEXz4LimYaXlguDByKdW5e8y7Ke5oUxltFNePEUL+YgpJMxWGpQyCjUPgYrHqc4SlMRQprEMvmZkG3enM2UY1tdVWYqxNFCVGrsTfcWF7GqIs9Vee9lPZbtDfOOMpkwYxXnt5xnrhlwbSjTS1ZXF+OCnwyLa7ofXZ23KvE8F43zh6Q3GctiHt6FQxMk+dtlCC5KYFqIVKMvvf81kLU8wPFbAEi+4Z5Zv6k5CZRMoB1t8iZFnbcDhhguTnnKrDeH9m94trNzJRD71OrkWh34jACX1PbHqBZJ5WudwPMumiszKQspClYBMbHHfFIj94w4S4oX8p7mhTGW0U148RQv5iCkkzF8Zb/NzOSxcRzjN7PFjd2KayKwKsNcicLqesNG1dr20YIdPvuicuI2+5iNX6pY8T8X1ixR13s5Qf2mpBj0Hz5pTmCfO0s0uiyTy1v2uHjlFW58wwLb6VWqE/YMYQ5VmiN0n++qm70vHVeZVu+L3brZz3E14BICaUl+r/alkCh1IcWDvuk6H6e993LBiOwR3YBqgRKAZGD2GeFHmluCuKQt7O4AVhFssoG8pAlGUKlez+nNXwejgTlHr3m2bcYaRohkiVSGFNRB9bHu2yoOUCHM84oCRaZz/SXBKNODOG08vZUUSJ3Rnd5iqq3lNkuwy5Z13PuI+0mbFeZVPq3oRFLNKam+tdnJyMedDt8usyKtOZwosK9JJRtvm9tTe1o+FIxfvWc0jXeT6cE5ZhrgiuCRRSIWgQ5MlPfIR3otS905WOPRcpBCQwBGL17hkW+bEjO2MVGkTe5XcguwG0VuXxlYsieFFlW/ahDQaxk5Yx9HyWKcPAtk7nsXucvEV82MKVjZtfpHf6H3ozPCGUGlEbyFlicmKFKvYXFJVRmzzvgFXuzmrQYj3AzV4EHWr9JbZETVMl4xRZjmrT1ePgbWhc1aeXPcHpb5xoVhip0nTNFN2PFnR2A4Fmu1uFb4I/uiaCgHM2einE2zOTe3YmL04Isq7wRTXzBsaMmJddPItOdfKuP5KHjquq4
pC+kij3ojAVeEYhAuM0H36s4jBzsj/7G8C8z4fNvE2kCHCNICIn+Ghy5QqMyc3jUXShsgD4IUinKoReD6kzeq7ceGMqfYdel+aaTnCCLdQxn433l26GYjtnJhdAlQOG8gUi520Gi5MM8+SWV9S0mkexL8MujtM0ig+QznksoJqcwJ8LuKxAEUGjcn3Rw3PCyIB70mvZqcRP+gb6lH94Ja30+uAvSYzln8Wze9/6Kj2myYExP8B7VqFx8vvifivu/n5LNm8/VJCcp89eAdhAWydamfDQhnVwFGe+5Q3Sr017ZP9gyVkwMzkcczZ6KcTbM5N7diYvTgiyrcnZOJc+5292/1GWq+6TbyUk39Qie7xmEHWS3MiqTl0zTMh1lIsxybJedM5LbIkm/d/gSV+cwWVObOu+YhM/0H83MlEPvU6uRaHfiMAJfU9voOGO97wkl4sPB2UPoVGoz7pcQ2DnUBmOCEfHwl7VYyDpqpNH3/3QwqBtQr08aktib1yMt+5HfVvNCu8hMhVFUYTaBG1mkaEH5EV2q1Nja1VvB2MpdEOD7FAGe7rSE7TeEcFfpUVjtZQ+kS3UJl8YkZhLdJDIqauqCx4FrUNm2oOeq2PDALGHg4VOlL/klZ2iEcFfpUVjtZQ+kS3UJl8YkQJkT+EYsBm2r9psbAqMQA9Xbxy3X1uDqvMQS6rOvW8erE0UJUauxN9xYXsaoiz1V/EEFHfNXEb5OXfxJYQtuARp9NmR4EH8EIfI9qyyGR6ESCGaS1SMXbocJ2gsWianNKLPySD+iLRrbY1tVmC8xhCvp7qqwk4bY081MXNLZObJ2ZU/2Ii26VZt3Dpp8Z/RJTKxwQY66h/K7RIa3IXYW5OFCrGMJUImsX/7qxQhq+ouBx5hUbZZWvuoqAZjaJaaSoFF6dn/Q+jTpjqZlyZihOnv9Zwim5g9NuRS9f08orUo=
|
non_process
|
iyh hr gprt jh vfk t a pxx n oettttxefel tq dd r d gsv cwwvobou yhm q
| 0
|
21,133
| 28,105,553,594
|
IssuesEvent
|
2023-03-31 00:05:31
|
pfmc-assessments/canary_2023
|
https://api.github.com/repos/pfmc-assessments/canary_2023
|
closed
|
Convert Washington rec landings from numbers of fish to MT
|
Data processing
|
Washington recreational data is provided in numbers. Stock synthesis can handle this; however, having a mix of numbers and weights for fleets when making projections is complicated. Previous assessments have used numbers but converted numbers to weight for the projections (2017 yellowtail rockfish and 2017 lingcod), or iteratively solved for an entry in numbers to obtain the desired ACL in weight (2021 copper rockfish and 2021 quillback rockfish), or converted catch in numbers to catch in weight prior to input into stock synthesis.
As was done for 2021 lingcod ([see their issue #30](https://github.com/pfmc-assessments/lingcod/issues/30)) and in consultation with @tsoutt, we plan to convert numbers to weight by using the average length from recreational bio data of retained fish, then convert average length to average weight using a WL relationship. Whether to use RecFIN's (as provided by Theresa L-W: a = 1.04058E-08, b = 3.084136662) or the survey relationship likely does not matter, as WL relationships are pretty robust. We plan to use the one provided by Theresa.
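A minimal sketch of the planned conversion, assuming the usual weight-length form W = a * L^b with the RecFIN parameters quoted above; the units (length in mm, weight in kg) and the example inputs are assumptions, not values from this issue:
```python
# W = a * L^b with the RecFIN parameters quoted above.
# Assumed units: length in mm, weight in kg; inputs are hypothetical.
a, b = 1.04058e-08, 3.084136662

def weight_kg(length_mm: float) -> float:
    return a * length_mm ** b

mean_length_mm = 450.0  # hypothetical average retained length
n_fish = 120_000        # hypothetical catch in numbers
catch_mt = n_fish * weight_kg(mean_length_mm) / 1000.0  # kg -> metric tons
print(round(catch_mt, 1))
```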
|
1.0
|
Convert Washington rec landings from numbers of fish to MT - Washington recreational data is provided in numbers. Stock synthesis can handle this; however, having a mix of numbers and weights for fleets when making projections is complicated. Previous assessments have used numbers but converted numbers to weight for the projections (2017 yellowtail rockfish and 2017 lingcod), or iteratively solved for an entry in numbers to obtain the desired ACL in weight (2021 copper rockfish and 2021 quillback rockfish), or converted catch in numbers to catch in weight prior to input into stock synthesis.
As was done for 2021 lingcod ([see their issue #30](https://github.com/pfmc-assessments/lingcod/issues/30)) and in consultation with @tsoutt, we plan to convert numbers to weight by using the average length from recreational bio data of retained fish, then convert average length to average weight using a WL relationship. Whether to use RecFIN's (as provided by Theresa L-W: a = 1.04058E-08, b = 3.084136662) or the survey relationship likely does not matter, as WL relationships are pretty robust. We plan to use the one provided by Theresa.
|
process
|
convert washington rec landings from numbers of fish to mt washington recreational data is provided in numbers stock synthesis can handle this however having a mix of numbers and weights for fleets when making projections is complicated previous assessments have used numbers but converted numbers to weight for the projections yellowtail rockfish and lingcod or iteratively solved for entry in numbers to obtain the desired acl in weight copper rockfish and quillback rockfish or converted catch in numbers to catch in weight prior to input into stock synthesis as was done for lingcod and in consultation with tsoutt we plan to convert numbers to weight by using the average length from recreational bio data of retained fish then convert average length to average weight using a wl relationship whether to use recfin s as provided by theresa l w a b or the survey relationship likely does not matter as wl relationships are pretty robust we plan to use the one provided by theresa
| 1
|
691,003
| 23,680,673,451
|
IssuesEvent
|
2022-08-28 18:56:47
|
google/ground-platform
|
https://api.github.com/repos/google/ground-platform
|
closed
|
[Feature list] Show feature list for each layer
|
type: feature request web ux needed priority: p1
|
@gauravchalana1 :
As an initial prototype:
- [x] On layer click in the layer list, replace the layer list with a "feature list" side panel (similar to the feature details panel). To do this you'll need to update the URL via NavigationService. URL format might be `#fl=layerId`.
- [ ] Show a scrolling list of all features in that panel. We don't have labels for features yet - @parulraheja98 to provide code for this soon. (Gentle ping :) For now just show the uuid to test.
- [ ] Clicking on a feature in the list opens the feature details panel for that feature (use `NavigationService`).
For now we can load all the features in a layer at once. In the future we can implement incremental load on scroll.
We don't have mocks or proper designs for this yet; @jacobmclaws is this something you can work on? We basically need to show a list of features for a particular layer. The header might show the layer name and a "Features" heading. In the list we'll show the feature name (generated from the imported ID or label) or a user-defined one. Wdyt?
|
1.0
|
[Feature list] Show feature list for each layer - @gauravchalana1 :
As an initial prototype:
- [x] On layer click in the layer list, replace the layer list with a "feature list" side panel (similar to the feature details panel). To do this you'll need to update the URL via NavigationService. URL format might be `#fl=layerId`.
- [ ] Show a scrolling list of all features in that panel. We don't have labels for features yet - @parulraheja98 to provide code for this soon. (Gentle ping :) For now just show the uuid to test.
- [ ] Clicking on a feature in the list opens the feature details panel for that feature (use `NavigationService`).
For now we can load all the features in a layer at once. In the future we can implement incremental load on scroll.
We don't have mocks or proper designs for this yet; @jacobmclaws is this something you can work on? We basically need to show a list of features for a particular layer. The header might show the layer name and a "Features" heading. In the list we'll show the feature name (generated from the imported ID or label) or a user-defined one. Wdyt?
|
non_process
|
show feature list for each layer as an initial prototype on layer click in the layer list replace layer list with feature list side panel similar to feature details panel to do this you ll need to update the url via navigationservice url format might be fl layerid show scrolling list of all features in that panel we don t labels for features yet to provide code for this soon gentle ping for now just show uuid to test clicking on a feature in the list open the feature details panel for that feature use navigationservice for now we can load all the features in a layer at once in the future we can implement incremental load on scroll we don t have mocks or proper designs for this yet jacobmclaws is this something you can work on we basically need to show a list of features for a particular layer the header might show the layer name and features heading in the list we ll show the feature name generated from imported id or label or user defined wdyt
| 0
|
6,041
| 8,853,413,862
|
IssuesEvent
|
2019-01-08 21:17:42
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Firestore: 'test_watch_query' systest flakes
|
api: firestore flaky testing type: process
|
Similar to #6605. See: https://source.cloud.google.com/results/invocations/57661644-6fd2-4203-a515-4c3bf9edfcd5/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Ffirestore/log
```python
_______________________________ test_watch_query _______________________________
client = <google.cloud.firestore_v1beta1.client.Client object at 0x7f61dc7705f8>
cleanup = <built-in method append of list object at 0x7f61d83f3688>
def test_watch_query(client, cleanup):
    db = client
    doc_ref = db.collection(u"users").document(u"alovelace" + unique_resource_id())
    query_ref = db.collection(u"users").where("first", "==", u"Ada")
    # Initial setting
    doc_ref.set({u"first": u"Jane", u"last": u"Doe", u"born": 1900})
    sleep(1)
    # Setup listener
    def on_snapshot(docs, changes, read_time):
        on_snapshot.called_count += 1
        # A snapshot should return the same thing as if a query ran now.
        query_ran = db.collection(u"users").where("first", "==", u"Ada").get()
        assert len(docs) == len([i for i in query_ran])
    on_snapshot.called_count = 0
    query_ref.on_snapshot(on_snapshot)
    # Alter document
    doc_ref.set({u"first": u"Ada", u"last": u"Lovelace", u"born": 1815})
    for _ in range(10):
        if on_snapshot.called_count == 1:
            return
        sleep(1)
    if on_snapshot.called_count != 1:
        raise AssertionError(
            "Failed to get exactly one document change: count: "
>           + str(on_snapshot.called_count)
        )
E       AssertionError: Failed to get exactly one document change: count: 0

tests/system.py:795: AssertionError
```
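A common way to make this kind of systest less flaky is to wait against a deadline rather than a fixed number of one-second retries; a sketch under that assumption (the helper name is illustrative, not from the test suite):
```python
import time

def wait_for(predicate, timeout_s=60.0, poll_s=1.0):
    """Poll until predicate() is truthy or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_s)
    return False

# e.g. replace the fixed ten-iteration loop with:
# assert wait_for(lambda: on_snapshot.called_count == 1)
```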
|
1.0
|
Firestore: 'test_watch_query' systest flakes - Similar to #6605. See: https://source.cloud.google.com/results/invocations/57661644-6fd2-4203-a515-4c3bf9edfcd5/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Ffirestore/log
```python
_______________________________ test_watch_query _______________________________
client = <google.cloud.firestore_v1beta1.client.Client object at 0x7f61dc7705f8>
cleanup = <built-in method append of list object at 0x7f61d83f3688>
def test_watch_query(client, cleanup):
    db = client
    doc_ref = db.collection(u"users").document(u"alovelace" + unique_resource_id())
    query_ref = db.collection(u"users").where("first", "==", u"Ada")
    # Initial setting
    doc_ref.set({u"first": u"Jane", u"last": u"Doe", u"born": 1900})
    sleep(1)
    # Setup listener
    def on_snapshot(docs, changes, read_time):
        on_snapshot.called_count += 1
        # A snapshot should return the same thing as if a query ran now.
        query_ran = db.collection(u"users").where("first", "==", u"Ada").get()
        assert len(docs) == len([i for i in query_ran])
    on_snapshot.called_count = 0
    query_ref.on_snapshot(on_snapshot)
    # Alter document
    doc_ref.set({u"first": u"Ada", u"last": u"Lovelace", u"born": 1815})
    for _ in range(10):
        if on_snapshot.called_count == 1:
            return
        sleep(1)
    if on_snapshot.called_count != 1:
        raise AssertionError(
            "Failed to get exactly one document change: count: "
>           + str(on_snapshot.called_count)
        )
E       AssertionError: Failed to get exactly one document change: count: 0

tests/system.py:795: AssertionError
```
|
process
|
firestore test watch query systest flakes similar to see python test watch query client cleanup def test watch query client cleanup db client doc ref db collection u users document u alovelace unique resource id query ref db collection u users where first u ada initial setting doc ref set u first u jane u last u doe u born sleep setup listener def on snapshot docs changes read time on snapshot called count a snapshot should return the same thing as if a query ran now query ran db collection u users where first u ada get assert len docs len on snapshot called count query ref on snapshot on snapshot alter document doc ref set u first u ada u last u lovelace u born for in range if on snapshot called count return sleep if on snapshot called count raise assertionerror failed to get exactly one document change count str on snapshot called count e assertionerror failed to get exactly one document change count tests system py assertionerror
| 1
|
64,830
| 18,942,885,868
|
IssuesEvent
|
2021-11-18 06:29:43
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
opened
|
Voice and video calls connectivity issue using mobile data
|
T-Defect
|
### Steps to reproduce
1. We need two smartphones; at least one needs to be a Xiaomi device (a Mi9 was used to reproduce) and at least one needs to use mobile data (the other can be on WiFi);
2. Direction of the call does not matter. Try to establish a voice or video call from one device to the other.
3. After call pick-up, there will be a "Connecting..." message hanging indefinitely

.

### Outcome
#### What did you expect?
The call should go through no matter what kind of connection a device is using.
#### What happened instead?
Connection was not established.
### Your phone model
OnePlus 7T
### Operating system version
Android 11
### Application version and app store
1.3.7, 1.3.8 GPlay, GitHub
### Homeserver
mozilla.org
### Will you send logs?
Yes
|
1.0
|
Voice and video calls connectivity issue using mobile data - ### Steps to reproduce
1. We need two smartphones; at least one needs to be a Xiaomi device (a Mi9 was used to reproduce) and at least one needs to use mobile data (the other can be on WiFi);
2. Direction of the call does not matter. Try to establish a voice or video call from one device to the other.
3. After call pick-up, there will be a "Connecting..." message hanging indefinitely

.

### Outcome
#### What did you expect?
The call should go through no matter what kind of connection a device is using.
#### What happened instead?
Connection was not established.
### Your phone model
OnePlus 7T
### Operating system version
Android 11
### Application version and app store
1.3.7, 1.3.8 GPlay, GitHub
### Homeserver
mozilla.org
### Will you send logs?
Yes
|
non_process
|
voice and video calls connectivity issue using mobile data steps to reproduce we need two smartphones at least one needs to be xiaomi device was used to reproduce and at least one needs to use mobile data other can be on wifi direction of the call does not matter try establish voice or video call from one device to the other after call pick up there will be connecting message hanging indefinitely outcome what did you expect call should went through no matter what kind of connection a device is using what happened instead connection was not established your phone model oneplus operating system version android application version and app store gplay github homeserver mozilla org will you send logs yes
| 0
|
11,704
| 14,545,349,883
|
IssuesEvent
|
2020-12-15 19:31:01
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Total Requests != Valid Requests + Failed Requests
|
log-processing
|
I have started using goaccess recently (v1.4). I don't understand why the Total Requests are not the same as the sum of Valid Requests and Failed Requests. The Total Requests match the count in the log file. Clearly, this is my lack of understanding. Please help.
Thanks,
vs
This is how I am calling goaccess:
```
goaccess jetty/app-base/logs/*.request.log -o /var/www/html/report.html \
  --geoip-database geolitedb/GeoLite2-City_20201006/GeoLite2-City.mmdb \
  --real-time-html \
  --ws-url=localhost
```

|
1.0
|
Total Requests != Valid Requests + Failed Requests - I have started using goaccess recently (v1.4). I don't understand why the Total Requests are not the same as the sum of Valid Requests and Failed Requests. The Total Requests match the count in the log file. Clearly, this is my lack of understanding. Please help.
Thanks,
vs
This is how I am calling goaccess:
```
goaccess jetty/app-base/logs/*.request.log -o /var/www/html/report.html \
  --geoip-database geolitedb/GeoLite2-City_20201006/GeoLite2-City.mmdb \
  --real-time-html \
  --ws-url=localhost
```

|
process
|
total requests valid requests failed requests i have started using goaccess recently i don t understand why the total requests are not the same as sum of valid requests and failed requests the total requests match the count in the log file clearly my lack of undersatnding please help thanks vs this is how i am calling goaccess goaccess jetty app base logs request log o var www html report html geoip database geolitedb city city mmdb real time html ws url localhost
| 1
|
8,092
| 11,270,095,282
|
IssuesEvent
|
2020-01-14 10:13:18
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
Vagrant post-processor literally has "panic" placeholder in Packer 1.5.x in HCL2 mode
|
bug hcl2 post-processor/vagrant
|
#### Overview of the Issue
PR #8423 breaks the Vagrant post-processor with a panic instruction. I haven't seen another issue tracking this, so I thought it worth opening a separate one.
https://github.com/hashicorp/packer/blob/c3c2622204fbc4358f1761d38ba99616bf1eba33/post-processor/vagrant/post-processor.go#L63
#### Reproduction Steps
Run the following buildfile with the .pkr.hcl extension, e.g. `packer build file.pkr.hcl`
### Packer version
1.5.1
### Simplified Packer Buildfile
```
source "virtualbox-iso" "packer-debian-10-amd64" {
iso_urls = [
"https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-10.2.0-amd64-netinst.iso"
]
iso_checksum = "e43fef979352df15056ac512ad96a07b515cb8789bf0bfd86f99ed0404f885f5"
ssh_username = "vagrant"
}
build {
sources = [
"source.virtualbox-iso.packer-debian-10-amd64"
]
post-processor "vagrant" {
output = "ephemeral/builds/vagrant-debian10.box"
}
}
```
### Operating system and Environment details
OS X Catalina
### Log Fragments and crash.log files
```
$ packer build base_debian_vagrant.pkr.hcl
panic: not implemented yet
2020/01/08 15:47:57 packer-post-processor-vagrant plugin:
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: goroutine 71 [running]:
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: github.com/hashicorp/packer/post-processor/vagrant.(*PostProcessor).ConfigSpec(0xc00015e010, 0xc00001a630)
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: /private/tmp/packer-20191222-94130-1yh62h9/post-processor/vagrant/post-processor.go:63 +0x39
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: github.com/hashicorp/packer/packer/rpc.(*commonServer).ConfigSpec(0xc0000aba98, 0x0, 0x0, 0xc0002e6ec0, 0x0, 0x0)
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: /private/tmp/packer-20191222-94130-1yh62h9/packer/rpc/common.go:56 +0x38
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: reflect.Value.call(0xc0005dc5a0, 0xc000010768, 0x13, 0x7c7258a, 0x4, 0xc000093f18, 0x3, 0x3, 0x0, 0x0, ...)
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: /usr/local/Cellar/go/1.13.5/libexec/src/reflect/value.go:460 +0x5f6
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: reflect.Value.Call(0xc0005dc5a0, 0xc000010768, 0x13, 0xc000080f18, 0x3, 0x3, 0x0, 0x0, 0x0)
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: /usr/local/Cellar/go/1.13.5/libexec/src/reflect/value.go:321 +0xb4
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: net/rpc.(*service).call(0xc0000abac0, 0xc0000ee820, 0xc0005b03c0, 0xc0005b03d0, 0xc000136380, 0xc0002e6e20, 0x71f50e0, 0xc0002aa3e0, 0x194, 0x6ef1da0, ...)
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: /usr/local/Cellar/go/1.13.5/libexec/src/net/rpc/server.go:377 +0x16f
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: created by net/rpc.(*Server).ServeCodec
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: /usr/local/Cellar/go/1.13.5/libexec/src/net/rpc/server.go:474 +0x42b
2020/01/08 15:47:57 ConfigSpec failed: unexpected EOF
2020/01/08 15:47:57 waiting for all plugin processes to complete...
2020/01/08 15:47:57 /usr/local/bin/packer: plugin process exited
2020/01/08 15:47:57 /usr/local/bin/packer: plugin process exited
panic: ConfigSpec failed: unexpected EOF [recovered]
panic: ConfigSpec failed: unexpected EOF
```
|
1.0
|
Vagrant post-processor literally has "panic" placeholder in Packer 1.5.x in HCL2 mode - #### Overview of the Issue
PR #8423 breaks the Vagrant post-processor with a panic instruction. I haven't seen another issue tracking this, so I thought it worth opening a separate one.
https://github.com/hashicorp/packer/blob/c3c2622204fbc4358f1761d38ba99616bf1eba33/post-processor/vagrant/post-processor.go#L63
#### Reproduction Steps
Run the following buildfile with the .pkr.hcl extension, e.g. `packer build file.pkr.hcl`
### Packer version
1.5.1
### Simplified Packer Buildfile
```
source "virtualbox-iso" "packer-debian-10-amd64" {
iso_urls = [
"https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-10.2.0-amd64-netinst.iso"
]
iso_checksum = "e43fef979352df15056ac512ad96a07b515cb8789bf0bfd86f99ed0404f885f5"
ssh_username = "vagrant"
}
build {
sources = [
"source.virtualbox-iso.packer-debian-10-amd64"
]
post-processor "vagrant" {
output = "ephemeral/builds/vagrant-debian10.box"
}
}
```
### Operating system and Environment details
OS X Catalina
### Log Fragments and crash.log files
```
$ packer build base_debian_vagrant.pkr.hcl
panic: not implemented yet
2020/01/08 15:47:57 packer-post-processor-vagrant plugin:
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: goroutine 71 [running]:
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: github.com/hashicorp/packer/post-processor/vagrant.(*PostProcessor).ConfigSpec(0xc00015e010, 0xc00001a630)
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: /private/tmp/packer-20191222-94130-1yh62h9/post-processor/vagrant/post-processor.go:63 +0x39
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: github.com/hashicorp/packer/packer/rpc.(*commonServer).ConfigSpec(0xc0000aba98, 0x0, 0x0, 0xc0002e6ec0, 0x0, 0x0)
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: /private/tmp/packer-20191222-94130-1yh62h9/packer/rpc/common.go:56 +0x38
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: reflect.Value.call(0xc0005dc5a0, 0xc000010768, 0x13, 0x7c7258a, 0x4, 0xc000093f18, 0x3, 0x3, 0x0, 0x0, ...)
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: /usr/local/Cellar/go/1.13.5/libexec/src/reflect/value.go:460 +0x5f6
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: reflect.Value.Call(0xc0005dc5a0, 0xc000010768, 0x13, 0xc000080f18, 0x3, 0x3, 0x0, 0x0, 0x0)
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: /usr/local/Cellar/go/1.13.5/libexec/src/reflect/value.go:321 +0xb4
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: net/rpc.(*service).call(0xc0000abac0, 0xc0000ee820, 0xc0005b03c0, 0xc0005b03d0, 0xc000136380, 0xc0002e6e20, 0x71f50e0, 0xc0002aa3e0, 0x194, 0x6ef1da0, ...)
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: /usr/local/Cellar/go/1.13.5/libexec/src/net/rpc/server.go:377 +0x16f
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: created by net/rpc.(*Server).ServeCodec
2020/01/08 15:47:57 packer-post-processor-vagrant plugin: /usr/local/Cellar/go/1.13.5/libexec/src/net/rpc/server.go:474 +0x42b
2020/01/08 15:47:57 ConfigSpec failed: unexpected EOF
2020/01/08 15:47:57 waiting for all plugin processes to complete...
2020/01/08 15:47:57 /usr/local/bin/packer: plugin process exited
2020/01/08 15:47:57 /usr/local/bin/packer: plugin process exited
panic: ConfigSpec failed: unexpected EOF [recovered]
panic: ConfigSpec failed: unexpected EOF
```
|
process
|
vagrant post processor literally has panic placeholder in packer x in mode overview of the issue pr breaks vagrant post processor with panic instruction haven t seen other issue tracking this so i thought it s worth to open a separate one reproduction steps run following buildfile with pkr hcl extension e g packer build file pkr hcl packer version simplified packer buildfile source virtualbox iso packer debian iso urls iso checksum ssh username vagrant build sources source virtualbox iso packer debian post processor vagrant output ephemeral builds vagrant box operating system and environment details os x catalina log fragments and crash log files packer build base debian vagrant pkr hcl panic not implemented yet packer post processor vagrant plugin packer post processor vagrant plugin goroutine packer post processor vagrant plugin github com hashicorp packer post processor vagrant postprocessor configspec packer post processor vagrant plugin private tmp packer post processor vagrant post processor go packer post processor vagrant plugin github com hashicorp packer packer rpc commonserver configspec packer post processor vagrant plugin private tmp packer packer rpc common go packer post processor vagrant plugin reflect value call packer post processor vagrant plugin usr local cellar go libexec src reflect value go packer post processor vagrant plugin reflect value call packer post processor vagrant plugin usr local cellar go libexec src reflect value go packer post processor vagrant plugin net rpc service call packer post processor vagrant plugin usr local cellar go libexec src net rpc server go packer post processor vagrant plugin created by net rpc server servecodec packer post processor vagrant plugin usr local cellar go libexec src net rpc server go configspec failed unexpected eof waiting for all plugin processes to complete usr local bin packer plugin process exited usr local bin packer plugin process exited panic configspec failed unexpected eof panic configspec failed unexpected eof
| 1
|
6,705
| 9,814,928,825
|
IssuesEvent
|
2019-06-13 11:23:49
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Processing: merge vector layers produces duplicated fid
|
Bug Processing
|
Author Name: **Jérôme Guélat** (Jérôme Guélat)
Original Redmine Issue: [19758](https://issues.qgis.org/issues/19758)
Affected QGIS version: 3.2.2
Redmine category:processing/core
---
Here's how to reproduce the bug:
1. Add the 3 layers from the attached GeoPackage (pol.gpkg)
2. Open the merge vector layers tool in Processing, choose the 3 layers and save to a temporary layer
3. The resulting layer has features with the same fid, and hence can't be saved to a new GeoPackage without manually editing the fid values
4. Alternatively if you save directly to GeoPackage (instead of using a temporary layer), the tool produces a wrong output
A similar bug happens with other tools (see #27533). This fid problem makes Processing almost unusable with GeoPackages.
---
- [pol.gpkg](https://issues.qgis.org/attachments/download/13236/pol.gpkg) (Jérôme Guélat)
---
Related issue(s): #27533 (relates), #27820 (relates)
Redmine related issue(s): [19708](https://issues.qgis.org/issues/19708), [19998](https://issues.qgis.org/issues/19998)
---
|
1.0
|
Processing: merge vector layers produces duplicated fid - Author Name: **Jérôme Guélat** (Jérôme Guélat)
Original Redmine Issue: [19758](https://issues.qgis.org/issues/19758)
Affected QGIS version: 3.2.2
Redmine category:processing/core
---
Here's how to reproduce the bug:
1. Add the 3 layers from the attached GeoPackage (pol.gpkg)
2. Open the merge vector layers tool in Processing, choose the 3 layers and save to a temporary layer
3. The resulting layer has features with the same fid, and hence can't be saved to a new GeoPackage without manually editing the fid values
4. Alternatively if you save directly to GeoPackage (instead of using a temporary layer), the tool produces a wrong output
A similar bug happens with other tools (see #27533). This fid problem makes Processing almost unusable with GeoPackages.
---
- [pol.gpkg](https://issues.qgis.org/attachments/download/13236/pol.gpkg) (Jérôme Guélat)
---
Related issue(s): #27533 (relates), #27820 (relates)
Redmine related issue(s): [19708](https://issues.qgis.org/issues/19708), [19998](https://issues.qgis.org/issues/19998)
---
|
process
|
processing merge vector layers produces duplicated fid author name jérôme guélat jérôme guélat original redmine issue affected qgis version redmine category processing core here s how to reproduce the bug add the layers from the attached geopackage pol gpkg open the merge vector layers tool in processing choose the layers and save to a temporary layer the resulting layer has features with the same fid and hence can t be saved to a new geopackage without manually editing the fid values alternatively if you save directly to geopackage instead of using a temporary layer the tool produces a wrong output a similar bug happens with other tools see this fid problem makes processing almost unusable with geopackages jérôme guélat related issue s relates relates redmine related issue s
| 1
|
215,332
| 24,164,855,436
|
IssuesEvent
|
2022-09-22 14:20:10
|
SmartBear/ready-aws-plugin
|
https://api.github.com/repos/SmartBear/ready-aws-plugin
|
closed
|
CVE-2020-36180 (High) detected in jackson-databind-2.3.0.jar - autoclosed
|
security vulnerability
|
## CVE-2020-36180 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.3.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.3.0/jackson-databind-2.3.0.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-1.3.0.jar (Root Library)
- keen-client-api-java-2.0.2.jar
- :x: **jackson-databind-2.3.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.dbcp2.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36180>CVE-2020-36180</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-01-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
|
True
|
CVE-2020-36180 (High) detected in jackson-databind-2.3.0.jar - autoclosed - ## CVE-2020-36180 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.3.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.3.0/jackson-databind-2.3.0.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-1.3.0.jar (Root Library)
- keen-client-api-java-2.0.2.jar
- :x: **jackson-databind-2.3.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.dbcp2.cpdsadapter.DriverAdapterCPDS.
<p>Publish Date: 2021-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36180>CVE-2020-36180</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-01-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
|
non_process
|
cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy ready api soapui pro jar root library keen client api java jar x jackson databind jar vulnerable library found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons cpdsadapter driveradaptercpds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind
| 0
|
444,461
| 12,813,108,023
|
IssuesEvent
|
2020-07-04 10:53:27
|
debops/debops
|
https://api.github.com/repos/debops/debops
|
closed
|
apt__conf: Unable to generate additional configuration file
|
meta priority: medium
|
I am just new to ansible and debops, so if I am doing something terribly wrong please forgive and advise;)
I have already an apt-cacher running in the local network, that also was used for the initial setup of the host, thus resulting in an entry in /etc/apt/apt.conf. This file is deleted by debops to avoid side-effects. Now I wanted to re-introduce the proxy configuration with
apt.yml
```
---
apt__conf:
  - name: apt-cacher
    priority: '02'
    content: |
      Acquire::HTTP {
        Proxy "http://apt-cacher.{{ ansible-domain }}:9999";
      };
```
which resulted in nothing being done (the task was skipped) after running
$:~/projects/debops/debops/first_project$ debops service/apt -l majestix
To be sure the task was executed, I removed the when clause from the task for debugging purposes and started with the debugger activated:
TASK [apt : Generate additionnal APT configuration files] ***********************************************************************************************************************************************************************************
fatal: [majestix]: FAILED! =>
msg: '''ansible'' is undefined'
[majestix] TASK: apt : Generate additionnal APT configuration files (debug)> p
***SyntaxError:SyntaxError('unexpected EOF while parsing', ('<string>', 0, 0, ''))
I tried with different entries, also using 'src': /path/to/some/file, but with no success.
What to do next to debug the issue?
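A likely cause, though not confirmed in this thread: Jinja2 parses `ansible-domain` as the subtraction `ansible - domain`, so the undefined name `ansible` raises exactly the error shown; the fact is normally spelled `ansible_domain`. A minimal reproduction sketch:
```python
import jinja2

# The hyphen is read as subtraction, so this is "ansible - domain";
# the undefined name "ansible" raises UndefinedError.
try:
    jinja2.Template("{{ ansible-domain }}").render()
except jinja2.exceptions.UndefinedError as exc:
    print(exc)  # 'ansible' is undefined

# Spelled with an underscore it resolves as a single variable:
print(jinja2.Template("{{ ansible_domain }}").render(ansible_domain="example.org"))
```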
|
1.0
|
apt__conf: Unable to generate additional configuration file - I am just new to ansible and debops, so if I am doing something terribly wrong please forgive and advise;)
I have already an apt-cacher running in the local network, that also was used for the initial setup of the host, thus resulting in an entry in /etc/apt/apt.conf. This file is deleted by debops to avoid side-effects. Now I wanted to re-introduce the proxy configuration with
apt.yml
```
---
apt__conf:
  - name: apt-cacher
    priority: '02'
    content: |
      Acquire::HTTP {
        Proxy "http://apt-cacher.{{ ansible-domain }}:9999";
      };
```
which resulted in nothing being done (the task was skipped) after running
$:~/projects/debops/debops/first_project$ debops service/apt -l majestix
To be sure the task was executed, I removed the when clause from the task for debugging purposes and started with the debugger activated:
TASK [apt : Generate additionnal APT configuration files] ***********************************************************************************************************************************************************************************
fatal: [majestix]: FAILED! =>
msg: '''ansible'' is undefined'
[majestix] TASK: apt : Generate additionnal APT configuration files (debug)> p
***SyntaxError:SyntaxError('unexpected EOF while parsing', ('<string>', 0, 0, ''))
I tried with different entries, also using 'src': /path/to/some/file, but with no success.
What to do next to debug the issue?
|
non_process
|
apt conf unable to generate additional configuration file i am just new to ansible and debops so if i am doing something terribly wrong please forgive and advise i have already an apt cacher running in the local network that also was used for the initial setup of the host thus resulting in an entry in etc apt apt conf this file is deleted by debops to avoid side effects now i wanted to re introduce the proxy configuration with apt yml apt conf name apt cacher priority content acquire http proxy ansible domain which resulted in doing nothing task being skipped after running projects debops debops first project debops service apt l majestix to be safe that the task is executed i removed for debugging purposes the when clause from the task and started with the debugger activated task fatal failed msg ansible is undefined task apt generate additionnal apt configuration files debug p syntaxerror syntaxerror unexpected eof while parsing i tried with different entries also using src path to some file but with no success what to do next to debug the issue
| 0
|
661,447
| 22,054,887,636
|
IssuesEvent
|
2022-05-30 12:00:48
|
owncloud/web
|
https://api.github.com/repos/owncloud/web
|
reopened
|
Upload of folder with same name shows error
|
Type:Bug Priority:p2-high
|
# Steps to reproduce
1. Login to https://ocis.ocis-wopi.released.owncloud.works/
2. Create a new folder "Folder" on the ocis instance and on your local drive.
3. upload your local "Folder" folder
4. upload does not start, shows error "Folder "Folder" already exists."

# Expected behaviour
System should offer me alternative options to upload the folder.
The appropriate dialogues are consistently shown in case of conflicting resources, e.g. in the context of:
- Drag'n drop upload
- upload via upload-button
- move-dialog
- copy-dialog
- trashbin-restore
- etc...
## General rule
**User gets asked what to do in case of conflicting resources**, as there is no common pattern across different services/platforms (OneDrive, Google Drive, Box, Windows, Mac, etc. - all different).
**Note** This dialog concept is also meant to be applied to https://github.com/owncloud/web/issues/1753
## Dialogs

## Behaviour
1. **Replace** will replace the existing file (and creates a new version implicitly).
2. **Keep both** will create a new resource with an appended number "(xx)", counting up if e.g. "(2)" already exists (see the sketch after this list).
3. **Cancel** will skip the current resource, if applicable shows the next conflict.
4. **Merge** will combine folders with the same name; if they contain conflicting files, user gets asked what to do (dialog for files shows up).
5. **Do this for all XX conflicts** will apply the selected option to all XX upcoming conflicts. Is only shown, if there are at least 2 conflicts.
6. **generally applicable:** The
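A minimal sketch of the "Keep both" renaming rule from item 2 above (function name and types are illustrative, not from the codebase):
```python
def dedup_name(name: str, existing: set[str]) -> str:
    """Append ' (n)' to name, counting up until the result is unused."""
    if name not in existing:
        return name
    n = 2
    while f"{name} ({n})" in existing:
        n += 1
    return f"{name} ({n})"

# dedup_name("Folder", {"Folder"})               -> "Folder (2)"
# dedup_name("Folder", {"Folder", "Folder (2)"}) -> "Folder (3)"
```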
## Userflows

____________
**solves:**
https://github.com/owncloud/web/issues/3751
https://github.com/owncloud/web/issues/5761
https://github.com/owncloud/web/issues/5106
https://github.com/owncloud/web/issues/6546
**groundwork for**
https://github.com/owncloud/web/issues/1753
|
1.0
|
Upload of folder with same name shows error - # Steps to reproduce
1. Login to https://ocis.ocis-wopi.released.owncloud.works/
2. Create a new folder "Folder" on the ocis instance and on your local drive.
3. upload your local "Folder" folder
4. upload does not start, shows error "Folder "Folder" already exists."

# Expected behaviour
System should offer me alternative options to upload the folder.
The appropriate dialogues are consistently shown in case of conflicting resources, e.g. in the context of:
- Drag'n drop upload
- upload via upload-button
- move-dialog
- copy-dialog
- trashbin-restore
- etc...
## General rule
**User gets asked what to do in case of conflicting resources**, as there is no common pattern across different services/platforms (OneDrive, Google Drive, Box, Windows, Mac, etc. - all different).
**Note** This dialog concept is also meant to be applied to https://github.com/owncloud/web/issues/1753
## Dialogs

## Behaviour
1. **Replace** will replace the existing file (and creates a new version implicitly).
2. **Keep both** will create a new resource with an appended number "(xx)", counting up if e.g. "(2)" already exists.
3. **Cancel** will skip the current resource, if applicable shows the next conflict.
4. **Merge** will combine folders with the same name; if they contain conflicting files, user gets asked what to do (dialog for files shows up).
5. **Do this for all XX conflicts** will apply the selected option to all XX upcoming conflicts. Is only shown, if there are at least 2 conflicts.
6. **generally applicable:** The
## Userflows

____________
**solves:**
https://github.com/owncloud/web/issues/3751
https://github.com/owncloud/web/issues/5761
https://github.com/owncloud/web/issues/5106
https://github.com/owncloud/web/issues/6546
**groundwork for**
https://github.com/owncloud/web/issues/1753
|
non_process
|
upload of folder with same name shows error steps to reproduce login to create a new folder folder on the ocis instance and on your local drive upload your local folder folder upload does not start shows error folder folder already exists expected behaviour system should offer me alternative options to upload the folder the appropriate dialogues are consistently shown in case of conflicting resources eg in context of drag n drop upload upload via upload button move dialog copy dialog trashbin restore etc general rule user gets asked what to do in case of conflicting resources as there is no common pattern across different services platforms onedrive google drive box windows mac etc all different note this dialog concept is also meant to be applied to dialogs behaviour replace will replace the existing file and creates a new version implicitly keep both will create a new resource with an appended number xx counting up if ex already exists cancel will skip the current resource if applicable shows the next conflict merge will combine folders with the same name if they contain conflicting files user gets asked what to do dialog for files shows up do this for all xx conflicts will apply the selected option to all xx upcoming conflicts is only shown if there are at least conflicts generally applicable the userflows solves groundwork for
| 0
|
19,907
| 26,361,831,587
|
IssuesEvent
|
2023-01-11 13:57:18
|
scverse/spatialdata
|
https://api.github.com/repos/scverse/spatialdata
|
closed
|
mypy installation fails
|
bug CI dev process
|
Mypy checks pass offline, but the GitHub action fails because the env can't be found.
I'll still merge two PRs that have been approved to main, the red mark being due to this issue.
<img width="794" alt="Screenshot 2023-01-04 at 17 15 17" src="https://user-images.githubusercontent.com/2664412/210600204-f8a34648-3d08-4740-a323-55a29c582afc.png">
|
1.0
|
mypy installation fails - Mypy checks pass offline, but the GitHub action fails because the env can't be found.
I'll still merge two PRs that have been approved to main, the red mark being due to this issue.
<img width="794" alt="Screenshot 2023-01-04 at 17 15 17" src="https://user-images.githubusercontent.com/2664412/210600204-f8a34648-3d08-4740-a323-55a29c582afc.png">
|
process
|
mypy installation fails mypy checks pass offline but the github action fails because the env can t be found i ll still merge two prs that have been approved to main the red mark being due to this issue img width alt screenshot at src
| 1
|
117,473
| 11,947,613,193
|
IssuesEvent
|
2020-04-03 10:14:38
|
petermr/openVirus
|
https://api.github.com/repos/petermr/openVirus
|
closed
|
Add a LICENSE file
|
documentation
|
perhaps separately for
- software
- data
- text
- images and other media derived from the above
|
1.0
|
Add a LICENSE file - perhaps separately for
- software
- data
- text
- images and other media derived from the above
|
non_process
|
add a license file perhaps separately for software data text images and other media derived from the above
| 0
|
14,134
| 17,025,077,645
|
IssuesEvent
|
2021-07-03 10:16:39
|
Arch666Angel/mods
|
https://api.github.com/repos/Arch666Angel/mods
|
closed
|
The struggle with Red/Green wires in Angels+Bobs
|
Angels Bio Processing Impact: Enhancement
|
There is a lot of information here, and I'll try to make it coherent. Please ask me to clarify or expand if something isn't clear.
With Angels, wood is difficult to reproduce in quantity, which I get. The path to rubber and cellulose fiber...
The primary hurdle is the need for rare trees to make the seed generators. These are limited, thus limit the rate at which you can make more wood. Ultimately you can duplicate the rare trees too, but that needs a path down animal and vegetable farming to generate quantities of APLS.
In the combination of Angels Bobs (without bobs green houses) this hurts. The lower tech pain points that do not have alternate paths to scale are primarily red/green circuit wires, to a lesser extent splitters, and undergrounds.
The splitters and undergrounds of the basic (grey) belts require wood directly. By default, next tier requires previous tier, so all splitters and undergrounds require wood. Red/Green wires require rubber (at this tech level via resin via wood).
In pure Angels (industry only, or industry with components) Red/Green wires do not require rubber. Undergrounds and splitters do not require wood either.
In pure Bobs, you can use Bobs greenhouses, or from heavy oil just a bit later. The cost of wood -> resin -> rubber is a lot lower compared to what it is in angels too. In pure bobs, 1K hand chopped wood will get you 1000 undergrounds, or 250 splitters, or 2000 red/green wires (1 wood -> 1 resin -> 1 rubber -> 2 insulated wires -> 2 red/green wires == .5 wood -> 1 red/green wire). In Angels, none of those things depend on wood.
*** This is the first pain point -- cost of wires is crazy high ***
In Angels + Bobs, that same 1K hand chopped wood will get you 1000 undergrounds, 250 splitters, or 66 red/green wires (30 wood -> 3 resin -> 1 rubber -> 2 insulated wires -> 2 red/green == 15 wood -> 1 red/green wire)
Thus, while not directly dependent on higher-level tech, red/green wires at any scalable level need large-scale wood reproduction, which does...
Since science colors are different in bobs I'll use S1 and S2 here.
S1 - Angels Red - Bobs Yellow - Automation
S2 - Angels Green - Bobs Red - Transport
In pure bobs, creating bobs greenhouses only requires S1. There are a total of 7 S1 techs needed and no S2 tech.
Pure Angels doesn't need rubber, but grey boards which can come from automated paper. In industry only you need 3 S1 and 1 S2 tech.
Angels components needs 6 S1 techs and 1 S2 tech.
*** This is the pain on top of the cost of wires pain above ***
Angels Bobs requires 9 S1 techs, and 1 S2 tech to make the circuits. Another 7 S1 techs to make wood -- assuming you have collected special trees to make the seed generators. Then another 5 S1 and 4 S2 and an APLS research to make special trees to make seed generators -- at which point you've unlocked bio-resin and liquid resin kinda making the exercise moot. And you still need to make the APLS as an ingredient for the special trees.
Until you can create wood, every underground and splitter will come at the cost of chopping down trees... assuming your map isn't an arid desert.
I see / suggest a few paths to help this out...
Have the tree seed recipe use the seed-extractor, not the (special tree)-seed-generator. This keeps the tech level in line with the other iterations, keeps the wood cost high for rubber, and still more complicated than bob greenhouses. The seeds can only be used to create generic trees that get cut into wood.
Alter the insulated wire recipe. 30 tinned, 1 rubber -> 30 insulated wire in 15 seconds? Still twice as expensive (1:1 vs 1:2 - wood:wire) as pure bobs wire.
Add a recipe for insulated wire that uses cellulose fiber instead of rubber a la "Copper Cable Harness" in Angels component mode.
Or a combination of the above...
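As a sanity check on the two conversion chains quoted above, a tiny Python calculation (the numbers are exactly those given in this report):
```python
# Wood consumed per red/green wire for a full conversion chain.
def wood_per_wire(wood: int, wires: int) -> float:
    return wood / wires

# Pure Bobs:   1 wood  -> 1 resin -> 1 rubber -> 2 insulated -> 2 red/green wires
# Angels+Bobs: 30 wood -> 3 resin -> 1 rubber -> 2 insulated -> 2 red/green wires
pure_bobs = wood_per_wire(wood=1, wires=2)
angels_bobs = wood_per_wire(wood=30, wires=2)
print(pure_bobs, angels_bobs, angels_bobs / pure_bobs)  # 0.5 15.0 30.0
```
So Angels+Bobs wires cost 30x the wood of pure Bobs wires, which is the gap the suggestions above try to close.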
|
1.0
|
The struggle with Red/Green wires in Angels+Bobs - There is a lot of information here, and I'll try to make it coherent. Please ask me to clarify or expand if something isn't clear.
With Angels, wood is difficult to reproduce in quantity, which I get. The path to rubber and cellulose fiber...
The primary hurdle is the need for rare trees to make the seed generators. These are limited, thus limit the rate at which you can make more wood. Ultimately you can duplicate the rare trees too, but that needs a path down animal and vegetable farming to generate quantities of APLS.
In the combination of Angels Bobs (without bobs green houses) this hurts. The lower tech pain points that do not have alternate paths to scale are primarily red/green circuit wires, to a lesser extent splitters, and undergrounds.
The splitters and undergrounds of the basic (grey) belts require wood directly. By default, next tier requires previous tier, so all splitters and undergrounds require wood. Red/Green wires require rubber (at this tech level via resin via wood).
In pure Angels (industry only, or industry with components) Red/Green wires do not require rubber. Undergrounds and splitters do not require wood either.
In pure Bobs, you can use Bobs greenhouses, or from heavy oil just a bit later. The cost of wood -> resin -> rubber is a lot lower compared to what it is in angels too. In pure bobs, 1K hand chopped wood will get you 1000 undergrounds, or 250 splitters, or 2000 red/green wires (1 wood -> 1 resin -> 1 rubber -> 2 insulated wires -> 2 red/green wires == .5 wood -> 1 red/green wire). In Angels, none of those things depend on wood.
*** This is the first pain point -- cost of wires is crazy high ***
In Angels + Bobs, that same 1K hand chopped wood will get you 1000 undergrounds, 250 splitters, or 66 red/green wires (30 wood -> 3 resin -> 1 rubber -> 2 insulated wires -> 2 red/green == 15 wood -> 1 red/green wire)
Thus, while not directly dependent on higher-level tech, red/green wires at any scalable level need large-scale wood reproduction, which does...
Since science colors are different in bobs I'll use S1 and S2 here.
S1 - Angels Red - Bobs Yellow - Automation
S2 - Angels Green - Bobs Red - Transport
In pure bobs, creating bobs greenhouses only requires S1. There are a total of 7 S1 techs needed and no S2 tech.
Pure Angels doesn't need rubber, but grey boards which can come from automated paper. In industry only you need 3 S1 and 1 S2 tech.
Angels components needs 6 S1 techs and 1 S2 tech.
*** This is the pain on top of the cost of wires pain above ***
Angels Bobs requires 9 S1 techs, and 1 S2 tech to make the circuits. Another 7 S1 techs to make wood -- assuming you have collected special trees to make the seed generators. Then another 5 S1 and 4 S2 and an APLS research to make special trees to make seed generators -- at which point you've unlocked bio-resin and liquid resin kinda making the exercise moot. And you still need to make the APLS as an ingredient for the special trees.
Until you can create wood, every underground and splitter will come at the cost of chopping down trees... assuming your map isn't an arid desert.
I see / suggest a few paths to help this out...
Have the tree seed recipe use the seed-extractor, not the (special tree)-seed-generator. This keeps the tech level in line with the other iterations, keeps the wood cost high for rubber, and still more complicated than bob greenhouses. The seeds can only be used to create generic trees that get cut into wood.
Alter the insulated wire recipe. 30 tinned, 1 rubber -> 30 insulated wire in 15 seconds? Still twice as expensive (1:1 vs 1:2 - wood:wire) as pure bobs wire.
Add a recipe for insulated wire that uses cellulose fiber instead of rubber a la "Copper Cable Harness" in Angels component mode.
Or a combination of the above...
|
process
|
the struggle with red green wires in angels bobs there is a lot of information here and i ll try to make it coherent please ask me to clear up or expand is something isn t clear with angles wood is difficult to reproduce in quantity which i get the path to rubber and cellulose fiber the primary hurdle is the need for rare trees to make the seed generators these are limited thus limit the rate at which you can make more wood ultimately you can duplicate the rare trees too but that needs a path down animal and vegetable farming to generate quantities of apls in the combination of angels bobs without bobs green houses this hurts the lower tech pain points that do not have alternate paths to scale are primarily red green circuit wires to a lesser extent splitters and undergrounds the splitters and undergrounds of the basic grey belts require wood directly by default next tier requires previous tier so all splitters and undergrounds require wood red green wires require rubber at this tech level via resin via wood in pure angels industry only or industry with components red green wires do not require rubber undergrounds and splitters do not require wood either in pure bobs you can use bobs greenhouses or from heavy oil just a bit later the cost of wood resin rubber is a lot lower compared to what it is in angels too in pure bobs hand chopped wood will get you undergrounds or splitters or red green wires wood resin rubber insulated wires red green wires wood red green wire in angels none of those things depend on wood this is the first pain point cost of wires is crazy high in angels bobs that same hand chopped wood will get you undergrounds splitters or red green wires wood resin rubber insulated wires red green wood red green wire thus while not directly dependent on higher level tech red green wires at any scalable level needs large scale wood reproduction which does since science colors are different in bobs i ll use and here angles red bobs yellow automation angles green bobs red transport in pure bobs creating bobs greenhouses only requires there are a total of techs needed and on tech pure angels doesn t need rubber but grey boards which can come from automated paper in industry only you need and tech angels components needs techs and tech this is the pain on top of the cost of wires pain above angels bobs requires techs and tech to make the circuits another techs to make wood assuming you have collected special trees to make the seed generators then another and and an apls research to make special trees to make seed generators at which point you ve unlocked bio resin and liquid resin kinda making the exercise moot and you still need to make the apls as an ingredient for the special trees until you can create wood every underground and splitter will come at the cost of chopping down trees assuming you re map isn t an arid desert i see suggest a few paths to help this out have the tree seed recipe use the seed extractor not the special tree seed generator this keeps the tech level in line with the other iterations keeps the wood cost high for rubber and still more complicated than bob greenhouses the seeds can only be used to create generic trees that get cut into wood alter the insulated wire recipe tinned rubber insulated wire in seconds still twice as expensive vs wood wire as pure bobs wire add a recipe for insulated wire that uses cellulose fiber instead of rubber a la copper cable harness in angels component mode or a combination of the above
| 1
|
89,169
| 17,792,727,904
|
IssuesEvent
|
2021-08-31 18:08:54
|
ClickHouse/ClickHouse
|
https://api.github.com/repos/ClickHouse/ClickHouse
|
closed
|
S3 table engine doesn't support SETTINGS
|
unfinished code comp-s3
|
**Describe the unexpected behaviour**
It should be possible to pass input_format-related settings to the S3 table engine (like for the URL table engine).
**How to reproduce**
Clickhouse server 21.9
```
CREATE TABLE table_with_range (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}', 'CSV') SETTINGS input_format_with_names_use_header=0;
CREATE TABLE table_with_range
(
`name` String,
`value` UInt32
)
ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}', 'CSV')
SETTINGS input_format_with_names_use_header = 0
0 rows in set. Elapsed: 0.011 sec.
Received exception from server (version 21.9.1):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Engine S3 doesn't support SETTINGS clause. Currently only the following engines have support for the feature: [RabbitMQ, Kafka, MySQL, MaterializedPostgreSQL, Join, Set, MergeTree, Memory, URL, ReplicatedVersionedCollapsingMergeTree, ReplacingMergeTree, ReplicatedSummingMergeTree, ReplicatedAggregatingMergeTree, ReplicatedCollapsingMergeTree, File, ReplicatedGraphiteMergeTree, ReplicatedMergeTree, ReplicatedReplacingMergeTree, VersionedCollapsingMergeTree, SummingMergeTree, Distributed, TinyLog, GraphiteMergeTree, CollapsingMergeTree, AggregatingMergeTree, StripeLog, Log]. (BAD_ARGUMENTS)
```
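Until the engine accepts a SETTINGS clause, one possible workaround is to attach the setting at query level through the `s3` table function instead of the table definition. A sketch assuming the third-party `clickhouse-driver` Python client (untested against this exact setup; URL shortened from the report):
```python
from clickhouse_driver import Client  # pip install clickhouse-driver

client = Client("localhost")

# Per-query settings are accepted even where the table-level SETTINGS
# clause is rejected, so read through the s3() table function instead.
rows = client.execute(
    """
    SELECT name, value
    FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1',
            'CSV', 'name String, value UInt32')
    """,
    settings={"input_format_with_names_use_header": 0},
)
print(rows[:3])
```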
|
1.0
|
S3 table engine doesn't support SETTINGS - **Describe the unexpected behaviour**
It should be possible to pass input_format-related settings to the S3 table engine (like for the URL table engine).
**How to reproduce**
Clickhouse server 21.9
```
CREATE TABLE table_with_range (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}', 'CSV') SETTINGS input_format_with_names_use_header=0;
CREATE TABLE table_with_range
(
`name` String,
`value` UInt32
)
ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}', 'CSV')
SETTINGS input_format_with_names_use_header = 0
0 rows in set. Elapsed: 0.011 sec.
Received exception from server (version 21.9.1):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Engine S3 doesn't support SETTINGS clause. Currently only the following engines have support for the feature: [RabbitMQ, Kafka, MySQL, MaterializedPostgreSQL, Join, Set, MergeTree, Memory, URL, ReplicatedVersionedCollapsingMergeTree, ReplacingMergeTree, ReplicatedSummingMergeTree, ReplicatedAggregatingMergeTree, ReplicatedCollapsingMergeTree, File, ReplicatedGraphiteMergeTree, ReplicatedMergeTree, ReplicatedReplacingMergeTree, VersionedCollapsingMergeTree, SummingMergeTree, Distributed, TinyLog, GraphiteMergeTree, CollapsingMergeTree, AggregatingMergeTree, StripeLog, Log]. (BAD_ARGUMENTS)
```
|
non_process
|
table engine doesn t support settings describe the unexpected behaviour it should be possible to pass input format related settings to table engine like for url table engine how to reproduce clickhouse server create table table with range name string value engine csv settings input format with names use header create table table with range name string value engine csv settings input format with names use header rows in set elapsed sec received exception from server version code db exception received from localhost db exception engine doesn t support settings clause currently only the following engines have support for the feature bad arguments
| 0
|
284,827
| 21,472,300,834
|
IssuesEvent
|
2022-04-26 10:36:27
|
sonofmagic/weapp-tailwindcss-webpack-plugin
|
https://api.github.com/repos/sonofmagic/weapp-tailwindcss-webpack-plugin
|
closed
|
Using the uniapp demo, disabled does not take effect
|
documentation question
|
Using the uniapp demo, compiled as a mini program, the disabled class does not take effect
`<button class="bg-green-500 text-white disabled:opacity-50" disabled>disable</button>`
<img width="326" alt="image" src="https://user-images.githubusercontent.com/30710337/165274535-8752bcc3-f7fd-4dac-bb09-f6dbed3d0217.png">
<img width="863" alt="image" src="https://user-images.githubusercontent.com/30710337/165275451-62aa4fd0-323f-4437-adc9-45220d5854aa.png">
Normal effect
<img width="762" alt="image" src="https://user-images.githubusercontent.com/30710337/165277755-26a57c64-3b95-45f3-8780-a9337c13e8dc.png">
|
1.0
|
Using the uniapp demo, disabled does not take effect - Using the uniapp demo, compiled as a mini program, the disabled class does not take effect
`<button class="bg-green-500 text-white disabled:opacity-50" disabled>disable</button>`
<img width="326" alt="image" src="https://user-images.githubusercontent.com/30710337/165274535-8752bcc3-f7fd-4dac-bb09-f6dbed3d0217.png">
<img width="863" alt="image" src="https://user-images.githubusercontent.com/30710337/165275451-62aa4fd0-323f-4437-adc9-45220d5854aa.png">
Normal effect
<img width="762" alt="image" src="https://user-images.githubusercontent.com/30710337/165277755-26a57c64-3b95-45f3-8780-a9337c13e8dc.png">
|
non_process
|
using the uniapp demo disabled does not take effect using the uniapp demo compiled as a mini program the disabled class does not take effect disable img width alt image src img width alt image src normal effect img width alt image src
| 0
|
10,582
| 3,129,824,487
|
IssuesEvent
|
2015-09-09 05:00:55
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
On the central portal backend, extend the HistoryEvent_Service entity and set up calls to its services
|
active hi priority test
|
- [ ] 1. On the central portal backend, add an sDate field of type "date-time" to the HistoryEvent_Service entity (the same as for the Document entity)
- [ ] 2. Via the already implemented filter mechanism (triggered on any service call), implement calls to the central portal services from the regional portal:
- [x] 2.1. addHistoryEvent_Service on every creation of an Activiti task (which is used on the central portal and invoked by the regional one)
- [ ] 2.2. updateHistoryEvent_Service on every change of an Activiti task (assign and submit, which are used on the regional dashboard)
- [ ] 3. Populate the fields with the following data:
- [ ] 3.1 In the status field, set the user-task name (task.getName()) on change, or "Application submitted" on creation
- [ ] 3.2 In the sData field, set the current date of the record's creation/modification
- [ ] 3.3 In sID, on record creation, set the primary ID obtained when the process is created (the one actually returned by Activiti), and do not change it afterwards
- [ ] 4. Make sure that the events from item 2 are also saved to My Journal (as happens with the other events for document upload and granting access to it)
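A minimal Python sketch of items 2-3 (illustration only; the real system is Java/Activiti, and the `central.call` client below is a hypothetical stand-in for the portal service layer):
```python
from datetime import datetime

def on_task_created(central, s_id: str) -> None:
    # Item 3.1: status on creation is "Application submitted";
    # item 3.3: sID is the process ID returned by Activiti, set once.
    central.call("addHistoryEvent_Service",
                 sID=s_id,
                 sStatus="Application submitted",
                 sDate=datetime.now().isoformat())  # item 3.2

def on_task_changed(central, s_id: str, task_name: str) -> None:
    # Item 3.1: status on assign/submit is the user-task name (task.getName()).
    central.call("updateHistoryEvent_Service",
                 sID=s_id,
                 sStatus=task_name,
                 sDate=datetime.now().isoformat())  # item 3.2
```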
|
1.0
|
On the central portal backend, extend the HistoryEvent_Service entity and set up calls to its services - - [ ] 1. On the central portal backend, add an sDate field of type "date-time" to the HistoryEvent_Service entity (the same as for the Document entity)
- [ ] 2. Via the already implemented filter mechanism (triggered on any service call), implement calls to the central portal services from the regional portal:
- [x] 2.1. addHistoryEvent_Service on every creation of an Activiti task (which is used on the central portal and invoked by the regional one)
- [ ] 2.2. updateHistoryEvent_Service on every change of an Activiti task (assign and submit, which are used on the regional dashboard)
- [ ] 3. Populate the fields with the following data:
- [ ] 3.1 In the status field, set the user-task name (task.getName()) on change, or "Application submitted" on creation
- [ ] 3.2 In the sData field, set the current date of the record's creation/modification
- [ ] 3.3 In sID, on record creation, set the primary ID obtained when the process is created (the one actually returned by Activiti), and do not change it afterwards
- [ ] 4. Make sure that the events from item 2 are also saved to My Journal (as happens with the other events for document upload and granting access to it)
|
non_process
|
on the central portal backend extend the historyevent service entity and set up calls to its services on the central portal backend add an sdate field of type date time to the historyevent service entity the same as for the document entity via the already implemented filter mechanism triggered on any service call implement calls to the central portal services from the regional portal addhistoryevent service on every creation of an activiti task which is used on the central portal and invoked by the regional one updatehistoryevent service on every change of an activiti task assign and submit which are used on the regional dashboard populate the fields with the following data in the status field set the user task name task getname on change or application submitted on creation in the sdata field set the current date of the record s creation modification in sid on record creation set the primary id obtained when the process is created the one actually returned by activiti and do not change it afterwards make sure that the events from item are also saved to my journal as happens with the other events for document upload and granting access to it
| 0
|
5,751
| 8,597,160,120
|
IssuesEvent
|
2018-11-15 17:49:58
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Firestore: conformance tests hiding failures
|
api: firestore testing type: process
|
Since dd4b6465af550deb642733c3e52b9ed131141bdb (PR #4851), the `test_cross_language` testcase has been swallowing errors.
And therefore we have a bunch of bugs to fix. :(
|
1.0
|
Firestore: conformance tests hiding failures - Since dd4b6465af550deb642733c3e52b9ed131141bdb (PR #4851), the `test_cross_language` testcase has been swallowing errors.
And therefore we have a bunch of bugs to fix. :(
|
process
|
firestore conformance tests hiding failures since pr the test cross language testcase has been swallowing errors and therefore we have a bunch of bugs to fix
| 1
|
19,573
| 25,893,964,489
|
IssuesEvent
|
2022-12-14 20:32:23
|
googleapis/google-cloud-python-happybase
|
https://api.github.com/repos/googleapis/google-cloud-python-happybase
|
closed
|
Dependency Dashboard
|
type: process api: bigtable
|
This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Rate-Limited
These updates are currently rate-limited. Click on a checkbox below to force their creation now.
- [ ] <!-- unlimit-branch=renovate/virtualenv-20.x -->chore(deps): update dependency virtualenv to v20.17.1
- [ ] <!-- unlimit-branch=renovate/zipp-3.x -->chore(deps): update dependency zipp to v3.11.0
- [ ] <!-- unlimit-branch=renovate/charset-normalizer-3.x -->chore(deps): update dependency charset-normalizer to v3
- [ ] <!-- unlimit-branch=renovate/packaging-22.x -->chore(deps): update dependency packaging to v22
- [ ] <!-- unlimit-branch=renovate/protobuf-4.x -->chore(deps): update dependency protobuf to v4
- [ ] <!-- create-all-rate-limited-prs -->🔐 **Create all rate-limited PRs at once** 🔐
## Edited/Blocked
These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.
- [ ] <!-- rebase-branch=renovate/filelock-3.x -->[chore(deps): update dependency filelock to v3.8.2](../pull/103)
- [ ] <!-- rebase-branch=renovate/gcp-releasetool-1.x -->[chore(deps): update dependency gcp-releasetool to v1.10.1](../pull/102)
- [ ] <!-- rebase-branch=renovate/google-api-core-2.x -->[chore(deps): update dependency google-api-core to v2.11.0](../pull/104)
- [ ] <!-- rebase-branch=renovate/google-auth-2.x -->[chore(deps): update dependency google-auth to v2.15.0](../pull/105)
- [ ] <!-- rebase-branch=renovate/google-cloud-storage-2.x -->[chore(deps): update dependency google-cloud-storage to v2.7.0](../pull/106)
- [ ] <!-- rebase-branch=renovate/importlib-metadata-5.x -->[chore(deps): update dependency importlib-metadata to v5.1.0](../pull/107)
- [ ] <!-- rebase-branch=renovate/nox-2022.x -->[chore(deps): update dependency nox to v2022.11.21](../pull/108)
- [ ] <!-- rebase-branch=renovate/pkginfo-1.x -->[chore(deps): update dependency pkginfo to v1.9.2](../pull/109)
- [ ] <!-- rebase-branch=renovate/platformdirs-2.x -->[chore(deps): update dependency platformdirs to v2.6.0](../pull/110)
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/setuptools-65.x -->[chore(deps): update dependency setuptools to v65.6.3](../pull/111)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/cryptography-38.x -->[chore(deps): update dependency cryptography to v38.0.4](../pull/101)
- [ ] <!-- recreate-branch=renovate/twine-4.x -->[chore(deps): update dependency twine to v4.0.2](../pull/94)
- [ ] <!-- recreate-branch=renovate/urllib3-1.x -->[chore(deps): update dependency urllib3 to v1.26.13](../pull/95)
- [ ] <!-- recreate-branch=renovate/click-8.x -->[chore(deps): update dependency click to v8.1.3](../pull/97)
## Detected dependencies
<details><summary>pip_requirements</summary>
<blockquote>
<details><summary>.kokoro/requirements.txt</summary>
- `argcomplete ==2.0.0`
- `attrs ==22.1.0`
- `bleach ==5.0.1`
- `cachetools ==5.2.0`
- `certifi ==2022.12.7`
- `cffi ==1.15.1`
- `charset-normalizer ==2.1.1`
- `click ==8.0.4`
- `colorlog ==6.7.0`
- `commonmark ==0.9.1`
- `cryptography ==38.0.3`
- `distlib ==0.3.6`
- `docutils ==0.19`
- `filelock ==3.8.0`
- `gcp-docuploader ==0.6.4`
- `gcp-releasetool ==1.10.0`
- `google-api-core ==2.10.2`
- `google-auth ==2.14.1`
- `google-cloud-core ==2.3.2`
- `google-cloud-storage ==2.6.0`
- `google-crc32c ==1.5.0`
- `google-resumable-media ==2.4.0`
- `googleapis-common-protos ==1.57.0`
- `idna ==3.4`
- `importlib-metadata ==5.0.0`
- `jaraco-classes ==3.2.3`
- `jeepney ==0.8.0`
- `jinja2 ==3.1.2`
- `keyring ==23.11.0`
- `markupsafe ==2.1.1`
- `more-itertools ==9.0.0`
- `nox ==2022.8.7`
- `packaging ==21.3`
- `pkginfo ==1.8.3`
- `platformdirs ==2.5.4`
- `protobuf ==3.20.3`
- `py ==1.11.0`
- `pyasn1 ==0.4.8`
- `pyasn1-modules ==0.2.8`
- `pycparser ==2.21`
- `pygments ==2.13.0`
- `pyjwt ==2.6.0`
- `pyparsing ==3.0.9`
- `pyperclip ==1.8.2`
- `python-dateutil ==2.8.2`
- `readme-renderer ==37.3`
- `requests ==2.28.1`
- `requests-toolbelt ==0.10.1`
- `rfc3986 ==2.0.0`
- `rich ==12.6.0`
- `rsa ==4.9`
- `secretstorage ==3.3.3`
- `six ==1.16.0`
- `twine ==4.0.1`
- `typing-extensions ==4.4.0`
- `urllib3 ==1.26.12`
- `virtualenv ==20.16.7`
- `webencodings ==0.5.1`
- `wheel ==0.38.4`
- `zipp ==3.10.0`
- `setuptools ==65.5.1`
</details>
</blockquote>
</details>
<details><summary>pip_setup</summary>
<blockquote>
<details><summary>setup.py</summary>
- `google-cloud-bigtable >= 0.31.0`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Rate-Limited
These updates are currently rate-limited. Click on a checkbox below to force their creation now.
- [ ] <!-- unlimit-branch=renovate/virtualenv-20.x -->chore(deps): update dependency virtualenv to v20.17.1
- [ ] <!-- unlimit-branch=renovate/zipp-3.x -->chore(deps): update dependency zipp to v3.11.0
- [ ] <!-- unlimit-branch=renovate/charset-normalizer-3.x -->chore(deps): update dependency charset-normalizer to v3
- [ ] <!-- unlimit-branch=renovate/packaging-22.x -->chore(deps): update dependency packaging to v22
- [ ] <!-- unlimit-branch=renovate/protobuf-4.x -->chore(deps): update dependency protobuf to v4
- [ ] <!-- create-all-rate-limited-prs -->🔐 **Create all rate-limited PRs at once** 🔐
## Edited/Blocked
These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.
- [ ] <!-- rebase-branch=renovate/filelock-3.x -->[chore(deps): update dependency filelock to v3.8.2](../pull/103)
- [ ] <!-- rebase-branch=renovate/gcp-releasetool-1.x -->[chore(deps): update dependency gcp-releasetool to v1.10.1](../pull/102)
- [ ] <!-- rebase-branch=renovate/google-api-core-2.x -->[chore(deps): update dependency google-api-core to v2.11.0](../pull/104)
- [ ] <!-- rebase-branch=renovate/google-auth-2.x -->[chore(deps): update dependency google-auth to v2.15.0](../pull/105)
- [ ] <!-- rebase-branch=renovate/google-cloud-storage-2.x -->[chore(deps): update dependency google-cloud-storage to v2.7.0](../pull/106)
- [ ] <!-- rebase-branch=renovate/importlib-metadata-5.x -->[chore(deps): update dependency importlib-metadata to v5.1.0](../pull/107)
- [ ] <!-- rebase-branch=renovate/nox-2022.x -->[chore(deps): update dependency nox to v2022.11.21](../pull/108)
- [ ] <!-- rebase-branch=renovate/pkginfo-1.x -->[chore(deps): update dependency pkginfo to v1.9.2](../pull/109)
- [ ] <!-- rebase-branch=renovate/platformdirs-2.x -->[chore(deps): update dependency platformdirs to v2.6.0](../pull/110)
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/setuptools-65.x -->[chore(deps): update dependency setuptools to v65.6.3](../pull/111)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/cryptography-38.x -->[chore(deps): update dependency cryptography to v38.0.4](../pull/101)
- [ ] <!-- recreate-branch=renovate/twine-4.x -->[chore(deps): update dependency twine to v4.0.2](../pull/94)
- [ ] <!-- recreate-branch=renovate/urllib3-1.x -->[chore(deps): update dependency urllib3 to v1.26.13](../pull/95)
- [ ] <!-- recreate-branch=renovate/click-8.x -->[chore(deps): update dependency click to v8.1.3](../pull/97)
## Detected dependencies
<details><summary>pip_requirements</summary>
<blockquote>
<details><summary>.kokoro/requirements.txt</summary>
- `argcomplete ==2.0.0`
- `attrs ==22.1.0`
- `bleach ==5.0.1`
- `cachetools ==5.2.0`
- `certifi ==2022.12.7`
- `cffi ==1.15.1`
- `charset-normalizer ==2.1.1`
- `click ==8.0.4`
- `colorlog ==6.7.0`
- `commonmark ==0.9.1`
- `cryptography ==38.0.3`
- `distlib ==0.3.6`
- `docutils ==0.19`
- `filelock ==3.8.0`
- `gcp-docuploader ==0.6.4`
- `gcp-releasetool ==1.10.0`
- `google-api-core ==2.10.2`
- `google-auth ==2.14.1`
- `google-cloud-core ==2.3.2`
- `google-cloud-storage ==2.6.0`
- `google-crc32c ==1.5.0`
- `google-resumable-media ==2.4.0`
- `googleapis-common-protos ==1.57.0`
- `idna ==3.4`
- `importlib-metadata ==5.0.0`
- `jaraco-classes ==3.2.3`
- `jeepney ==0.8.0`
- `jinja2 ==3.1.2`
- `keyring ==23.11.0`
- `markupsafe ==2.1.1`
- `more-itertools ==9.0.0`
- `nox ==2022.8.7`
- `packaging ==21.3`
- `pkginfo ==1.8.3`
- `platformdirs ==2.5.4`
- `protobuf ==3.20.3`
- `py ==1.11.0`
- `pyasn1 ==0.4.8`
- `pyasn1-modules ==0.2.8`
- `pycparser ==2.21`
- `pygments ==2.13.0`
- `pyjwt ==2.6.0`
- `pyparsing ==3.0.9`
- `pyperclip ==1.8.2`
- `python-dateutil ==2.8.2`
- `readme-renderer ==37.3`
- `requests ==2.28.1`
- `requests-toolbelt ==0.10.1`
- `rfc3986 ==2.0.0`
- `rich ==12.6.0`
- `rsa ==4.9`
- `secretstorage ==3.3.3`
- `six ==1.16.0`
- `twine ==4.0.1`
- `typing-extensions ==4.4.0`
- `urllib3 ==1.26.12`
- `virtualenv ==20.16.7`
- `webencodings ==0.5.1`
- `wheel ==0.38.4`
- `zipp ==3.10.0`
- `setuptools ==65.5.1`
</details>
</blockquote>
</details>
<details><summary>pip_setup</summary>
<blockquote>
<details><summary>setup.py</summary>
- `google-cloud-bigtable >= 0.31.0`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue lists renovate updates and detected dependencies read the docs to learn more rate limited these updates are currently rate limited click on a checkbox below to force their creation now chore deps update dependency virtualenv to chore deps update dependency zipp to chore deps update dependency charset normalizer to chore deps update dependency packaging to chore deps update dependency protobuf to 🔐 create all rate limited prs at once 🔐 edited blocked these updates have been manually edited so renovate will no longer make changes to discard all commits and start over click on a checkbox pull pull pull pull pull pull pull pull pull open these updates have all been created already click a checkbox below to force a retry rebase of any pull ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull pull pull detected dependencies pip requirements kokoro requirements txt argcomplete attrs bleach cachetools certifi cffi charset normalizer click colorlog commonmark cryptography distlib docutils filelock gcp docuploader gcp releasetool google api core google auth google cloud core google cloud storage google google resumable media googleapis common protos idna importlib metadata jaraco classes jeepney keyring markupsafe more itertools nox packaging pkginfo platformdirs protobuf py modules pycparser pygments pyjwt pyparsing pyperclip python dateutil readme renderer requests requests toolbelt rich rsa secretstorage six twine typing extensions virtualenv webencodings wheel zipp setuptools pip setup setup py google cloud bigtable check this box to trigger a request for renovate to run again on this repository
| 1
|
266,559
| 8,371,438,830
|
IssuesEvent
|
2018-10-05 00:39:50
|
matsengrp/olmsted
|
https://api.github.com/repos/matsengrp/olmsted
|
closed
|
Add UI and MVC logic around multiple reconstructions
|
high-priority react
|
Need to be able to change between reconstructions in the viz now that #6 has been resolved in #69.
|
1.0
|
Add UI and MVC logic around multiple reconstructions - Need to be able to change between reconstructions in the viz now that #6 has been resolved in #69.
|
non_process
|
add ui and mvc logic around multiple reconstructions need to be able to change between reconstructions in the viz now that has been resolved in
| 0
|
13,499
| 16,040,003,734
|
IssuesEvent
|
2021-04-22 06:29:49
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Cannot see Managed identity preview feature in Australia Central
|
Pri1 automation/svc cxp process-automation/subsvc product-question triaged
|
[Enter feedback here]
Hi
I cannot see the Managed identity preview feature in Australia Central. Can you please advise the regions where this is available, and the ETA for Australia Central?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d9ce2612-b600-3fca-3315-a7836ef91c96
* Version Independent ID: 78766eed-c3c6-ce60-7620-17b99f3d9d5e
* Content: [Enable a managed identity for your Azure Automation account (preview)](https://docs.microsoft.com/en-us/azure/automation/enable-managed-identity-for-automation)
* Content Source: [articles/automation/enable-managed-identity-for-automation.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/enable-managed-identity-for-automation.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
Cannot see Managed identity preview feature in Australia Central -
[Enter feedback here]
Hi
I cannot see the Managed identity preview feature in Australia Central. Can you please advise the regions where this is available, and the ETA for Australia Central?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d9ce2612-b600-3fca-3315-a7836ef91c96
* Version Independent ID: 78766eed-c3c6-ce60-7620-17b99f3d9d5e
* Content: [Enable a managed identity for your Azure Automation account (preview)](https://docs.microsoft.com/en-us/azure/automation/enable-managed-identity-for-automation)
* Content Source: [articles/automation/enable-managed-identity-for-automation.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/enable-managed-identity-for-automation.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
cannot see managed identity preview feature in australia central hi i cannot see managed identity preview feature in australia central can you please advise regions where this is avail and eta for au central document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
157,959
| 12,397,062,212
|
IssuesEvent
|
2020-05-20 21:48:45
|
aliasrobotics/RVD
|
https://api.github.com/repos/aliasrobotics/RVD
|
opened
|
(warning) Member variable 'RobotLinkSelectionHandler
|
bug cppcheck static analysis testing triage
|
```yaml
{
"id": 1,
"title": "(warning) Member variable 'RobotLinkSelectionHandler",
"type": "bug",
"description": "[src/rviz/src/rviz/robot/robot_link.cpp:88]: (warning) Member variable 'RobotLinkSelectionHandler::position_property_' is not initialized in the constructor.",
"cwe": "None",
"cve": "None",
"keywords": [
"cppcheck",
"static analysis",
"testing",
"triage",
"bug"
],
"system": "src/rviz/src/rviz/robot/robot_link.cpp",
"vendor": null,
"severity": {
"rvss-score": 0,
"rvss-vector": "",
"severity-description": "",
"cvss-score": 0,
"cvss-vector": ""
},
"links": "",
"flaw": {
"phase": "testing",
"specificity": "N/A",
"architectural-location": "N/A",
"application": "N/A",
"subsystem": "N/A",
"package": "N/A",
"languages": "None",
"date-detected": "2020-05-20 (21:48)",
"detected-by": "Alias Robotics",
"detected-by-method": "testing static",
"date-reported": "2020-05-20 (21:48)",
"reported-by": "Alias Robotics",
"reported-by-relationship": "automatic",
"issue": "",
"reproducibility": "always",
"trace": "",
"reproduction": "See artifacts below (if available)",
"reproduction-image": "gitlab.com/aliasrobotics/offensive/alurity/pipelines/active/pipeline_ros_noetic/-/jobs/561514360/artifacts/download"
},
"exploitation": {
"description": "",
"exploitation-image": "",
"exploitation-vector": ""
},
"mitigation": {
"description": "",
"pull-request": "",
"date-mitigation": ""
}
}
```
|
1.0
|
(warning) Member variable 'RobotLinkSelectionHandler - ```yaml
{
"id": 1,
"title": "(warning) Member variable 'RobotLinkSelectionHandler",
"type": "bug",
"description": "[src/rviz/src/rviz/robot/robot_link.cpp:88]: (warning) Member variable 'RobotLinkSelectionHandler::position_property_' is not initialized in the constructor.",
"cwe": "None",
"cve": "None",
"keywords": [
"cppcheck",
"static analysis",
"testing",
"triage",
"bug"
],
"system": "src/rviz/src/rviz/robot/robot_link.cpp",
"vendor": null,
"severity": {
"rvss-score": 0,
"rvss-vector": "",
"severity-description": "",
"cvss-score": 0,
"cvss-vector": ""
},
"links": "",
"flaw": {
"phase": "testing",
"specificity": "N/A",
"architectural-location": "N/A",
"application": "N/A",
"subsystem": "N/A",
"package": "N/A",
"languages": "None",
"date-detected": "2020-05-20 (21:48)",
"detected-by": "Alias Robotics",
"detected-by-method": "testing static",
"date-reported": "2020-05-20 (21:48)",
"reported-by": "Alias Robotics",
"reported-by-relationship": "automatic",
"issue": "",
"reproducibility": "always",
"trace": "",
"reproduction": "See artifacts below (if available)",
"reproduction-image": "gitlab.com/aliasrobotics/offensive/alurity/pipelines/active/pipeline_ros_noetic/-/jobs/561514360/artifacts/download"
},
"exploitation": {
"description": "",
"exploitation-image": "",
"exploitation-vector": ""
},
"mitigation": {
"description": "",
"pull-request": "",
"date-mitigation": ""
}
}
```
|
non_process
|
warning member variable robotlinkselectionhandler yaml id title warning member variable robotlinkselectionhandler type bug description warning member variable robotlinkselectionhandler position property is not initialized in the constructor cwe none cve none keywords cppcheck static analysis testing triage bug system src rviz src rviz robot robot link cpp vendor null severity rvss score rvss vector severity description cvss score cvss vector links flaw phase testing specificity n a architectural location n a application n a subsystem n a package n a languages none date detected detected by alias robotics detected by method testing static date reported reported by alias robotics reported by relationship automatic issue reproducibility always trace reproduction see artifacts below if available reproduction image gitlab com aliasrobotics offensive alurity pipelines active pipeline ros noetic jobs artifacts download exploitation description exploitation image exploitation vector mitigation description pull request date mitigation
| 0
|
16,600
| 22,676,504,954
|
IssuesEvent
|
2022-07-04 05:28:40
|
safing/portmaster
|
https://api.github.com/repos/safing/portmaster
|
closed
|
Avast One
|
waiting for input in/compatibility
|
**Pre-Submit Checklist**:
- Check applicable sources for existing issues:
- [Linux Compatibility](https://docs.safing.io/portmaster/install/linux#compatibility)
- [VPN Compatibility](https://docs.safing.io/portmaster/install/status/vpn-compatibility)
- [Github Issues](https://github.com/safing/portmaster/issues?q=is%3Aissue+label%3Ain%2Fcompatibility)
**What worked?**
**What did not work?**
everything
**Debug Information**:
<!--
Paste debug information below if reporting a problem:
- General issue: Click on "Copy Debug Information" on the Settings page.
- App related issue: Click on "Copy Debug Information" in the dropdown menu of an app in the Monitor view.
⚠ Please remove sensitive/private information from the "Unexpected Logs" and "Network Connections" sections.
This is easiest to do in the preview mode.
If needed, additional logs can be found here:
- Linux: `/opt/safing/portmaster/logs`
- Windows: `%PROGRAMDATA%\Safing\Portmaster\logs`
-->
|
True
|
Avast One - **Pre-Submit Checklist**:
- Check applicable sources for existing issues:
- [Linux Compatibility](https://docs.safing.io/portmaster/install/linux#compatibility)
- [VPN Compatibility](https://docs.safing.io/portmaster/install/status/vpn-compatibility)
- [Github Issues](https://github.com/safing/portmaster/issues?q=is%3Aissue+label%3Ain%2Fcompatibility)
**What worked?**
**What did not work?**
everything
**Debug Information**:
<!--
Paste debug information below if reporting a problem:
- General issue: Click on "Copy Debug Information" on the Settings page.
- App related issue: Click on "Copy Debug Information" in the dropdown menu of an app in the Monitor view.
⚠ Please remove sensitive/private information from the "Unexpected Logs" and "Network Connections" sections.
This is easiest to do in the preview mode.
If needed, additional logs can be found here:
- Linux: `/opt/safing/portmaster/logs`
- Windows: `%PROGRAMDATA%\Safing\Portmaster\logs`
-->
|
non_process
|
avast one pre submit checklist check applicable sources for existing issues what worked what did not work everything debug information paste debug information below if reporting a problem general issue click on copy debug information on the settings page app related issue click on copy debug information in the dropdown menu of an app in the monitor view ⚠ please remove sensitive private information from the unexpected logs and network connections sections this is easiest to do in the preview mode if needed additional logs can be found here linux opt safing portmaster logs windows programdata safing portmaster logs
| 0
|
19,096
| 25,148,001,579
|
IssuesEvent
|
2022-11-10 07:41:14
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
SAGA Upslope Area tool error: tool need graphical user interface
|
Processing Bug
|
### What is the bug or the crash?
The SAGA Upslope Area tool in the QGIS Processing Toolbox gives the following error:
tool need graphical user interface [Upslope Area]
and doesn't produce the output.

### Steps to reproduce the issue
1. Click right on pixel in the DEM and copy the coordinate with the same CRS as the Filled DEM (306271.5,5622234.9 in the sample dataset).
2. In the Processing toolbox go to SAGA | Terrain Analysis - Hydrology | Upslope Area
3. Paste the X and Y coordinate copied from step 1.
4. Choose the Filled DEM as Elevation, use Method = [0] Deterministic 8. Keep the rest as default.
5. Click Run

[Filled_DEM_sample.zip](https://github.com/qgis/QGIS/files/6916516/Filled_DEM_sample.zip)
### Versions
The tool worked well until QGIS 3.16.4 with the SAGA 2.x version.
The error is observed in version 3.16.9 and 3.20.1 (I didn't test other versions, but users report the same problem).
The error is also observed with the SAGA Next Gen experimental plugin.
QGIS version | 3.16.9-Hannover | QGIS code revision | 9f8d2f79
-- | -- | -- | --
Compiled against Qt | 5.15.2 | Running against Qt | 5.15.2
Compiled against GDAL/OGR | 3.3.1 | Running against GDAL/OGR | 3.3.1
Compiled against GEOS | 3.9.1-CAPI-1.14.2 | Running against GEOS | 3.9.1-CAPI-1.14.2
Compiled against SQLite | 3.35.2 | Running against SQLite | 3.35.2
PostgreSQL Client Version | 13.0 | SpatiaLite Version | 5.0.1
QWT Version | 6.1.3 | QScintilla2 Version | 2.11.5
Compiled against PROJ | 8.1.0 | Running against PROJ | Rel. 8.1.0, July 1st, 2021
OS Version | Windows 10 Version 2009
Active python plugins | GeoCoding; quick_map_services; db_manager; MetaSearch; processing; processing_saga_nextgen
### Additional context
In my opinion the LTR versions should use the stable SAGA processing tools. Now all SAGA tools give a warning and there's no (easy?) way to downgrade QGIS on Windows to version 3.16.4 when it was still working okay.
|
1.0
|
SAGA Upslope Area tool error: tool need graphical user interface - ### What is the bug or the crash?
The SAGA Upslope Area tool in the QGIS Processing Toolbox gives the following error:
tool need graphical user interface [Upslope Area]
and doesn't produce the output.

### Steps to reproduce the issue
1. Click right on pixel in the DEM and copy the coordinate with the same CRS as the Filled DEM (306271.5,5622234.9 in the sample dataset).
2. In the Processing toolbox go to SAGA | Terrain Analysis - Hydrology | Upslope Area
3. Paste the X and Y coordinate copied from step 1.
4. Choose the Filled DEM as Elevation, use Method = [0] Deterministic 8. Keep the rest as default.
5. Click Run

[Filled_DEM_sample.zip](https://github.com/qgis/QGIS/files/6916516/Filled_DEM_sample.zip)
### Versions
The tool worked well until QGIS 3.16.4 with the SAGA 2.x version.
The error is observed in version 3.16.9 and 3.20.1 (I didn't test other versions, but users report the same problem).
The error is also observed with the SAGA Next Gen experimental plugin.
QGIS version | 3.16.9-Hannover | QGIS code revision | 9f8d2f79
-- | -- | -- | --
Compiled against Qt | 5.15.2 | Running against Qt | 5.15.2
Compiled against GDAL/OGR | 3.3.1 | Running against GDAL/OGR | 3.3.1
Compiled against GEOS | 3.9.1-CAPI-1.14.2 | Running against GEOS | 3.9.1-CAPI-1.14.2
Compiled against SQLite | 3.35.2 | Running against SQLite | 3.35.2
PostgreSQL Client Version | 13.0 | SpatiaLite Version | 5.0.1
QWT Version | 6.1.3 | QScintilla2 Version | 2.11.5
Compiled against PROJ | 8.1.0 | Running against PROJ | Rel. 8.1.0, July 1st, 2021
OS Version | Windows 10 Version 2009
Active python plugins | GeoCoding; quick_map_services; db_manager; MetaSearch; processing; processing_saga_nextgen
### Additional context
In my opinion the LTR versions should use the stable SAGA processing tools. Now all SAGA tools give a warning and there's no (easy?) way to downgrade QGIS on Windows to version 3.16.4 when it was still working okay.
|
process
|
saga upslope area tool error tool need graphical user interface what is the bug or the crash the saga upslope area tool in the qgis processing toolbox gives the following error tool need graphical user interface and doesn t produce the output steps to reproduce the issue click right on pixel in the dem and copy the coordinate with the same crs as the filled dem in the sample dataset in the processing toolbox go to saga terrain analysis hydrology upslope area paste the x and y coordinate copied from step choose the filled dem as elevation use method deterministic keep the rest as default click run versions the tool worked well until qgis with the saga x version the error is observed in version and i didn t test other vesions but users report the same problem the error is also observed with the saga next gen experimental plugin doctype html public dtd html en p li white space pre wrap qgis version hannover qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel july os version windows version active python plugins geocoding quick map services db manager metasearch processing processing saga nextgen additional context in my opinion the ltr versions should use the stable saga processing tools now all saga tools give a warning and there s no easy way to downgrade qgis on windows to version when it was still working okay
| 1
|
391,480
| 26,894,771,767
|
IssuesEvent
|
2023-02-06 11:33:40
|
OBOFoundry/OBO-Dashboard
|
https://api.github.com/repos/OBOFoundry/OBO-Dashboard
|
closed
|
Make dashboard check links prominent within the dashboard report
|
documentation
|
From OBOFoundry/OBOFoundry.github.io#2254
To make it more obvious to a user what needs to be done to rectify an issue reported by the dashboard, the information (or links to it) should be found directly in the dashboard report. Currently, the 'Error Breakdown' section has a header with the following:
Check | Status | Comment | Resources
-- | -- | -- | --
and the contents under 'Check' are links to the principles pages. I'd change that title to 'Principle' and add a column 'Automated Check' that links to the relevant page for each principle.
Principle | Check | Status | Comment | Resources
-- | -- | -- | -- | --
This provides all relevant links in the location likely expected by a user.
|
1.0
|
Make dashboard check links prominent within the dashboard report - From OBOFoundry/OBOFoundry.github.io#2254
To make it more obvious to a user what needs to be done to rectify an issue reported by the dashboard, the information (or links to it) should be found directly in the dashboard report. Currently, the 'Error Breakdown' section has a header with the following:
Check | Status | Comment | Resources
-- | -- | -- | --
and the contents under 'Check' are links to the principles pages. I'd change that title to 'Principle' and add a column 'Automated Check' that links to the relevant page for each principle.
Principle | Check | Status | Comment | Resources
-- | -- | -- | -- | --
This provides all relevant links in the location likely expected by a user.
|
non_process
|
make dashboard check links prominent within the dashboard report from obofoundry obofoundry github io to make it more obvious to a user what needs to be done to rectify an issue reported by the dashboard the information or links to it should be found directly in the dashboard report currently the error breakdown section has a header with the following check status comment resources and the contents under check are links to the principles pages i d change that title to principle and add a column automated check that links to the relevant page for each principle principle check status comment resources this provides all relevant links in the location likely expected by a user
| 0
|
828,987
| 31,850,142,781
|
IssuesEvent
|
2023-09-15 00:27:36
|
GoogleCloudPlatform/nodejs-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/nodejs-docs-samples
|
opened
|
run/helloworld: Service uses the NAME override failed
|
type: bug priority: p2 flakybot: issue
|
Note: #2802 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 60f1644e5cbae0b899d4687223d1a435385992b7
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/0043fbe2-5bfb-4271-8220-794f6c742849), [Sponge](http://sponge2/0043fbe2-5bfb-4271-8220-794f6c742849)
status: failed
<details><summary>Test output</summary><br><pre>Did not fallback to default as expected
+ expected - actual
-404
+200
AssertionError [ERR_ASSERTION]: Did not fallback to default as expected
at Context.<anonymous> (test/system.test.js:96:12)
at processTicksAndRejections (node:internal/process/task_queues:96:5)</pre></details>
|
1.0
|
run/helloworld: Service uses the NAME override failed - Note: #2802 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 60f1644e5cbae0b899d4687223d1a435385992b7
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/0043fbe2-5bfb-4271-8220-794f6c742849), [Sponge](http://sponge2/0043fbe2-5bfb-4271-8220-794f6c742849)
status: failed
<details><summary>Test output</summary><br><pre>Did not fallback to default as expected
+ expected - actual
-404
+200
AssertionError [ERR_ASSERTION]: Did not fallback to default as expected
at Context.<anonymous> (test/system.test.js:96:12)
at processTicksAndRejections (node:internal/process/task_queues:96:5)</pre></details>
|
non_process
|
run helloworld service uses the name override failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output did not fallback to default as expected expected actual assertionerror did not fallback to default as expected at context test system test js at processticksandrejections node internal process task queues
| 0
|
224,852
| 17,778,513,989
|
IssuesEvent
|
2021-08-30 23:04:58
|
lightningnetwork/lnd
|
https://api.github.com/repos/lightningnetwork/lnd
|
closed
|
[neutrino] confirmation notification might be missed if registered at tip
|
bug neutrino test flake P1
|
In the neutrino chainnotifier there is a slight chance that a confirmation notification gets missed if the block that confirms it arrives at the exact same time. It is made even worse by the fact that the height gets committed to the height hint cache, resulting in any subsequent rescan starting from a block after the one where it actually got confirmed.
I suspect this is the cause of some of the integration test flakes we are seeing for the Neutrino Backend.
### Scenario
1. The client registers for confirmation of a given txid: https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/neutrinonotify/neutrino.go#L827
2. The request is registered with the `TxNotifier`: https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/neutrinonotify/neutrino.go#L835
3. While this is happening, the block at which the transaction is confirmed gets mined, and notified by Neutrino: https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/neutrinonotify/neutrino.go#L246. Since we hadn't yet registered the transaction with the filtered chain view, this block doesn't have the transaction included.
4. The block is connected at tip: https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/neutrinonotify/neutrino.go#L607
5. Since the tx is not found in the filtered block, the height hint for the conf notification is updated: https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/txnotifier.go#L1381
6. Now the filter is updated with the new tx: https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/neutrinonotify/neutrino.go#L861
7. Since the backend was told to rewind back to the height (https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/neutrinonotify/neutrino.go#L864) where the tx was registered with the `TxNotifier`, the block where it confirmed will be notified again, but will this time be ignored since it is not at the expected "next height": https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/txnotifier.go#L1359
This causes the confirmation to go undetected.
### Other backends
AFAIK this seems to not be a problem on other backends, since we don't filter the blocks there, so the transaction will be found the first time we receive the block. However, we've seen similar symptoms with the `bitcoind` backend in the past that we never got to the bottom of (https://github.com/lightningnetwork/lnd/issues/3692), so there might be similar events that could trigger something like this on the other backends.
### Fix
A fix doesn't seem to be straightforward, as there is a de-sync between updating the filter and handling the received block. One thing that comes to mind is to accept blocks in the past being notified again, but it looks like that would violate some assumptions. A better solution could be to improve the Neutrino API such that one could atomically update the filter.
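To make the ordering problem concrete, here is a minimal Python sketch of the race (purely illustrative, not lnd code; the class and method names are mine):
```python
# Toy model of the missed confirmation: a notifier that, like step 7 above,
# drops a re-delivered block when it is no longer at the expected "next height".
class ToyTxNotifier:
    def __init__(self, tip):
        self.next_height = tip + 1   # only this height is accepted next
        self.watched = set()         # txids we filter blocks for
        self.confirmed = set()

    def register(self, txid):
        self.watched.add(txid)

    def connect_block(self, height, txids):
        if height != self.next_height:   # rewound block is ignored (step 7)
            return
        self.next_height += 1
        self.confirmed |= self.watched & set(txids)

n = ToyTxNotifier(tip=100)
n.connect_block(101, ["tx_a"])    # block arrives before the filter knows tx_a (step 3)
n.register("tx_a")                # filter/notifier updated afterwards (steps 2, 6)
n.connect_block(101, ["tx_a"])    # rewind re-delivers block 101 (step 7)
assert "tx_a" not in n.confirmed  # the confirmation is missed
```
An atomic "register and rewind" on the Neutrino side would remove the window between the first delivery of block 101 and the filter update.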
|
1.0
|
[neutrino] confirmation notification might be missed if registered at tip - In the neutrino chainnotifier there is a slight chance that a confirmation notification gets missed if the block that confirms it arrives at the exact same time. It is made even worse by the fact that the height gets committed to the height hint cache, resulting in any subsequent rescan starting from a block after the one where it actually got confirmed.
I suspect this is the cause of some of the integration test flakes we are seeing for the Neutrino Backend.
### Scenario
1. The client registers for confirmation of a given txid: https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/neutrinonotify/neutrino.go#L827
2. The request is registered with the `TxNotifier`: https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/neutrinonotify/neutrino.go#L835
3. While this is happening, the block at which the transaction is confirmed gets mined, and notified by Neutrino: https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/neutrinonotify/neutrino.go#L246. Since we hadn't yet registered the transaction with the filtered chain view, this block doesn't have the transaction included.
4. The block is connected at tip: https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/neutrinonotify/neutrino.go#L607
5. Since the tx is not found in the filtered block, the height hint for the conf notification is updated: https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/txnotifier.go#L1381
6. Now the filter is updated with the new tx: https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/neutrinonotify/neutrino.go#L861
7. Since the backend was told to rewind back to the height (https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/neutrinonotify/neutrino.go#L864) where the tx was registered with the `TxNotifier`, the block where it confirmed will be notified again, but will this time be ignored since it is not at the expected "next height": https://github.com/lightningnetwork/lnd/blob/f8dda6f0eeb99a2a53c4b49ddfd196689d3fbabd/chainntnfs/txnotifier.go#L1359
This causes the confirmation to go undetected.
### Other backends
AFAIK this seems to not be a problem on other backends, since we don't filter the blocks there, so the transaction will be found the first time we receive the block. However, we've seen similar symptoms with the `bitcoind` backend in the past that we never got to the bottom of (https://github.com/lightningnetwork/lnd/issues/3692), so there might be similar events that could trigger something like this on the other backends.
### Fix
A fix doesn't seem to be straightforward, as there is a de-sync between updating the filter and handling the received block. One thing that comes to mind is to accept blocks in the past being notified again, but it looks like that would violate some assumptions. A better solution could be to improve the Neutrino API such that one could atomically update the filter.
|
non_process
|
confirmation notification might be missed if registered at tip in the neutrino chainnotifier there is a slight chance that a confirmation notification gets missed if the block that confirms it arrive at the exact same time it gets gets even worse by the fact that the height gets committed to the height hint cache resulting in any subsequent rescan starting from a block after the one where it actually got confirmed i suspect this is the cause of some of the integration test flakes we are seeing for the neutrino backend scenario the client registers for confirmation of a given txid the request is registered with the txnotifier while this is happening the block at which the transaction is confirmed gets mined and notified by neutrino since we hadn t yet registered the transaction with the filtered chain view this block doesn t have the transaction included the block is connected at tip since the tx is not found in the filtered block the height hint for the conf notification is updated now the filter is updated with the new tx since the backend was told to rewind back to the height where the tx was registered with the txnotifier the block where it confirmed will be notified again but will this time be ignored since it is not at the expected next height this causes the confirmation to go undetected other backends afaik this seems to not be a problem on other backends since we don t filter the blocks there so the transaction will be found the first time we receive the block however we ve seen similar symptoms with the bitcoind backend in the past that we never got to the bottom of so there might be similar events that could trigger something like this on the other backends fix fix doesn t seem to be straight forward as there is a de sync between updating the filter and handling the received block one thing that comes to mind is to accept blocks in the past being notified again but looks like that would violate some assumptions a better solution could be to improve on the neutrino api such that one could atomically update the filter
| 0
|
17,981
| 24,001,104,763
|
IssuesEvent
|
2022-09-14 11:31:58
|
prometheus-community/windows_exporter
|
https://api.github.com/repos/prometheus-community/windows_exporter
|
closed
|
Process Id - Flags for Per Process metrics.
|
question collector/process
|
Hi,
How would I set the exporter to include the process ID? I'm currently starting the collector via cmd input, but the output only includes the process name.
windows_netframework_clrloading_loader_heap_size_bytes{process="powershell"} 2.502656e+06
"C:\wmi_exporter\wmi_exporter.exe"
--log.format logger:eventlog?name=wmi_exporter
--collectors.enabled
cpu,
cs,
logical_disk,
memory,
net,
os,
process,
service,
system,
tcp,
textfile,
netframework_clrexceptions,
netframework_clrinterop,
netframework_clrjit,
netframework_clrloading,
netframework_clrlocksandthreads,
netframework_clrmemory,
netframework_clrremoting,
netframework_clrsecurity
--telemetry.addr :9182
--collector.process.whitelist="MyService.+|MyOtherService.+"
--collector.textfile.directory D:\Data\PrometheusMetrics \
--collector.service.services-where="Name='MyService'"
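For context, the netframework_* metrics only carry a `process` label; per-process id information, where available, comes from the process collector's own metrics. A hedged Python sketch that inspects which labels a series actually carries, via the standard Prometheus HTTP query API (the metric name below is a guess — check the exporter's /metrics page for what your version really exposes):
```python
# Sketch: inspect which labels (e.g. process_id) a metric actually carries.
# Assumes a Prometheus server at localhost:9090 scraping the exporter; the
# metric name below is an assumption -- verify it on the exporter's /metrics.
import json
import urllib.parse
import urllib.request

query = 'windows_process_cpu_time_total{process=~"MyService.*"}'
url = "http://localhost:9090/api/v1/query?" + urllib.parse.urlencode({"query": query})

with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

for series in payload["data"]["result"]:
    print(series["metric"])   # full label set, including process_id if exported
```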
|
1.0
|
Process Id - Flags for Per Process metrics. - Hi,
How would I set the exporter to include the process ID? I'm currently starting the collector via cmd input, but the output only includes the process name.
windows_netframework_clrloading_loader_heap_size_bytes{process="powershell"} 2.502656e+06
"C:\wmi_exporter\wmi_exporter.exe"
--log.format logger:eventlog?name=wmi_exporter
--collectors.enabled
cpu,
cs,
logical_disk,
memory,
net,
os,
process,
service,
system,
tcp,
textfile,
netframework_clrexceptions,
netframework_clrinterop,
netframework_clrjit,
netframework_clrloading,
netframework_clrlocksandthreads,
netframework_clrmemory,
netframework_clrremoting,
netframework_clrsecurity
--telemetry.addr :9182
--collector.process.whitelist="MyService.+|MyOtherService.+"
--collector.textfile.directory D:\Data\PrometheusMetrics \
--collector.service.services-where="Name='MyService'"
|
process
|
process id flags for per process metrics hi how would i set the exporter to include the process id i m currently starting up the collector via cmd input but the output is only including the processname windows netframework clrloading loader heap size bytes process powershell c wmi exporter wmi exporter exe log format logger eventlog name wmi exporter collectors enabled cpu cs logical disk memory net os process service system tcp textfile netframework clrexceptions netframework clrinterop netframework clrjit netframework clrloading netframework clrlocksandthreads netframework clrmemory netframework clrremoting netframework clrsecurity telemetry addr collector process whitelist myservice myotherservice collector textfile directory d data prometheusmetrics collector service services where name myservice
| 1
|
67,638
| 27,980,490,695
|
IssuesEvent
|
2023-03-26 04:21:35
|
shin6949/company-site-uptime
|
https://api.github.com/repos/shin6949/company-site-uptime
|
closed
|
🛑 Community Service is down
|
status community-service
|
In [`aad390b`](https://github.com/shin6949/company-site-uptime/commit/aad390b3c60d870bfa1a186107371b123fb3e990), Community Service ($COMMUNITY_URL) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
1.0
|
🛑 Community Service is down - In [`aad390b`](https://github.com/shin6949/company-site-uptime/commit/aad390b3c60d870bfa1a186107371b123fb3e990), Community Service ($COMMUNITY_URL) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
non_process
|
🛑 community service is down in community service community url was down http code response time ms
| 0
|
101,673
| 11,254,437,078
|
IssuesEvent
|
2020-01-11 23:43:15
|
egc-articuno/decide
|
https://api.github.com/repos/egc-articuno/decide
|
opened
|
[FEATURE] Incident Management
|
documentation enhancement voting
|
Describe the incident management process you have followed in the project. You should also link parts of your project where it is evident that you have followed that process.
|
1.0
|
[FEATURE] Incident Management - Describe the incident management process you have followed in the project. You should also link parts of your project where it is evident that you have followed that process.
|
non_process
|
incident management describe the incident management process you have followed in the project you should also link parts of your project where it is evident that you have followed that process
| 0
|
22,621
| 31,844,744,613
|
IssuesEvent
|
2023-09-14 19:00:13
|
USGS-WiM/StreamStats
|
https://api.github.com/repos/USGS-WiM/StreamStats
|
opened
|
Error: Cannot read properties of undefined (reading 'id')
|
bug Batch Processor
|
I am seeing this error:

When I click on this link in the email I received when my batch was completed: https://dev.streamstats.usgs.gov/ss?BP=batchStatus&email=amedenblik@usgs.gov
|
1.0
|
Error: Cannot read properties of undefined (reading 'id') - I am seeing this error:

When I click on this link in the email I received when my batch was completed: https://dev.streamstats.usgs.gov/ss?BP=batchStatus&email=amedenblik@usgs.gov
|
process
|
error cannot read properties of undefined reading id i am seeing this error when i click on this link in the email i received when my batch was completed
| 1
|
5,682
| 8,558,790,001
|
IssuesEvent
|
2018-11-08 19:18:11
|
easy-software-ufal/annotations_repos
|
https://api.github.com/repos/easy-software-ufal/annotations_repos
|
opened
|
json-api-dotnet/JsonApiDotNetCore Inconsistent casing for sort and filter queries
|
ADA C# test wrong processing
|
Issue: `https://github.com/json-api-dotnet/JsonApiDotNetCore/issues/209`
PR: `https://github.com/json-api-dotnet/JsonApiDotNetCore/pull/196`
|
1.0
|
json-api-dotnet/JsonApiDotNetCore Inconsistent casing for sort and filter queries - Issue: `https://github.com/json-api-dotnet/JsonApiDotNetCore/issues/209`
PR: `https://github.com/json-api-dotnet/JsonApiDotNetCore/pull/196`
|
process
|
json api dotnet jsonapidotnetcore inconsistent casing for sort and filter queries issue pr
| 1
|
556,003
| 16,472,929,322
|
IssuesEvent
|
2021-05-23 19:28:06
|
Team-uMigrate/umigrate
|
https://api.github.com/repos/Team-uMigrate/umigrate
|
opened
|
App: Show message times in chat rooms
|
medium medium priority
|
Blocked by ticket #430.
As shown [here](https://www.figma.com/file/TsZkT7eKQ48zVAOazCbHBC/UI%2FUX-uMigrate-Project?node-id=1042%3A6), when a user swipes to the left in a chat room, they should be able to see the time each message was sent. When they lift their finger, things go back to normal again.
|
1.0
|
App: Show message times in chat rooms - Blocked by ticket #430.
As shown [here](https://www.figma.com/file/TsZkT7eKQ48zVAOazCbHBC/UI%2FUX-uMigrate-Project?node-id=1042%3A6), when a user swipes to the left in a chat room, they should be able to see the time each message was sent. When they lift their finger, things go back to normal again.
|
non_process
|
app show message times in chat rooms blocked by ticket as shown when a user swipes to the left in a chat room they should be able to see the time each message was sent when they lift their finger things go back to normal again
| 0
|
314,229
| 26,985,583,937
|
IssuesEvent
|
2023-02-09 15:55:47
|
US-EPA-CAMD/easey-testing
|
https://api.github.com/repos/US-EPA-CAMD/easey-testing
|
opened
|
Import Daily Calibration From file Test Automation v1.0
|
Test Automation
|
## Context
Create a Test Automation Script to Import Emissions Daily Calibration Data
Log in to ECMPS
Navigate to the Emissions Module
Select a config
Check out a config
Select Import
Import From file
Import the clean emissions file (the system returns a success message)
Under view template select daily calibration
Apply filters for the imported data (the system should display a data table of the imported daily calibration data)
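As a starting point, the steps above map naturally onto a browser-automation script. A minimal Selenium sketch in Python (every URL and locator below is a placeholder, since the actual ECMPS selectors are not given here):
```python
# Hedged sketch of the manual steps as Selenium actions.
# Every URL and locator here is hypothetical; replace with the real ECMPS ones.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://ecmps.example.gov/login")        # placeholder URL

driver.find_element(By.ID, "username").send_keys("tester")   # log in
driver.find_element(By.ID, "password").send_keys("secret")
driver.find_element(By.ID, "login-button").click()

driver.find_element(By.LINK_TEXT, "Emissions").click()       # open the module
driver.find_element(By.LINK_TEXT, "Import").click()          # Import > From file
driver.find_element(By.ID, "file-input").send_keys("/path/to/clean_emissions.xml")

assert "success" in driver.page_source.lower()       # system returns success
driver.quit()
```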
|
1.0
|
Import Daily Calibration From file Test Automation v1.0 - ## Context
Create a Test Automation Script to Import Emissions Daily Calibration Data
Log in to ECMPS
Navigate to the Emissions Module
Select a config
Check out a config
Select Import
Import From file
Import the clean emissions file (the system returns a success message)
Under view template select daily calibration
Apply filters for the imported data (the system should display a data table of the imported daily calibration data)
|
non_process
|
import daily calibration from file test automation context create a test automation script to import emissions daily calibration data log in to ecmps naigate to the emissions module select a config check out a config select import import from file import the clean emissions file the system returns a success message under view template select daily calibration apply filters for the imported data the system should display a data table of the imported daily calibration data
| 0
|
1,693
| 2,659,877,474
|
IssuesEvent
|
2015-03-19 00:09:21
|
brunobuzzi/OrbeonBridgeForGemStone
|
https://api.github.com/repos/brunobuzzi/OrbeonBridgeForGemStone
|
closed
|
Make all components send #answer: before calling other component
|
code improvement GUI
|
In order to close all rendered components, make the component send "answer: true" before calling another component. This avoids unclosed components in the Web Server.
|
1.0
|
Make all components send #answer: before calling other component - In order to close all rendered components, make the component send "answer: true" before calling another component. This avoids unclosed components in the Web Server.
|
non_process
|
make all components send answer before calling other component in order to close all rendered components make the component send answer true before calling other component this is to avoid unclosed component in the web server
| 0
|
693,339
| 23,772,871,704
|
IssuesEvent
|
2022-09-01 17:57:42
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Expiry of JS cookie is capped at 6 months instead of 7 days
|
privacy priority/P4 QA/Yes release-notes/include regression feature/cookies OS/Desktop
|
In #3443, we capped the lifetime of cookies set in JS to 7 days and all other cookies to 6 months. Right now, JS cookies are capped at 6 months when tested using the test plan on https://github.com/brave/brave-core/pull/1905.
I'm not sure when this regression was introduced.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Open <https://fmarier.github.io/brave-testing/document-cookie.html>.
2. Click on *Normal cookie using `Max-Age`*.
3. Open the *Application* tab in the devtools.
4. Click on *Storage*, then *Cookies* in the sidebar.
5. Click on `https://fmarier.github.io` and look at the expiry column.
6. Open `brave://settings/cookies/detail?site=fmarier.github.io` and check the expiry there too.
## Actual result:
The expiry is now + 6 months.
## Expected result:
The expiry should be now + 7 days.
## Reproduces how often:
Always
## Brave version (brave://version info)
```
Brave 1.23.51 Chromium: 89.0.4389.105 (Official Build) beta (64-bit)
Revision 14f44e21a9d539cd49c72468a29bfca4fa43f710-refs/branch-heads/4389_90@{#7}
OS Linux
```
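The intended policy from #3443 is easy to state as a pure function. A small Python sketch of what the cap *should* compute (my paraphrase of the rule, not Brave source code):
```python
# Sketch of the intended rule from #3443 (paraphrased, not Brave code):
# JS-set cookies capped at 7 days, everything else at ~6 months (180 days).
from datetime import datetime, timedelta

def capped_expiry(requested, set_from_js, now):
    cap = timedelta(days=7) if set_from_js else timedelta(days=180)
    return min(requested, now + cap)

now = datetime(2021, 4, 1)
one_year = now + timedelta(days=365)
print(capped_expiry(one_year, set_from_js=True, now=now))   # 2021-04-08 (7 days)
print(capped_expiry(one_year, set_from_js=False, now=now))  # 2021-09-28 (~6 months)
```
The bug report amounts to the `set_from_js=True` branch behaving like the `False` branch.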
|
1.0
|
Expiry of JS cookie is capped at 6 months instead of 7 days - In #3443, we capped the lifetime of cookies set in JS to 7 days and all other cookies to 6 months. Right now, JS cookies are capped at 6 months when tested using the test plan on https://github.com/brave/brave-core/pull/1905.
I'm not sure when this regression was introduced.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Open <https://fmarier.github.io/brave-testing/document-cookie.html>.
2. Click on *Normal cookie using `Max-Age`*.
3. Open the *Application* tab in the devtools.
4. Click on *Storage*, then *Cookies* in the sidebar.
5. Click on `https://fmarier.github.io` and look at the expiry column.
6. Open `brave://settings/cookies/detail?site=fmarier.github.io` and check the expiry there too.
## Actual result:
The expiry is now + 6 months.
## Expected result:
The expiry should be now + 7 days.
## Reproduces how often:
Always
## Brave version (brave://version info)
```
Brave 1.23.51 Chromium: 89.0.4389.105 (Official Build) beta (64-bit)
Revision 14f44e21a9d539cd49c72468a29bfca4fa43f710-refs/branch-heads/4389_90@{#7}
OS Linux
```
|
non_process
|
expiry of js cookie is capped at months instead of days in we capped the lifetime of cookies set in js to days and all other cookies to months right now js cookies are capped at months when tested using the test plan on i m not sure when this regression was introduced steps to reproduce open click on normal cookie using max age open the application tab in the devtools click on storage then cookies in the sidebar click on and look at the expiry column open brave settings cookies detail site fmarier github io and check the expiry there too actual result the expiry is now months expected result the expiry should be now days reproduces how often always brave version brave version info brave chromium official build beta bit revision refs branch heads os linux
| 0
|
236,201
| 7,747,357,165
|
IssuesEvent
|
2018-05-30 02:48:41
|
JDMCreator/LaTeXTableEditor
|
https://api.github.com/repos/JDMCreator/LaTeXTableEditor
|
closed
|
Failed LaTeX Import
|
Priority: Medium Type: bug
|
```latex
\begin{table}[h]
\centering
\caption{Experimentos para a Base Bacillus variando de 1 até 18 dados faltantes}
\label{resClassBacillusVar1_18}
\scalefont{0.8}
\begin{tabular}{lrrrrrrrrrr}
\toprule
\multirow{2}{*}{Experimento} & \multicolumn{2}{l}{\begin{tabular}[c]{@{}l@{}}Acurácia\\Balanceada \end{tabular}} & \multicolumn{2}{l}{Sensibilidade} & \multicolumn{2}{l}{Especificidade} & \multicolumn{2}{l}{Acurácia}& \multicolumn{2}{l}{RMSE} \\
\cmidrule[\heavyrulewidth]{2-2}\cmidrule[\heavyrulewidth]{3-11}
& \multicolumn{1}{l}{Média} & \multicolumn{1}{l}{DP} & \multicolumn{1}{l}{Média} & \multicolumn{1}{l}{DP} & \multicolumn{1}{l}{Média} & \multicolumn{1}{l}{DP} & \multicolumn{1}{l}{Média}& \multicolumn{1}{l}{DP}& \multicolumn{1}{l}{Média}& \multicolumn{1}{l}{DP} \\
\toprule
EBac.IZVar & 0,82 & 0,06 & 0,9816 & 0,005 & 0,66 & 0,13 & 0,4647& 0,1081& 0,72& 0,08 \\
EBac.IMVar & 0,68 & 0,06 & 0,9847 & 0,002 & 0,37 & 0,13 & 0,9839& 0,002& 0,11& 0,01 \\
EBac.kMVar & 0,84 & 0,08 & 0,9925 & 0,003 & 0,68 & 0,17 & 0,9903& 0,004& 0,08& 0,02 \\
EBac.kMdVar & 0,82 & 0,09 & 0,9916 & 0,003 & 0,65 & 0,17 & 0,9885& 0,004& 0,09& 0,02 \\
EBac.kPVar & 0,84 & 0,08 & 0,9925 & 0,003 & 0,68 & 0,17 & 0,9903& 0,004& 0,08& 0,02 \\
\bottomrule
\end{tabular}
\end{table}
```
|
1.0
|
Failed LaTeX Import - ```latex
\begin{table}[h]
\centering
\caption{Experimentos para a Base Bacillus variando de 1 até 18 dados faltantes}
\label{resClassBacillusVar1_18}
\scalefont{0.8}
\begin{tabular}{lrrrrrrrrrr}
\toprule
\multirow{2}{*}{Experimento} & \multicolumn{2}{l}{\begin{tabular}[c]{@{}l@{}}Acurácia\\Balanceada \end{tabular}} & \multicolumn{2}{l}{Sensibilidade} & \multicolumn{2}{l}{Especificidade} & \multicolumn{2}{l}{Acurácia}& \multicolumn{2}{l}{RMSE} \\
\cmidrule[\heavyrulewidth]{2-2}\cmidrule[\heavyrulewidth]{3-11}
& \multicolumn{1}{l}{Média} & \multicolumn{1}{l}{DP} & \multicolumn{1}{l}{Média} & \multicolumn{1}{l}{DP} & \multicolumn{1}{l}{Média} & \multicolumn{1}{l}{DP} & \multicolumn{1}{l}{Média}& \multicolumn{1}{l}{DP}& \multicolumn{1}{l}{Média}& \multicolumn{1}{l}{DP} \\
\toprule
EBac.IZVar & 0,82 & 0,06 & 0,9816 & 0,005 & 0,66 & 0,13 & 0,4647& 0,1081& 0,72& 0,08 \\
EBac.IMVar & 0,68 & 0,06 & 0,9847 & 0,002 & 0,37 & 0,13 & 0,9839& 0,002& 0,11& 0,01 \\
EBac.kMVar & 0,84 & 0,08 & 0,9925 & 0,003 & 0,68 & 0,17 & 0,9903& 0,004& 0,08& 0,02 \\
EBac.kMdVar & 0,82 & 0,09 & 0,9916 & 0,003 & 0,65 & 0,17 & 0,9885& 0,004& 0,09& 0,02 \\
EBac.kPVar & 0,84 & 0,08 & 0,9925 & 0,003 & 0,68 & 0,17 & 0,9903& 0,004& 0,08& 0,02 \\
\bottomrule
\end{tabular}
\end{table}
```
|
non_process
|
failed latex import latex begin table centering caption experimentos para a base bacillus variando de até dados faltantes label scalefont begin tabular lrrrrrrrrrr toprule multirow experimento multicolumn l begin tabular l acurácia balanceada end tabular multicolumn l sensibilidade multicolumn l especificidade multicolumn l acurácia multicolumn l rmse cmidrule cmidrule multicolumn l média multicolumn l dp multicolumn l média multicolumn l dp multicolumn l média multicolumn l dp multicolumn l média multicolumn l dp multicolumn l média multicolumn l dp toprule ebac izvar ebac imvar ebac kmvar ebac kmdvar ebac kpvar bottomrule end tabular end table
| 0
|
5,249
| 8,039,259,783
|
IssuesEvent
|
2018-07-30 17:49:12
|
GoogleCloudPlatform/google-cloud-python
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
|
closed
|
Logging: unexpected entries returned in system tests
|
api: logging flaky testing type: process
|
See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7029
```python
______________________ TestLogging.test_log_handler_async ______________________
self = <test_system.TestLogging testMethod=test_log_handler_async>
def test_log_handler_async(self):
LOG_MESSAGE = 'It was the worst of times'
handler = CloudLoggingHandler(Config.CLIENT)
# only create the logger to delete, hidden otherwise
logger = Config.CLIENT.logger(handler.name)
self.to_delete.append(logger)
cloud_logger = logging.getLogger(handler.name)
cloud_logger.addHandler(handler)
cloud_logger.warn(LOG_MESSAGE)
handler.flush()
entries = _list_entries(logger)
expected_payload = {
'message': LOG_MESSAGE,
'python_logger': handler.name
}
> self.assertEqual(len(entries), 1)
E AssertionError: 2 != 1
tests/system/test_system.py:247: AssertionError
```
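One way to de-flake this assertion is to count only the entries that match the expected payload, since a concurrent test run can write into the same logger. A hedged sketch of the change, reusing the names from the test above (`entry.payload` is an assumption about the entry object):
```python
# Sketch: tolerate unrelated log entries by filtering on the expected
# payload before asserting, instead of requiring exactly one entry overall.
# Assumes each returned entry exposes its payload as `entry.payload`.
matching = [e for e in entries if getattr(e, "payload", None) == expected_payload]
self.assertEqual(len(matching), 1)
```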
|
1.0
|
Logging: unexpected entries returned in system tests - See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7029
```python
______________________ TestLogging.test_log_handler_async ______________________
self = <test_system.TestLogging testMethod=test_log_handler_async>
def test_log_handler_async(self):
LOG_MESSAGE = 'It was the worst of times'
handler = CloudLoggingHandler(Config.CLIENT)
# only create the logger to delete, hidden otherwise
logger = Config.CLIENT.logger(handler.name)
self.to_delete.append(logger)
cloud_logger = logging.getLogger(handler.name)
cloud_logger.addHandler(handler)
cloud_logger.warn(LOG_MESSAGE)
handler.flush()
entries = _list_entries(logger)
expected_payload = {
'message': LOG_MESSAGE,
'python_logger': handler.name
}
> self.assertEqual(len(entries), 1)
E AssertionError: 2 != 1
tests/system/test_system.py:247: AssertionError
```
|
process
|
logging unexpected entries returned in system tests see python testlogging test log handler async self lt test system testlogging testmethod test log handler async gt def test log handler async self log message it was the worst of times handler cloudlogginghandler config client only create the logger to delete hidden otherwise logger config client logger handler name self to delete append logger cloud logger logging getlogger handler name cloud logger addhandler handler cloud logger warn log message handler flush entries list entries logger expected payload message log message python logger handler name gt self assertequal len entries e assertionerror tests system test system py assertionerror
| 1
|
4,478
| 7,342,869,257
|
IssuesEvent
|
2018-03-07 09:29:03
|
UKHomeOffice/dq-aws-transition
|
https://api.github.com/repos/UKHomeOffice/dq-aws-transition
|
reopened
|
Test End-to-End Job_55_SMM_ACL Wherescape Job in NotProd
|
DQ Data Pipeline Production SSM processing
|
Task Estimate: 3 hours
All tasks complete and expected files and data
- [x] End-to-End Job_55_SMM_ACL tested
- [x] Batch 1 data tested
- [x] Batch 2, 3 data tested
- [x] Batch 4 data tested
- [x] Job running in NotProd
|
1.0
|
Test End-to-End Job_55_SMM_ACL Wherescape Job in NotProd - Task Estimate: 3 hours
All tasks complete and expected files and data
- [x] End-to-End Job_55_SMM_ACL tested
- [x] Batch 1 data tested
- [x] Batch 2, 3 data tested
- [x] Batch 4 data tested
- [x] Job running in NotProd
|
process
|
test end to end job smm acl wherescape job in notprod task estimate hours all tasks complete and expected files and data end to end job smm acl tested batch data tested batch data tested batch data tested job running in notprod
| 1
|
13,551
| 16,093,492,132
|
IssuesEvent
|
2021-04-26 19:46:43
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
Excessive queries on Oracle can cause connection pool to get exhausted and never released
|
Database/Oracle Priority:P1 Querying/Processor Type:Bug
|
**Describe the bug**
Under specific conditions, it's possible to "overload" the connection pool to Oracle, which then leaves the connections on Oracle in `INACTIVE` status; Metabase then cannot reuse those connections and stops being able to query the database.
Likely caused by Metabase using its own connection pool, `c3p0`, while the Oracle driver also ships its own connection pool, `UCP`.
**To Reproduce**
1. Setup Oracle database
```
docker run --name oracle11g -d -p 1521:1521 -p 8080:8080 -e ORACLE_ALLOW_REMOTE=true wnameless/oracle-xe-11g-r2
```
2. Connect to Oracle in Metabase - host: `localhost` port: `1521` sid: `xe` username: `system` password: `oracle`
3. Create 20 unique long-running (60 seconds) questions - it's very important that each is a unique query, since it will otherwise add the queries to "queued" instead of "Queries in flight", which is what is needed to reproduce the problem.
```
Q1: select 01, dbms_pipe.receive_message(('a'),60), coalesce(null, null [[,{{filter}}]]) as filter from dual
Q2: select 02, dbms_pipe.receive_message(('a'),60), coalesce(null, null [[,{{filter}}]]) as filter from dual
Q3: ...
```
4. Add all questions to a dashboard (connect an "Other categories" filter for a more real-world scenario, since changing the filter while queries are in-flight is more likely than leaving the dashboard).
5. In Firefox change `network.http.max-persistent-connections-per-server` in about:config to `25` to simulate http/2
6. Now go and visit the dashboard - when everything is still loading (queries in flight), leave the dashboard, which should then result in a lot of lines like this, where it says "Queries in flight: 0" at the end.
```
POST /api/card/512/query 202 [ASYNC: canceled] 7.1 s (5 DB calls) App DB connections: 0/10 Jetty threads: 3/50 (5 idle, 0 queued) (82 total active threads) Queries in flight: 0 (0 queued)
```
7. Connect to Oracle server via non-Metabase with user: `sys` password: `oracle` role: SYSDBA and run this query to see connections, which will show 15 `ACTIVE` connections:
`SELECT sid, status, sql_exec_start, prev_exec_start, logon_time FROM v$session WHERE program='JDBC Thin Client';`
8. Even after the query running period (60 seconds), the only change will be that the connections on Oracle will change to `INACTIVE`, but they will not be released after 5/15 minutes (c3p0 timeout), and it's now not possible to make any queries in Metabase against the Oracle database - even a simple `select 1 from dual` will not run until Metabase has been restarted or the connections on Oracle have been force-closed.
9. Going to the dashboard and changing filter while queries are running, will just cause the "Queries in flight" counter to go up.
**Expected behavior**
Same behavior as other drivers.
**Information about your Metabase Installation:**
Tested 0.38.4 and 0.39.0, but it's likely been a problem since at least 0.35.0
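For intuition, the symptom matches a checkout that is never returned when the caller is cancelled mid-query. A toy Python model of that leak (illustrative only, not Metabase/c3p0 code):
```python
# Toy model (not Metabase/c3p0 code): a bounded pool leaks permits when
# callers are cancelled after checkout but never release the connection.
import threading

pool = threading.Semaphore(15)       # 15 connections, as in the report

def run_query(cancelled):
    pool.acquire()                   # connection checked out
    if cancelled:
        return                       # "[ASYNC: canceled]" path skips the release
    pool.release()                   # well-behaved callers return it

for _ in range(15):
    run_query(cancelled=True)        # 15 abandoned checkouts

print(pool.acquire(timeout=1))       # False: pool exhausted, like the INACTIVE sessions
```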
|
1.0
|
Excessive queries on Oracle can cause connection pool to get exhausted and never released - **Describe the bug**
Under specific conditions, it's possible to "overload" the connection pool to Oracle, which then leaves the connections on Oracle in `INACTIVE` status; Metabase then cannot reuse those connections and stops being able to query the database.
Likely caused by Metabase using its own connection pool, `c3p0`, while the Oracle driver also ships its own connection pool, `UCP`.
**To Reproduce**
1. Setup Oracle database
```
docker run --name oracle11g -d -p 1521:1521 -p 8080:8080 -e ORACLE_ALLOW_REMOTE=true wnameless/oracle-xe-11g-r2
```
2. Connect to Oracle in Metabase - host: `localhost` port: `1521` sid: `xe` username: `system` password: `oracle`
3. Create 20 unique long-running (60 seconds) questions - it's very important that each is a unique query, since it will otherwise add the queries to "queued" instead of "Queries in flight", which is what is needed to reproduce the problem.
```
Q1: select 01, dbms_pipe.receive_message(('a'),60), coalesce(null, null [[,{{filter}}]]) as filter from dual
Q2: select 02, dbms_pipe.receive_message(('a'),60), coalesce(null, null [[,{{filter}}]]) as filter from dual
Q3: ...
```
4. Add all questions to a dashboard (connect an "Other categories" filter for a more real-world scenario, since changing the filter while queries are in-flight is more likely than leaving the dashboard).
5. In Firefox change `network.http.max-persistent-connections-per-server` in about:config to `25` to simulate http/2
6. Now go and visit the dashboard - when everything is still loading (queries in flight), leave the dashboard, which should then result in a lot of lines like this, where it says "Queries in flight: 0" at the end.
```
POST /api/card/512/query 202 [ASYNC: canceled] 7.1 s (5 DB calls) App DB connections: 0/10 Jetty threads: 3/50 (5 idle, 0 queued) (82 total active threads) Queries in flight: 0 (0 queued)
```
7. Connect to Oracle server via non-Metabase with user: `sys` password: `oracle` role: SYSDBA and run this query to see connections, which will show 15 `ACTIVE` connections:
`SELECT sid, status, sql_exec_start, prev_exec_start, logon_time FROM v$session WHERE program='JDBC Thin Client';`
8. Even after the query running period (60 seconds), the only change will be that the connections on Oracle will change to `INACTIVE`, but they will not be released after 5/15 minutes (c3p0 timeout), and it's now not possible to make any queries in Metabase against the Oracle database - even a simple `select 1 from dual` will not run until Metabase has been restarted or the connections on Oracle have been force-closed.
9. Going to the dashboard and changing filter while queries are running, will just cause the "Queries in flight" counter to go up.
**Expected behavior**
Same behavior as other drivers.
**Information about your Metabase Installation:**
Tested 0.38.4 and 0.39.0, but it's likely been a problem since at least 0.35.0
|
process
|
excessive queries on oracle can cause connection pool to get exhausted and never released describe the bug under specific condition it s possible to overload the connection pool to oracle which then leaves the connections on oracle in inactive status but then metabase cannot reuse those connections and will stop being able to query the database likely caused by metabase using it s own connection pool but there s also a connection pool in oracle driver too ucp to reproduce setup oracle database docker run name d p p e oracle allow remote true wnameless oracle xe connect to oracle in metabase host localhost port sid xe username system password oracle create unique long running seconds questions it s very important that each is an unique query since it will otherwise add the queries to queued instead of queries in flight which is what is need to reproduce the problem select dbms pipe receive message a coalesce null null as filter from dual select dbms pipe receive message a coalesce null null as filter from dual add all questions to a dashboard connect a other categories filter for more real world scenario since changing the filter while queries are in flight is more likely than leaving the dashboard in firefox change network http max persistent connections per server in about config to to simulate http now go and visit the dashboard when everything is still loading queries in flight leave the dashboard which should then result in a lot of lines like this where it says queries in flight at the end post api card query s db calls app db connections jetty threads idle queued total active threads queries in flight queued connect to oracle server via non metabase with user sys password oracle role sysdba and run this query to see connections which will show active connections select sid status sql exec start prev exec start logon time from v session where program jdbc thin client even after the query running period seconds the only change will be that the connections on oracle will change to inactive but they will not be released after minutes timeout and it s now not possible to make any queries in metabase against the oracle database even a simple select from dual will not run until metabase has been restarted or the connections on oracle has been forced closed going to the dashboard and changing filter while queries are running will just cause the queries in flight counter to go up expected behavior same behavior as other drivers information about your metabase installation tested and but it s likely been a problem since at least
| 1
|
1,172
| 3,664,300,249
|
IssuesEvent
|
2016-02-19 11:00:39
|
dominikwilkowski/bronzies
|
https://api.github.com/repos/dominikwilkowski/bronzies
|
closed
|
New design
|
In process
|
Needs a design as there is none right now. Wireframes have been made by Dylan but still waiting for a proper design.
|
1.0
|
New design - Needs a design as there is none right now. Wireframes have been made by Dylan but still waiting for a proper design.
|
process
|
new design needs a design as there is none right now wireframes have been made by dylan but still waiting for a proper design
| 1
|
412,929
| 12,058,328,631
|
IssuesEvent
|
2020-04-15 17:16:07
|
hobbit-project/platform
|
https://api.github.com/repos/hobbit-project/platform
|
closed
|
Return DNS name (or similar) instead of Task ID
|
component: controller priority: high type: bug
|
## Description
At the moment, the platform controller returns the task ID of a created container. This is not helpful for a user since they cannot use it to connect to this container.
## Solution
- [x] Replace the task ID with something else that
- can be used to identify the container / task / service (and remove it if necessary)
- can be used to connect to the container
- [x] Test that a connection to the container based on the new identifier is working (e.g., a JUnit test would be good)
- [x] Change the other parts of the ContainerManagerImpl class which are based on the identifier (e.g., the stopping of containers)
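As a point of comparison, this is how it looks from the Docker SDK for Python: a service created with an explicit name is reachable at that name through Docker's embedded DNS on a shared overlay network, and the same name works for teardown (sketch; image and network names are placeholders):
```python
# Sketch: create a service with a known name and use that name -- not the
# task ID -- as the connectable identifier. Names here are placeholders.
import docker

client = docker.from_env()
service = client.services.create(
    "nginx:alpine",                 # placeholder image
    name="benchmark-component",     # DNS name on the overlay network
    networks=["hobbit"],            # placeholder pre-created overlay network
)
print(service.name)                 # return this to the caller...
service.remove()                    # ...and it also works for teardown
```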
|
1.0
|
Return DNS name (or similar) instead of Task ID - ## Description
At the moment, the platform controller returns the task ID of a created container. This is not helpful for a user since they cannot use it to connect to this container.
## Solution
- [x] Replace the task ID with something else that
- can be used to identify the container / task / service (and remove it if necessary)
- can be used to connect to the container
- [x] Test that a connection to the container based on the new identifier is working (e.g., a JUnit test would be good)
- [x] Change the other parts of the ContainerManagerImpl class which are based on the identifier (e.g., the stopping of containers)
|
non_process
|
return dns name or similar instead of task id description at the moment the platform controller returns the task id of a created container this is not helpful for a user since he can not use it to connect to this container solution replace the task id with something else that can be used to identify the container task service and remove it if necessary can be used to connect to the container test that a connection to the container based on the new identifier is working e g a junit test would be good change the other parts of the containermanagerimpl class which are based on the identifier e g the stopping of containers
| 0
|
21,088
| 28,041,945,920
|
IssuesEvent
|
2023-03-28 19:14:47
|
evidence-dev/evidence
|
https://api.github.com/repos/evidence-dev/evidence
|
closed
|
Begin adopting tailwind
|
dev-process
|
Make tailwind available in dev workspace and leverage TW classes in one component.
|
1.0
|
Begin adopting tailwind - Make tailwind available in dev workspace and leverage TW classes in one component.
|
process
|
begin adopting tailwind make tailwind available in dev workspace and leverage tw classes in one component
| 1
|
51,932
| 13,686,917,212
|
IssuesEvent
|
2020-09-30 09:22:04
|
quarkusio/quarkus
|
https://api.github.com/repos/quarkusio/quarkus
|
closed
|
When JWT token is expired Runtime exception is thrown
|
area/security area/smallrye kind/bug
|
It seems that OIDC extension throws Runtime exception if JWT token is expired.
```
java.lang.RuntimeException: Expired JWT token: exp <= now
at io.vertx.ext.jwt.JWT.isExpired(JWT.java:333)
at io.quarkus.oidc.runtime.OidcIdentityProvider.validateTokenWithoutOidcServer(OidcIdentityProvider.java:215)
at io.quarkus.oidc.runtime.OidcIdentityProvider.authenticate(OidcIdentityProvider.java:70)
at io.quarkus.oidc.runtime.OidcIdentityProvider.access$000(OidcIdentityProvider.java:33)
at io.quarkus.oidc.runtime.OidcIdentityProvider$1.get(OidcIdentityProvider.java:59)
at io.quarkus.oidc.runtime.OidcIdentityProvider$1.get(OidcIdentityProvider.java:47)
at io.smallrye.mutiny.operators.UniCreateFromDeferredSupplier.subscribing(UniCreateFromDeferredSupplier.java:24)
at io.smallrye.mutiny.operators.UniSerializedSubscriber.subscribe(UniSerializedSubscriber.java:43)
at io.smallrye.mutiny.operators.UniSerializedSubscriber.subscribe(UniSerializedSubscriber.java:38)
at io.smallrye.mutiny.operators.AbstractUni.subscribe(AbstractUni.java:30)
at io.smallrye.mutiny.operators.UniOnItemTransformToUni.subscribing(UniOnItemTransformToUni.java:65)
at io.smallrye.mutiny.operators.UniSerializedSubscriber.subscribe(UniSerializedSubscriber.java:43)
at io.smallrye.mutiny.operators.UniSerializedSubscriber.subscribe(UniSerializedSubscriber.java:38)
at io.smallrye.mutiny.operators.AbstractUni.subscribe(AbstractUni.java:30)
at io.smallrye.mutiny.operators.UniCache.lambda$subscribing$0(UniCache.java:37)
at io.smallrye.mutiny.operators.UniCache.subscribing(UniCache.java:65)
at io.smallrye.mutiny.operators.UniSerializedSubscriber.subscribe(UniSerializedSubscriber.java:43)
at io.smallrye.mutiny.operators.UniSerializedSubscriber.subscribe(UniSerializedSubscriber.java:38)
at io.smallrye.mutiny.operators.AbstractUni.subscribe(AbstractUni.java:30)
at io.smallrye.mutiny.operators.UniOnItemTransformToUni.subscribing(UniOnItemTransformToUni.java:65)
```
Problem is, that `io.vertx.ext.jwt.JWT.isExpired()` always throws exception if token is expired.
Unfortunately I was not able to produce a simple reproducer, but the issue seems obvious and easy to fix. It seems to be here: https://github.com/quarkusio/quarkus/blob/a77842ef3ed94bb461a676b9398d7c1562c8ec8c/extensions/oidc/runtime/src/main/java/io/quarkus/oidc/runtime/OidcIdentityProvider.java#L240
Tested with Quarkus 1.8.1.Final.
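The expected behaviour is that an expired token yields an authentication failure (401), not an unhandled RuntimeException. A language-neutral sketch of the check in Python (illustrative; the real fix belongs in the linked OidcIdentityProvider code):
```python
# Sketch: treat an expired token as an auth failure, not a crash.
# Pure-stdlib decode of the JWT payload; no signature verification here.
import base64, json, time

def is_expired(jwt_token, now=None):
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("exp", float("inf")) <= (now or time.time())  # "exp <= now"

def authenticate(jwt_token):
    if is_expired(jwt_token):
        return {"status": 401, "reason": "token expired"}  # no RuntimeException
    return {"status": 200}
```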
|
True
|
When JWT token is expired Runtime exception is thrown - It seems that OIDC extension throws Runtime exception if JWT token is expired.
```
java.lang.RuntimeException: Expired JWT token: exp <= now
at io.vertx.ext.jwt.JWT.isExpired(JWT.java:333)
at io.quarkus.oidc.runtime.OidcIdentityProvider.validateTokenWithoutOidcServer(OidcIdentityProvider.java:215)
at io.quarkus.oidc.runtime.OidcIdentityProvider.authenticate(OidcIdentityProvider.java:70)
at io.quarkus.oidc.runtime.OidcIdentityProvider.access$000(OidcIdentityProvider.java:33)
at io.quarkus.oidc.runtime.OidcIdentityProvider$1.get(OidcIdentityProvider.java:59)
at io.quarkus.oidc.runtime.OidcIdentityProvider$1.get(OidcIdentityProvider.java:47)
at io.smallrye.mutiny.operators.UniCreateFromDeferredSupplier.subscribing(UniCreateFromDeferredSupplier.java:24)
at io.smallrye.mutiny.operators.UniSerializedSubscriber.subscribe(UniSerializedSubscriber.java:43)
at io.smallrye.mutiny.operators.UniSerializedSubscriber.subscribe(UniSerializedSubscriber.java:38)
at io.smallrye.mutiny.operators.AbstractUni.subscribe(AbstractUni.java:30)
at io.smallrye.mutiny.operators.UniOnItemTransformToUni.subscribing(UniOnItemTransformToUni.java:65)
at io.smallrye.mutiny.operators.UniSerializedSubscriber.subscribe(UniSerializedSubscriber.java:43)
at io.smallrye.mutiny.operators.UniSerializedSubscriber.subscribe(UniSerializedSubscriber.java:38)
at io.smallrye.mutiny.operators.AbstractUni.subscribe(AbstractUni.java:30)
at io.smallrye.mutiny.operators.UniCache.lambda$subscribing$0(UniCache.java:37)
at io.smallrye.mutiny.operators.UniCache.subscribing(UniCache.java:65)
at io.smallrye.mutiny.operators.UniSerializedSubscriber.subscribe(UniSerializedSubscriber.java:43)
at io.smallrye.mutiny.operators.UniSerializedSubscriber.subscribe(UniSerializedSubscriber.java:38)
at io.smallrye.mutiny.operators.AbstractUni.subscribe(AbstractUni.java:30)
at io.smallrye.mutiny.operators.UniOnItemTransformToUni.subscribing(UniOnItemTransformToUni.java:65)
```
Problem is, that `io.vertx.ext.jwt.JWT.isExpired()` always throws exception if token is expired.
Unfortunately I was not able to produce a simple reproducer, but the issue seems obvious and easy to fix. It seems to be here: https://github.com/quarkusio/quarkus/blob/a77842ef3ed94bb461a676b9398d7c1562c8ec8c/extensions/oidc/runtime/src/main/java/io/quarkus/oidc/runtime/OidcIdentityProvider.java#L240
Tested with Quarkus 1.8.1.Final.
|
non_process
|
when jwt token is expired runtime exception is thrown it seems that oidc extension throws runtime exception if jwt token is expired java lang runtimeexception expired jwt token exp now at io vertx ext jwt jwt isexpired jwt java at io quarkus oidc runtime oidcidentityprovider validatetokenwithoutoidcserver oidcidentityprovider java at io quarkus oidc runtime oidcidentityprovider authenticate oidcidentityprovider java at io quarkus oidc runtime oidcidentityprovider access oidcidentityprovider java at io quarkus oidc runtime oidcidentityprovider get oidcidentityprovider java at io quarkus oidc runtime oidcidentityprovider get oidcidentityprovider java at io smallrye mutiny operators unicreatefromdeferredsupplier subscribing unicreatefromdeferredsupplier java at io smallrye mutiny operators uniserializedsubscriber subscribe uniserializedsubscriber java at io smallrye mutiny operators uniserializedsubscriber subscribe uniserializedsubscriber java at io smallrye mutiny operators abstractuni subscribe abstractuni java at io smallrye mutiny operators unionitemtransformtouni subscribing unionitemtransformtouni java at io smallrye mutiny operators uniserializedsubscriber subscribe uniserializedsubscriber java at io smallrye mutiny operators uniserializedsubscriber subscribe uniserializedsubscriber java at io smallrye mutiny operators abstractuni subscribe abstractuni java at io smallrye mutiny operators unicache lambda subscribing unicache java at io smallrye mutiny operators unicache subscribing unicache java at io smallrye mutiny operators uniserializedsubscriber subscribe uniserializedsubscriber java at io smallrye mutiny operators uniserializedsubscriber subscribe uniserializedsubscriber java at io smallrye mutiny operators abstractuni subscribe abstractuni java at io smallrye mutiny operators unionitemtransformtouni subscribing unionitemtransformtouni java problem is that io vertx ext jwt jwt isexpired always throws exception if token is expired unfortunately i was not able to produce a simple reproducible but it seems that issue is obvious and easy to fix issue seems to be here tested with quarkus final
| 0
|
2,947
| 5,924,200,544
|
IssuesEvent
|
2017-05-23 09:51:27
|
zotero/zotero
|
https://api.github.com/repos/zotero/zotero
|
opened
|
Display progress window for all document updates
|
Word Processor Integration
|
Currently the most glaring one is refresh, but this also applies to style changes.
Current progress-bar is part of quick format dialog, so this is related to #1223
|
1.0
|
Display progress window for all document updates - Currently the most glaring one is refresh, but this also applies to style changes.
Current progress-bar is part of quick format dialog, so this is related to #1223
|
process
|
display progress window for all document updates currently the most glaring one is refresh but also applies to style changes current progress bar is part of quick format dialog so this is related to
| 1
|
4,833
| 7,726,052,614
|
IssuesEvent
|
2018-05-24 19:57:57
|
kaching-hq/Privacy-and-Security
|
https://api.github.com/repos/kaching-hq/Privacy-and-Security
|
opened
|
Send DPAs to clients
|
Processes
|
Sent:
- [ ] Digital Inn
- [ ] MacForum
Signed:
- [ ] Digital Inn
- [ ] MacForum
|
1.0
|
Send DPAs to clients - Sent:
- [ ] Digital Inn
- [ ] MacForum
Signed:
- [ ] Digital Inn
- [ ] MacForum
|
process
|
send dpas to clients sent digital inn macforum signed digital inn macforum
| 1
|
8,731
| 11,863,357,364
|
IssuesEvent
|
2020-03-25 19:34:13
|
GSA/CHRISUpdate
|
https://api.github.com/repos/GSA/CHRISUpdate
|
opened
|
Phone Number Extension Delimiter "X" Trimmed
|
Bug: Data Bug: Functional Topic: Record Processing Type: Bug
|
**Discovered By**: David Lenz
**Discovery Date**: March 25, 2019
**Discovery Environment**: Production
**Severity**: 3
**Description**: When a phone number in the daily HR Links update file containing an extension is imported into GCIMS, it is stripped of the "X" that separates the extension from the main phone number. For example, a phone number reading +001.6584599981x5473 in the HR Links update file would be imported into GCIMS as +001.65845999815473. The "X" should not be trimmed from such phone numbers when imported.
Looks to be an issue similar to #138.
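A small parsing sketch showing the invariant the import should preserve (the pattern is mine, matching only the example format in this report):
```python
# Sketch: parse "+001.6584599981x5473" so the extension delimiter "x"
# is preserved instead of silently dropped.
import re

PHONE = re.compile(r"^(?P<number>\+\d{3}\.\d+?)(?:x(?P<ext>\d+))?$")

m = PHONE.match("+001.6584599981x5473")
print(m.group("number"), m.group("ext"))   # -> +001.6584599981 5473
```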
|
1.0
|
Phone Number Extension Delimiter "X" Trimmed - **Discovered By**: David Lenz
**Discovery Date**: March 25, 2019
**Discovery Environment**: Production
**Severity**: 3
**Description**: When a phone number in the daily HR Links update file containing an extension is imported into GCIMS, it is stripped of the "X" that separates the extension from the main phone number. For example, a phone number reading +001.6584599981x5473 in the HR Links update file would be imported into GCIMS as +001.65845999815473. The "X" should not be trimmed from such phone numbers when imported.
Looks to be an issue similar to #138.
|
process
|
phone number extension delimiter x trimmed discovered by david lenz discovery date march discovery environment production severity description when a phone number in the daily hr links update file containing an extension is imported into gcims it is stripped of the x that separates the extension from the main phone number for example the a phone number reading in the hr links update file would be imported into gcims as the x should not be trimmed from such phone numbers when imported looks to be an issue similar to
| 1
|
3,106
| 6,122,490,092
|
IssuesEvent
|
2017-06-23 00:00:35
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Rancher server has poor performing regex's under certain conditions
|
area/cattle kind/bug kind/performance priority/-1 process/cherry-pick process/cherry-picked status/resolved status/to-test version/1.6
|
We occasionally notice that our rancher-server host is eating up all of its CPU. Upon investigation, the Java process inside the container had a few threads with the same stack traces:
https://gist.github.com/jdonofrio728/4afab9935a4eae6352bbd9351f422e32
A heap dump showed that the regex being applied is:
^((?:(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])(?:(?:\Q.\E(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]))+)?(?:\Q:\E[0-9]+)?\Q/\E)?[a-z0-9]+(?:(?:(?:[._]|__|[-]*)[a-z0-9]+)+)?(?:(?:\Q/\E[a-z0-9]+(?:(?:(?:[._]|__|[-]*)[a-z0-9]+)+)?)+)?)(?:\Q:\E([\w][\w.-]{0,127}))?(?:\Q@\E([A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][0-9A-Fa-f]{32,}))?$
and the string object that is being matched is:
docker-virtual.ngp-artifactory/react-boilerplate-ui:
This is causing us to have to kill and restart the server every time it gets into this state.
**Rancher Versions:**
Server: 1.3.1
healthcheck:v0.2.0
ipsec:holder
network-services:v0.4.0
scheduler:v0.5.1
kubernetes (if applicable):
**Docker Version:**
1.13
**OS and where are the hosts located? (cloud, bare metal, etc):**
RHEL 7.2 vmware virtual machines
**Setup Details: (single node rancher vs. HA rancher, internal DB vs. external DB)**
HA Rancher with internal DB, 4 environments each with 3 nodes.
**Environment Type: (Cattle/Kubernetes/Swarm/Mesos)**
Cattle
**Steps to Reproduce:**
I'm not entirely sure how to reproduce this using Rancher, as we have not tracked down which call triggers this behavior. However, I can easily reproduce this issue in a Groovy script using the latest build of the cattle-docker-common.jar file in my classpath:
```
import io.cattle.platform.docker.client.DockerImage
startTime = System.currentTimeMillis()
image = DockerImage.parse("docker-virtual.ngp-artifactory/react-boilerplate-ui:")
endTime = System.currentTimeMillis()
time = endTime - startTime
print "Time taken to parse: ${time}"
```
**Results:**
The results of my mini test show that it takes an indefinite amount of time to parse the string using the coded regex. Simple docker image names parse quickly; however, the string in our environment spins indefinitely.
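The thread stack (matcher loops pinned on the CPU) is the signature of catastrophic backtracking. The effect is easy to reproduce in any backtracking engine; here is a minimal, unrelated Python demonstration of the same failure class (deliberately not the DockerImage regex itself, which uses Java-only \Q...\E quoting):
```python
# Classic catastrophic-backtracking demo: nested quantifiers force the
# engine to explore ~2^n paths before failing -- the same failure class
# as the pinned threads above.
import re, time

pattern = re.compile(r"^(a+)+$")
subject = "a" * 26 + "!"          # trailing "!" guarantees a non-match

start = time.time()
pattern.match(subject)            # runtime roughly doubles per extra "a"
print(f"{time.time() - start:.1f}s for {len(subject)} chars")
```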
|
2.0
|
Rancher server has poor performing regex's under certain conditions - We occasionally notice that our rancher-server host is eating up all of its CPU. Upon investigation, the Java process inside the container had a few threads with the same stack traces:
https://gist.github.com/jdonofrio728/4afab9935a4eae6352bbd9351f422e32
A heap dump showed that the regex being applied is:
^((?:(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])(?:(?:\Q.\E(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]))+)?(?:\Q:\E[0-9]+)?\Q/\E)?[a-z0-9]+(?:(?:(?:[._]|__|[-]*)[a-z0-9]+)+)?(?:(?:\Q/\E[a-z0-9]+(?:(?:(?:[._]|__|[-]*)[a-z0-9]+)+)?)+)?)(?:\Q:\E([\w][\w.-]{0,127}))?(?:\Q@\E([A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][0-9A-Fa-f]{32,}))?$
and the string object that is being matched is:
docker-virtual.ngp-artifactory/react-boilerplate-ui:
This is causing us to have to kill and restart the server every time it gets into this state.
**Rancher Versions:**
Server: 1.3.1
healthcheck:v0.2.0
ipsec:holder
network-services:v0.4.0
scheduler:v0.5.1
kubernetes (if applicable):
**Docker Version:**
1.13
**OS and where are the hosts located? (cloud, bare metal, etc):**
RHEL 7.2 vmware virtual machines
**Setup Details: (single node rancher vs. HA rancher, internal DB vs. external DB)**
HA Rancher with internal DB, 4 environments each with 3 nodes.
**Environment Type: (Cattle/Kubernetes/Swarm/Mesos)**
Cattle
**Steps to Reproduce:**
I'm not entirely sure how to reproduce this using Rancher, as we have not tracked down which call triggers this behavior. However, I can easily reproduce this issue in a Groovy script using the latest build of the cattle-docker-common.jar file in my classpath:
```
import io.cattle.platform.docker.client.DockerImage
startTime = System.currentTimeMillis()
image = DockerImage.parse("docker-virtual.ngp-artifactory/react-boilerplate-ui:")
endTime = System.currentTimeMillis()
time = endTime - startTime
print "Time taken to parse: ${time}"
```
**Results:**
The results of my mini test show that it takes an indefinite amount of time to parse the string using the coded regex. Simple docker image names parse quickly; however, the string in our environment spins indefinitely.
|
process
|
rancher server has poor performing regex s under certain conditions we occasionally notice the our rancher server host is eating up all of its cpu upon investigation the java process inside the container had a few threads with the same stack traces a heap dump showed that the regex being applied is q e q e q e q e q e q e and the string object that is being matched is docker virtual ngp artifactory react boilerplate ui this is causing us to have to kill and restart docker server everytime it gets in this state rancher versions server healthcheck ipsec holder network services scheduler kubernetes if applicable docker version os and where are the hosts located cloud bare metal etc rhel vmware virtual machines setup details single node rancher vs ha rancher internal db vs external db ha rancher with internal db environments each with nodes environment type cattle kubernetes swarm mesos cattle steps to reproduce i m not entirely sure how to reproduce on using rancher as we have not tracked down which call to triggers this behavior however i can easily reproduce this issue in a groovy script using the latest build of cattle docker common jar file in my classpath import io cattle platform docker client dockerimage starttime system currenttimemillis image dockerimage parse docker virtual ngp artifactory react boilerplate ui endtime system currenttimemillis time endtime starttime print time taken to parse time results the results of my mini test show that it takes an indefinite amount of time to parse the string using the coded regex simple docker image names parse quickly however the string in our environment spins indefinitly
| 1
|
111,673
| 4,479,763,225
|
IssuesEvent
|
2016-08-27 20:33:16
|
skuhl/RobotRun
|
https://api.github.com/repos/skuhl/RobotRun
|
closed
|
Added Intermediate Scenario for the Current Scenario
|
low priority task
|
For example, when I have created fixtures and parts and am trying to teach the program to pick and place, I should be able to press a button, go back to the original setup, and run the program again. As of now I have to edit the part again and again.
|
1.0
|
Added Intermediate Scenario for the Current Scenario - For example, when I have created fixtures and parts and am trying to teach the program to pick and place, I should be able to press a button, go back to the original setup, and run the program again. As of now I have to edit the part again and again.
|
non_process
|
added intermediate scenario for the current scenario for example when i have created fixtures and parts and i am trying to teach the program to pick and place i should be able to press a button go back to original setup and run the program again else as of now i have to edit the part again and again
| 0
|
19,099
| 25,148,018,694
|
IssuesEvent
|
2022-11-10 07:42:15
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Using the SAGA dissolve tool from within QGIS: specify summary statistics
|
Processing Feature Request
|
Author Name: **Mattias Lindman** (Mattias Lindman)
Original Redmine Issue: [14511](https://issues.qgis.org/issues/14511)
Redmine category:processing/saga
---
When using the SAGA dissolve tool from within the SAGA software it is possible to specify the summary statistics to be calculated on the dissolved polygons. When using the SAGA dissolve tool through the SAGA toolbox in QGIS it is however only possible to specify the attributes on which the polygons shall be dissolved.
By browsing the submitted issues on dissolve I sense that it is recommended to use the SAGA dissolve tool rather than the tool implemented in QGIS. As the SAGA dissolve tool provides summary statistics I consider that it would be convenient if this functionality was available from within QGIS.
|
1.0
|
Using the SAGA dissolve tool from within QGIS: specify summary statistics - Author Name: **Mattias Lindman** (Mattias Lindman)
Original Redmine Issue: [14511](https://issues.qgis.org/issues/14511)
Redmine category: processing/saga
---
When using the SAGA dissolve tool from within the SAGA software it is possible to specify the summary statistics to be calculated on the dissolved polygons. When using the SAGA dissolve tool through the SAGA toolbox in QGIS it is however only possible to specify the attributes on which the polygons shall be dissolved.
By browsing the submitted issues on dissolve I sense that it is recommended to use the SAGA dissolve tool rather than the tool implemented in QGIS. As the SAGA dissolve tool provides summary statistics I consider that it would be convenient if this functionality was available from within QGIS.
|
process
|
using the saga dissolve tool from within qgis specify summary statistics author name mattias lindman mattias lindman original redmine issue redmine category processing saga when using the saga dissolve tool from within the saga software it is possible to specify the summary statistics to be calculated on the dissolved polygons when using the saga dissolve tool through the saga toolbox in qgis it is however only possible to specify the attributes on which the polygons shall be dissolved by browsing the submitted issues on dissolve i sense that it is recommended to use the saga dissolve tool rather than the tool implemented in qgis as the saga dissolve tool provides summary statistics i consider that it would be convenient if this functionality was available from within qgis
| 1
|
22,638
| 31,886,567,765
|
IssuesEvent
|
2023-09-17 01:52:05
|
Significant-Gravitas/Auto-GPT
|
https://api.github.com/repos/Significant-Gravitas/Auto-GPT
|
closed
|
Determine chunk size based on model being used
|
enhancement function: process text Stale
|
See:
* #38
* #796
* #1841
* Settings:
* [`BROWSE_CHUNK_MAX_LENGTH`](https://github.com/Significant-Gravitas/Auto-GPT/blob/d163c564e5c33ff70f07608340f1d815a07b6752/.env.template#L8-L9)
* [`FAST_TOKEN_LIMIT` `SMART_TOKEN_LIMIT`](https://github.com/Significant-Gravitas/Auto-GPT/blob/d163c564e5c33ff70f07608340f1d815a07b6752/.env.template#L40-L44)
The token limit depends directly on the LLM being used, so these settings should be consolidated and calculated where possible, instead of letting the user set them by hand.
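As a rough illustration of the consolidation this issue asks for, a single lookup table keyed by model could replace the hand-set limits; the model names, limits, and prompt reserve below are assumptions for the sketch, not Auto-GPT's actual configuration:
```python
# Hypothetical token limits per model; real values would come from the LLM
# provider's documentation rather than user-set environment variables.
TOKEN_LIMITS = {
    "gpt-3.5-turbo": 4096,
    "gpt-4": 8192,
}

def max_chunk_length(model: str, reserved: int = 1024) -> int:
    """Derive the browse chunk size, leaving room for prompt and reply."""
    return max(TOKEN_LIMITS.get(model, 4096) - reserved, 512)

print(max_chunk_length("gpt-4"))  # 7168
```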
|
1.0
|
Determine chunk size based on model being used - See:
* #38
* #796
* #1841
* Settings:
* [`BROWSE_CHUNK_MAX_LENGTH`](https://github.com/Significant-Gravitas/Auto-GPT/blob/d163c564e5c33ff70f07608340f1d815a07b6752/.env.template#L8-L9)
* [`FAST_TOKEN_LIMIT` `SMART_TOKEN_LIMIT`](https://github.com/Significant-Gravitas/Auto-GPT/blob/d163c564e5c33ff70f07608340f1d815a07b6752/.env.template#L40-L44)
The token limit depends directly on the LLM being used, so these settings should be consolidated and calculated where possible, instead of letting the user set them by hand.
|
process
|
determine chunk size based on model being used see settings the token limit depends directly on the llm being used so these settings should be consolidated and calculated where possible instead of letting the user set them by hand
| 1
|
253,028
| 27,293,006,552
|
IssuesEvent
|
2023-02-23 17:58:03
|
elastic/integrations
|
https://api.github.com/repos/elastic/integrations
|
closed
|
[Github] | Default template execution in audit
|
bug Team:Security-External Integrations Integration:Github
|
In `github.audit` datastream, the cursor value is based on [last_event.created_at](https://github.com/elastic/integrations/blob/main/packages/github/data_stream/audit/agent/stream/httpjson.yml.hbs#L49) timestamp which is given in UNIX format. But the [formatDate](https://github.com/elastic/integrations/blob/main/packages/github/data_stream/audit/agent/stream/httpjson.yml.hbs#L25) is parsing it in this format: `"2006-01-02T15:04:05-07:00"`
This causes the default value to be populated on each poll. The log line `"message":"template execution: falling back to default value` indicates this, and all events going back to the `initial_interval` are polled again.
Fix would be to modify `formatDate` to parse cursor timestamp in UNIX format instead.
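A small sketch of the mismatch, with a hypothetical epoch value standing in for `last_event.created_at`; parsing it with an RFC3339-style layout fails, while treating it as a UNIX timestamp (the proposed fix) succeeds:
```python
from datetime import datetime, timezone

cursor = "1677170283"  # hypothetical last_event.created_at (UNIX epoch)

try:
    # What the current formatDate effectively attempts: an RFC3339 layout.
    parsed = datetime.strptime(cursor, "%Y-%m-%dT%H:%M:%S%z")
except ValueError:
    # The proposed fix: parse the cursor as a UNIX timestamp instead.
    parsed = datetime.fromtimestamp(int(cursor), tz=timezone.utc)

print(parsed.isoformat())
```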
|
True
|
[Github] | Default template execution in audit - In `github.audit` datastream, the cursor value is based on [last_event.created_at](https://github.com/elastic/integrations/blob/main/packages/github/data_stream/audit/agent/stream/httpjson.yml.hbs#L49) timestamp which is given in UNIX format. But the [formatDate](https://github.com/elastic/integrations/blob/main/packages/github/data_stream/audit/agent/stream/httpjson.yml.hbs#L25) is parsing it in this format: `"2006-01-02T15:04:05-07:00"`
This causes the default value to be populated on each poll. The log line `"message":"template execution: falling back to default value` indicates this, and all events going back to the `initial_interval` are polled again.
Fix would be to modify `formatDate` to parse cursor timestamp in UNIX format instead.
|
non_process
|
default template execution in audit in github audit datastream the cursor value is based on timestamp which is given in unix format but the is parsing it in this format this is causing the default value being populated on each poll log message template execution falling back to default value indicates this which polls all events going back to the initial interval again fix would be to modify formatdate to parse cursor timestamp in unix format instead
| 0
|
298,289
| 9,198,327,754
|
IssuesEvent
|
2019-03-07 12:19:51
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
opensourcerover.jpl.nasa.gov - site is not usable
|
browser-firefox-mobile priority-important
|
<!-- @browser: Firefox Mobile 66.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:66.0) Gecko/66.0 Firefox/66.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://opensourcerover.jpl.nasa.gov/#!/home
**Browser / Version**: Firefox Mobile 66.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Rover graphics don't work
**Steps to Reproduce**:
The accelerometer-enabled visualization of the Mars rover doesn't appear, and the page shows a black screen instead. The 2D UI works fine, but does nothing. This was in a private browsing tab, and behaves the same whether or not tracking protection is enabled. This website has been tested to work with Chrome 72.0.3626.105 on Android.
[](https://webcompat.com/uploads/2019/2/fe6e9465-d77d-4a12-ab85-3725b992987e.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190218131312</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "Loading failed for the <script> with source https://www.googletagmanager.com/gtag/js?id=UA-117250321-1." {file: "https://opensourcerover.jpl.nasa.gov/#!/home" line: 50}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://dap.digitalgov.gov/Universal-Federated-Analytics-Min.js?agency=NASA&subagency=OPENSOURCEROVER." {file: "https://opensourcerover.jpl.nasa.gov/#!/home" line: 57}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://dap.digitalgov.gov/Universal-Federated-Analytics-Min.js?agency=NASA&subagency=OPENSOURCEROVER&dclink=true&sp=search,s." {file: "https://opensourcerover.jpl.nasa.gov/#!/home" line: 90}]', u'[JavaScript Warning: "onmozfullscreenchange is deprecated." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/app.js?1532571312532" line: 11}]', u'[JavaScript Warning: "onmozfullscreenerror is deprecated." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/app.js?1532571312532" line: 11}]', u'[console.log(THREE.WebGLRenderer, AT::90) https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js:1:459122]', u'[JavaScript Warning: "Use of the motion sensor is deprecated." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/app.js?1532571312532" line: 11}]', u'[JavaScript Warning: "Use of the orientation sensor is deprecated." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/app.js?1532571312532" line: 11}]', u'[JavaScript Warning: "Error: WebGL warning: linkProgram: Must have an compiled fragment shader attached." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: drawElements: The current program is not linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." 
{file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: linkProgram: Must have an compiled fragment shader attached." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." 
{file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: clear: Framebuffer not complete. (status: 0x8cd6) Bad status according to the driver: 0x8cd6" {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: clear: Framebuffer must be complete." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL: No further warnings will be reported for this WebGL context. (already reported 32 warnings)" {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Will-change memory consumption is too high. Budget limit is the document surface area multiplied by 3 (215280 px). Occurrences of will-change over the budget will be ignored." {file: "https://opensourcerover.jpl.nasa.gov/#!/home" line: 0}]']
</pre>
</details>
Reported by @ebkalderon
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
opensourcerover.jpl.nasa.gov - site is not usable - <!-- @browser: Firefox Mobile 66.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:66.0) Gecko/66.0 Firefox/66.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://opensourcerover.jpl.nasa.gov/#!/home
**Browser / Version**: Firefox Mobile 66.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Rover graphics don't work
**Steps to Reproduce**:
The accelerometer-enabled visualization of the Mars rover doesn't appear, and the page shows a black screen instead. The 2D UI works fine, but does nothing. This was in a private browsing tab, and behaves the same whether or not tracking protection is enabled. This website has been tested to work with Chrome 72.0.3626.105 on Android.
[](https://webcompat.com/uploads/2019/2/fe6e9465-d77d-4a12-ab85-3725b992987e.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190218131312</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Warning: "Loading failed for the <script> with source https://www.googletagmanager.com/gtag/js?id=UA-117250321-1." {file: "https://opensourcerover.jpl.nasa.gov/#!/home" line: 50}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://dap.digitalgov.gov/Universal-Federated-Analytics-Min.js?agency=NASA&subagency=OPENSOURCEROVER." {file: "https://opensourcerover.jpl.nasa.gov/#!/home" line: 57}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://dap.digitalgov.gov/Universal-Federated-Analytics-Min.js?agency=NASA&subagency=OPENSOURCEROVER&dclink=true&sp=search,s." {file: "https://opensourcerover.jpl.nasa.gov/#!/home" line: 90}]', u'[JavaScript Warning: "onmozfullscreenchange is deprecated." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/app.js?1532571312532" line: 11}]', u'[JavaScript Warning: "onmozfullscreenerror is deprecated." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/app.js?1532571312532" line: 11}]', u'[console.log(THREE.WebGLRenderer, AT::90) https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js:1:459122]', u'[JavaScript Warning: "Use of the motion sensor is deprecated." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/app.js?1532571312532" line: 11}]', u'[JavaScript Warning: "Use of the orientation sensor is deprecated." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/app.js?1532571312532" line: 11}]', u'[JavaScript Warning: "Error: WebGL warning: linkProgram: Must have an compiled fragment shader attached." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: drawElements: The current program is not linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." 
{file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: linkProgram: Must have an compiled fragment shader attached." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: useProgram: Program has not been successfully linked." 
{file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: clear: Framebuffer not complete. (status: 0x8cd6) Bad status according to the driver: 0x8cd6" {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL warning: clear: Framebuffer must be complete." {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Error: WebGL: No further warnings will be reported for this WebGL context. (already reported 32 warnings)" {file: "https://opensourcerover.jpl.nasa.gov/assets/js/lib/three.min.js" line: 1}]', u'[JavaScript Warning: "Will-change memory consumption is too high. Budget limit is the document surface area multiplied by 3 (215280 px). Occurrences of will-change over the budget will be ignored." {file: "https://opensourcerover.jpl.nasa.gov/#!/home" line: 0}]']
</pre>
</details>
Reported by @ebkalderon
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
opensourcerover jpl nasa gov site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description rover graphics don t work steps to reproduce the accelerometer enabled visualization of the mars rover doesn t appear and the page shows a black screen instead the ui works fine but does nothing this was in a private browsing tab and behaves the same whenever or not tracking protection is enabled this website has been tested to work with chrome on android browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel beta console messages u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u u reported by ebkalderon from with ❤️
| 0
|
11,048
| 13,877,268,426
|
IssuesEvent
|
2020-10-17 03:06:35
|
timberio/vector
|
https://api.github.com/repos/timberio/vector
|
closed
|
New URL functions for the remap syntax
|
domain: processing domain: remap transform: remap type: feature
|
Similar to #3761, I wanted to open an issue to represent a set of URL-related functions. This should cover:
- [ ] Parsing URLs into the various parts (scheme, host, port, path, etc)
- [ ] Easily extracting a specific part.
- [ ] Easily removing a specific part.
I'm not necessarily looking for individual functions for each of these, but more of an agreement around how a user would accomplish the above. This might be a set of functions and syntax changes.
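Not Vector's remap syntax, but a sketch of the three operations in Python using `urllib.parse` as a stand-in, to pin down the semantics being asked for:
```python
from urllib.parse import urlsplit, urlunsplit

url = "https://user@example.com:8443/logs/app?level=error#frag"
parts = urlsplit(url)

# 1. Parse the URL into its various parts.
print(parts.scheme, parts.hostname, parts.port, parts.path)

# 2. Extract a specific part.
print(parts.query)  # level=error

# 3. Remove a specific part (here, the query string).
print(urlunsplit(parts._replace(query="")))
```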
|
1.0
|
New URL functions for the remap syntax - Similar to #3761, I wanted to open an issue to represent a set of URL related functions. This should cover:
- [ ] Parsing URLs into the various parts (scheme, host, port, path, etc)
- [ ] Easily extracting a specific part.
- [ ] Easily removing a specific part.
I'm not necessarily looking for individual functions for each of these, but more of an agreement around how a user would accomplish the above. This might be a set of functions and syntax changes.
|
process
|
new url functions for the remap syntax similar to i wanted to open an issue to represent a set of url related functions this should cover parsing urls into the various parts scheme host port path etc easily extracting a specific part easily removing a specific part i m not necessarily looking for individual functions for each of these but more of an agreement around how a user would accomplish the above this might be a set of functions and syntax changes
| 1
|
9,476
| 12,468,312,270
|
IssuesEvent
|
2020-05-28 18:37:47
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
It is no longer possible to use Symfony Process as an extendable base class
|
Bug Process Status: Needs Review
|
**Symfony version(s) affected**: 5+
**Description**
The project [consolidation/site-process](https://github.com/consolidation/site-process) provides an API that returns Symfony Process objects for executing commands on remote environments. The type of remote environment is abstracted with transports; the ssh transport may be used to run commands on remote servers using ssh, and the docker transport may be used to run commands in a docker container. The client is generally unaware of transports, which are part of the site object that is passed to the site process factory to produce a Symfony Process object.
The implementation of the SiteProcess class was an extension of the Symfony Process class. This allowed clients the ability to use a familiar interface, perhaps even repurposing existing code that takes a Process object as a parameter. At the same time, being a subclass allowed SiteProcess to add convenience methods related to the remote nature of the new kind of process object.
In Symfony 5, the setCommandLine method was removed, making it impossible for site-process to wrap the commandline with the transport.
**How to reproduce**
Failing tests are at https://github.com/consolidation/site-process/pull/50.
**Possible Solution**
1. Direct support: The setCommandLine method could be restored to the Process class, but as a protected method rather than a public method. This would still allow subclasses to wrap commands with a transport.
1. Eat your own dogfood: The Process::start() method et al. could call the getCommandLine() method rather than accessing the private field directly. This also would allow subclasses to control how the commandline is generated (see the sketch below).
1. Will not fix: site-process could become an object that produces Symfony Process objects (a Process factory, rather than is-a Process). This is undesirable, because it breaks the site-process API, and the resulting API is not as simple and easy to use.
1. Will not fix: site-process could copy all of the Symfony Process code into its own class, or perhaps use the has-a Process model rather than is-a. This is undesirable, because it would require duplication of code. The is-a relationship could perhaps still be maintained, though, if SiteProcess still extended Process without calling through to the Symfony Process implementation.
**Additional context**
If one of the proposed solutions involving changing symfony/process is acceptable, I'd be happy to make a pull request.
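A language-neutral sketch (rendered in Python for brevity) of the "eat your own dogfood" option above: if start() reads the command through an overridable accessor, a subclass like SiteProcess can wrap the commandline with a transport without needing a public setter. All names here are illustrative, not Symfony's actual API.
```python
class Process:
    """Stand-in for Symfony's Process; names are illustrative only."""

    def __init__(self, commandline: str) -> None:
        self._commandline = commandline

    def get_commandline(self) -> str:
        return self._commandline

    def start(self) -> None:
        # Reading through the accessor lets subclasses alter the command.
        print("running:", self.get_commandline())


class SiteProcess(Process):
    def get_commandline(self) -> str:
        # Wrap the underlying command with a hypothetical ssh transport.
        return "ssh remote-host " + super().get_commandline()


SiteProcess("ls -lh /usr").start()  # running: ssh remote-host ls -lh /usr
```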
|
1.0
|
It is no longer possible to use Symfony Process as an extendable base class - **Symfony version(s) affected**: 5+
**Description**
The project [consolidation/site-process](https://github.com/consolidation/site-process) provides an API that returns Symfony Process objects for executing commands on remote environments. The type of remote environment is abstracted with transports; the ssh transport may be used to run commands on remote servers using ssh, and the docker transport may be used to run commands in a docker container. The client is generally unaware of transports, which are part of the site object that is passed to the site process factory to produce a Symfony Process object.
The implementation of the SiteProcess class was an extension of the Symfony Process class. This allowed clients the ability to use a familiar interface, perhaps even repurposing existing code that takes a Process object as a parameter. At the same time, being a subclass allowed SiteProcess to add convenience methods related to the remote nature of the new kind of process object.
In Symfony 5, the setCommandLine method was removed, making it impossible for site-process to wrap the commandline with the transport.
**How to reproduce**
Failing tests are at https://github.com/consolidation/site-process/pull/50.
**Possible Solution**
1. Direct support: The setCommandLine method could be restored to the Process class, but as a protected method rather than a public method. This would still allow subclasses to wrap commands with a transport.
1. Eat your own dogfood: The Process::start() method et al. could call the getCommandLine() method rather than accessing the private field directly. This also would allow subclasses to control how the commandline is generated.
1. Will not fix: site-process could become an object that produces Symfony Process objects (a Process factory, rather than is-a Process). This is undesirable, because it breaks the site-process API, and the resulting API is not as simple and easy to use.
1. Will not fix: site-process could copy all of the Symfony Process code into its own class, or perhaps use the has-a Process model rather than is-a. This is undesirable, because it would require duplication of code. The is-a relationship could perhaps still be maintained, though, if SiteProcess still extended Process without calling through to the Symfony Process implementation.
**Additional context**
If one of the proposed solutions involving changing symfony/process is acceptable, I'd be happy to make a pull request.
|
process
|
it is no longer possible to use symfony process as an extendable base class symfony version s affected description the project provides an api that returns symfony process objects for executing commands on remote environments the type of remote environment is abstracted with transports the ssh transport may be used to run commands on remote servers using ssh and the docker transport may be used to run commands in a docker container the client is generally unaware of transports which are part of the site object that is passed to the site process factory to produce a symfony process object the implementation of the siteprocess class was an extension of the symfony process class this allowed clients the ability to use a familiar interface perhaps even repurposing existing code that takes a process object as a parameter at the same time being a subclass allowed siteprocess to add convenience methods related to the remote nature of the new kind of process object in symfony the setcommandline method was removed making it impossible for site process to wrap the commandline with the transport how to reproduce failing tests are at possible solution direct support the setcommandline method could be restored to the process class but as a protected method rather than a public method this would still allow subclasses to wrap commands with a transport eat your own dogfood the process start method et al could call the getcommandline method rather than accessing the private field directly this also would allow subclasses to control how the commandline is generated will not fix site process could become an object that produces symfony process objects a process factory rather than isa process this is undesirable because it breaks the site process api and the resulting api is not as simple and easy to use will not fix site process could copy all of the symfony process code into its own class or perhaps use the hasa process model rather than isa this is undesirable because it would require duplication of code the isa relationship could perhaps still be maintained though if siteprocess still extended process without calling through to the symfony process implementation additional context if one of the proposed solutions involving changing symfony process is acceptable i d be happy to make a pull request
| 1
|
18,955
| 24,914,081,015
|
IssuesEvent
|
2022-10-30 07:17:30
|
osamhack2022-v2/APP_FreshPlus_TakeCareMyRefrigerator
|
https://api.github.com/repos/osamhack2022-v2/APP_FreshPlus_TakeCareMyRefrigerator
|
closed
|
Recognizing receipts in Flutter
|
front imgae process
|
There seem to be two ways we could develop the receipt recognition feature:
1. Use the paid API that Naver already provides
2. Splice the necessary information out of the OCR capability provided by MLKit
Both approaches seem fine.
I am attaching the related links; please refer to them during development.
-MLKit text recognition
https://developers.google.com/ml-kit/vision/text-recognition
-Naver paid receipt recognition API
https://clova.ai/ocr?lang=ko
|
1.0
|
Recognizing receipts in Flutter - There seem to be two ways we could develop the receipt recognition feature:
1. Use the paid API that Naver already provides
2. Splice the necessary information out of the OCR capability provided by MLKit
Both approaches seem fine.
I am attaching the related links; please refer to them during development.
-MLKit text recognition
https://developers.google.com/ml-kit/vision/text-recognition
-Naver paid receipt recognition API
https://clova.ai/ocr?lang=ko
|
process
|
recognizing receipts in flutter there seem to be two ways we could develop the receipt recognition feature use the paid api that naver already provides splice the necessary information out of the ocr capability provided by mlkit both approaches seem fine i am attaching the related links please refer to them during development mlkit text recognition naver paid receipt recognition api
| 1
|
7,531
| 10,606,545,737
|
IssuesEvent
|
2019-10-10 23:49:24
|
Stackdriver/collectd
|
https://api.github.com/repos/Stackdriver/collectd
|
closed
|
Please submit mongodb collectd plugin changes upstream
|
process
|
Stackdriver team,
The mongodb collectd plugin is currently not being shipped in EPEL, and I believe that's related to the fact that the "official" module hasn't seen an update since 2012 (according to [1]). According to [2], a fixed and confirmed-working plugin (based on [1]) has been developed, but the contributions were never submitted back upstream to be consumed by the major Linux distributions out there. Would it be possible for this to happen, so that collectd can finally have an officially supported mongodb plugin?
Thanks!
[1] https://collectd.org/wiki/index.php/Plugin:MongoDB
[2] https://cloud.google.com/monitoring/agent/plugins/mongodb
|
1.0
|
Please submit mongodb collectd plugin changes upstream - Stackdriver team,
The mongodb collectd plugin is currently not being shipped in EPEL, and I believe that's related to the fact that the "official" module hasn't seen an update since 2012 (according to [1]). According to [2], a fixed and confirmed-working plugin (based on [1]) has been developed, but the contributions were never submitted back upstream to be consumed by the major Linux distributions out there. Would it be possible for this to happen, so that collectd can finally have an officially supported mongodb plugin?
Thanks!
[1] https://collectd.org/wiki/index.php/Plugin:MongoDB
[2] https://cloud.google.com/monitoring/agent/plugins/mongodb
|
process
|
please submit mongodb collectd plugin changes upstream stackdriver team the mongodb collectd plugin is currently not being shipped in epel and i believe that s related to the fact the official module hasn t seen an update since according to according to a fixed and confirmed working plugin based on has been developed but the contributions were never submitted back upstream to be then consumed by the major linux distributions out there would it be possible for this to happen in order for collectd to finally have an officially supported mongodb plugin thanks
| 1
|
11,460
| 14,284,891,650
|
IssuesEvent
|
2020-11-23 13:10:50
|
prometheus-community/windows_exporter
|
https://api.github.com/repos/prometheus-community/windows_exporter
|
closed
|
Windows App Process
|
collector/process question
|
Hello,
Quick question: is there a way to track whether an application is running via the process collector?
If a whitelisted process has stopped, will it no longer be seen in monitoring, will its value be 0, or is there some other way to monitor app processes?
Best Regards
|
1.0
|
Windows App Process - Hello,
Quick question: is there a way to track whether an application is running via the process collector?
If a whitelisted process has stopped, will it no longer be seen in monitoring, will its value be 0, or is there some other way to monitor app processes?
Best Regards
|
process
|
windows app process hello quick question is there a way to follow if the application is running by process collector if the whitelisted process has stopped will it not be seen anymore in monitoring or will its value be or is there some other way to monitor app processes best regards
| 1
|
145,167
| 5,559,843,721
|
IssuesEvent
|
2017-03-24 17:54:55
|
YaleSTC/vesta
|
https://api.github.com/repos/YaleSTC/vesta
|
closed
|
Patch Nokogiri
|
2 - in review complexity: 1 priority: 5 type: bug
|
ruby-advisory picked up a vulnerability (thanks Travis) - upgrading Nokogiri to satisfy it (separately from #325)
|
1.0
|
Patch Nokogiri - ruby-advisory picked up a vulnerability (thanks Travis) - upgrading Nokogiri to satisfy it (separately from #325)
|
non_process
|
patch nokogiri ruby advisory picked up a vulnerability thanks travis upgrading nokogiri to satisfy it separately from
| 0
|
14,498
| 17,604,292,636
|
IssuesEvent
|
2021-08-17 15:13:32
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
Port drop fields algorithm to c++, add native "retain fields" algorithm (Request in QGIS)
|
Processing Alg 3.18
|
### Request for documentation
From pull request QGIS/qgis#40225
Author: @nyalldawson
QGIS version: 3.18
**Port drop fields algorithm to c++, add native "retain fields" algorithm**
### PR Description:
Supersedes https://github.com/qgis/QGIS/pull/39114
Adds a new algorithm to retain only selected fields. Allows users to select a list of fields to keep, and all other fields
will be dropped from the layer. Helps with making flexible models where input datasets may have a range of different fields and you need to drop all but a certain subset of these
### Commits tagged with [need-docs] or [FEATURE]
"[feature][processing] Add new algorithm to retain only selected fields\n\nAllows users to select a list of fields to keep, and all other fields\nwill be dropped from the layer. Helps with making flexible models where\ninput datasets may have a range of different fields and you need to drop\nall but a certain subset of these"
|
1.0
|
Port drop fields algorithm to c++, add native "retain fields" algorithm (Request in QGIS) - ### Request for documentation
From pull request QGIS/qgis#40225
Author: @nyalldawson
QGIS version: 3.18
**Port drop fields algorithm to c++, add native "retain fields" algorithm**
### PR Description:
Supersedes https://github.com/qgis/QGIS/pull/39114
Adds a new algorithm to retain only selected fields. Allows users to select a list of fields to keep, and all other fields
will be dropped from the layer. Helps with making flexible models where input datasets may have a range of different fields and you need to drop all but a certain subset of these
### Commits tagged with [need-docs] or [FEATURE]
"[feature][processing] Add new algorithm to retain only selected fields\n\nAllows users to select a list of fields to keep, and all other fields\nwill be dropped from the layer. Helps with making flexible models where\ninput datasets may have a range of different fields and you need to drop\nall but a certain subset of these"
|
process
|
port drop fields algorithm to c add native retain fields algorithm request in qgis request for documentation from pull request qgis qgis author nyalldawson qgis version port drop fields algorithm to c add native retain fields algorithm pr description supersedes adds a new algorithm to retain only selected fields allows users to select a list of fields to keep and all other fields will be dropped from the layer helps with making flexible models where input datasets may have a range of different fields and you need to drop all but a certain subset of these commits tagged with or add new algorithm to retain only selected fields n nallows users to select a list of fields to keep and all other fields nwill be dropped from the layer helps with making flexible models where ninput datasets may have a range of different fields and you need to drop nall but a certain subset of these
| 1
|
1,153
| 3,637,564,612
|
IssuesEvent
|
2016-02-12 11:32:23
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Spawning child processes
|
child_process question
|
First example from https://nodejs.org/dist/latest-v5.x/docs/api/child_process.html is as follows:
```
const spawn = require('child_process').spawn;
const ls = spawn('ls', ['-lh', '/usr']);
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.log(`stderr: ${data}`);
});
ls.on('close', (code) => {
console.log(`child process exited with code ${code}`)
});
```
Isn't this wrong, since the process (in this case ls) may exit before one starts reacting to the events, so one would basically miss them? Is there a method to attach to events before spawning the process?
|
1.0
|
Spawning child processes - First example from https://nodejs.org/dist/latest-v5.x/docs/api/child_process.html is as follows:
```
const spawn = require('child_process').spawn;
const ls = spawn('ls', ['-lh', '/usr']);
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.log(`stderr: ${data}`);
});
ls.on('close', (code) => {
console.log(`child process exited with code ${code}`)
});
```
Isn't this wrong, since the process (in this case ls) may exit before one starts reacting to the events, so one would basically miss them? Is there a method to attach to events before spawning the process?
|
process
|
spawning child processes first example from is as follows const spawn require child process spawn const ls spawn ls ls stdout on data data console log stdout data ls stderr on data data console log stderr data ls on close code console log child process exited with code code isn t it wrong as in fact the process in this case ls may end up before one starts reacting to events so basically one will miss them is there a method to attach to events before spawning the process
| 1
|
12,509
| 14,962,188,219
|
IssuesEvent
|
2021-01-27 08:55:54
|
Altinn/altinn-studio
|
https://api.github.com/repos/Altinn/altinn-studio
|
opened
|
Analysis: Functional signing support in Altinn 3
|
area/process kind/user-story
|
## Description
Based on #5145 and feedback from agencies it is identified that Altinn 3 needs support for functional signing
### Identified Requirements
> Document the need briefly for the story. Start with the "who, what, why" eg "As a (type of user), I want (some goal), so that (reason)." The title of the user story should reflect this need in a shorter way.
## Screenshots
> Screenshots or links to Figma (make sure your sketch is public)
## Considerations
> Describe input (beyond tasks) on how the user story should be solved can be put here.
## Acceptance criteria
> Describe criteria here (i.e. What is allowed/not allowed (negative testing), validations, error messages and warnings etc.)
## Specification tasks
- [ ] Development tasks are defined
- [ ] Test design / decide test need
## Development tasks
> Add tasks here
## Definition of done
Verify that this issue meets [DoD](https://confluence.brreg.no/display/T3KP/Definition+of+Done#DefinitionofDone-DoD%E2%80%93utvikling) (Only for project members) before closing.
- [ ] Documentation is updated (if relevant)
- [ ] Technical documentation (docs.altinn.studio)
- [ ] User documentation (altinn.github.io/docs)
- [ ] QA
- [ ] Manual test is complete (if relevant)
- [ ] Automated test is implemented (if relevant)
- [ ] All tasks in this user story are closed (i.e. remaining tasks are moved to other user stories or marked obsolete)
|
1.0
|
Analysis: Functional signing support in Altinn 3 - ## Description
Based on #5145 and feedback from agencies it is identified that Altinn 3 needs support for functional signing
### Identified Requirements
> Document the need briefly for the story. Start with the "who, what, why" eg "As a (type of user), I want (some goal), so that (reason)." The title of the user story should reflect this need in a shorter way.
## Screenshots
> Screenshots or links to Figma (make sure your sketch is public)
## Considerations
> Describe input (beyond tasks) on how the user story should be solved can be put here.
## Acceptance criteria
> Describe criteria here (i.e. What is allowed/not allowed (negative testing), validations, error messages and warnings etc.)
## Specification tasks
- [ ] Development tasks are defined
- [ ] Test design / decide test need
## Development tasks
> Add tasks here
## Definition of done
Verify that this issue meets [DoD](https://confluence.brreg.no/display/T3KP/Definition+of+Done#DefinitionofDone-DoD%E2%80%93utvikling) (Only for project members) before closing.
- [ ] Documentation is updated (if relevant)
- [ ] Technical documentation (docs.altinn.studio)
- [ ] User documentation (altinn.github.io/docs)
- [ ] QA
- [ ] Manual test is complete (if relevant)
- [ ] Automated test is implemented (if relevant)
- [ ] All tasks in this user story are closed (i.e. remaining tasks are moved to other user stories or marked obsolete)
|
process
|
analysis functional signing support in altinn description based on and feedback from agencies it is identified that altinn needs support for functional signing identified requirements document the need briefly for the story start with the who what why eg as a type of user i want some goal so that reason the title of the user story should reflect this need in a shorter way screenshots screenshots or links to figma make sure your sketch is public considerations describe input beyond tasks on how the user story should be solved can be put here acceptance criteria describe criteria here i e what is allowed not allowed negative testing validations error messages and warnings etc specification tasks development tasks are defined test design decide test need development tasks add tasks here definition of done verify that this issue meets only for project members before closing documentation is updated if relevant technical documentation docs altinn studio user documentation altinn github io docs qa manual test is complete if relevant automated test is implemented if relevant all tasks in this user story are closed i e remaining tasks are moved to other user stories or marked obsolete
| 1
|
3,827
| 6,802,324,067
|
IssuesEvent
|
2017-11-02 19:47:38
|
WikiWatershed/model-my-watershed
|
https://api.github.com/repos/WikiWatershed/model-my-watershed
|
closed
|
Geoprocessing API: Setup throttling
|
Geoprocessing API WPF
|
Add the [DRF built-in throttling mechanism](http://www.django-rest-framework.org/api-guide/throttling/) to limit each user's API requests to `~2 req/min`. (We'll likely be tweaking this value.) Throttling should affect only API users, so we'll have to add a custom throttling class that skips throttling if the user is the client app.
The DRF stores the number of requests already made in the Django cache. We should already have this set up, but additional configuration may be necessary.
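A minimal sketch of such a class, assuming DRF's `UserRateThrottle` as the base; the class name and the client app's username are placeholders:
```python
from rest_framework.throttling import UserRateThrottle

class GeoprocessingAPIThrottle(UserRateThrottle):
    rate = "2/min"  # the value we expect to tweak

    def allow_request(self, request, view):
        # Skip throttling when the request comes from the client app itself;
        # "mmw-client-app" stands in for however that user is actually named.
        user = getattr(request, "user", None)
        if user is not None and user.username == "mmw-client-app":
            return True
        return super().allow_request(request, view)
```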
|
1.0
|
Geoprocessing API: Setup throttling - Add the [DRF built-in throttling mechanism](http://www.django-rest-framework.org/api-guide/throttling/) to limit each user's API requests to `~2 req/min`. (We'll likely be tweaking this value.) Throttling should affect only API users, so we'll have to add a custom throttling class that skips throttling if the user is the client app.
The DRF stores the number of requests already made in the Django cache. We should already have this set up, but additional configuration may be necessary.
|
process
|
geoprocessing api setup throttling add the to limit each users api requests to req min we ll likely be tweaking this value throttling should affect only api users so we ll have to add a custom throttling class that will skip throttling if the user is the client app the drf stores the number of requests already made in the django cache we should already have this set up but additional configuration may be necessary
| 1
|
15,985
| 20,188,188,545
|
IssuesEvent
|
2022-02-11 01:16:28
|
savitamittalmsft/WAS-SEC-TEST
|
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
|
opened
|
Establish lifecycle management policy for critical accounts
|
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Security & Compliance Separation of duties
|
<a href="https://docs.microsoft.com/azure/architecture/framework/security/design-identity-authorization#authorization-for-critical-accounts">Establish lifecycle management policy for critical accounts</a>
<p><b>Why Consider This?</b></p>
Critical accounts are those which can produce a business-critical outcome, whether cloud administrators or workload-specific privileged users. Compromise or misuse of such an account can have a detrimental-to-material effect on the business and its information systems, so it's important to identify those accounts and adopt processes including close monitoring and lifecycle management, including retirement.
<p><b>Context</b></p>
<p><span>Securing privileged access is a critical first step to establishing security assurances for business assets in a modern organization. The security of most or all business assets in an IT organization depends on the integrity of the privileged accounts used to administer, manage, and develop. Cyberattackers often target these accounts and other elements of privileged access to gain access to data and systems using credential theft attacks like Pass-the-Hash and Pass-the-Ticket.</span></p><p><span>Protecting privileged access against determined adversaries requires you to take a complete and thoughtful approach to isolate these systems from risks.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Ensure there's a process for disabling or deleting administrative accounts that are unused.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/critical-impact-accounts#establish-lifecycle-management-for-critical-impact-accounts" target="_blank"><span>Establish lifecycle management for critical impact accounts</span></a><span /></p>
|
1.0
|
Establish lifecycle management policy for critical accounts - <a href="https://docs.microsoft.com/azure/architecture/framework/security/design-identity-authorization#authorization-for-critical-accounts">Establish lifecycle management policy for critical accounts</a>
<p><b>Why Consider This?</b></p>
Critical accounts are those which can produce a business-critical outcome, whether cloud administrators or workload-specific privileged users. Compromise or misuse of such an account can have a detrimental-to-material effect on the business and its information systems, so it's important to identify those accounts and adopt processes including close monitoring and lifecycle management, including retirement.
<p><b>Context</b></p>
<p><span>Securing privileged access is a critical first step to establishing security assurances for business assets in a modern organization. The security of most or all business assets in an IT organization depends on the integrity of the privileged accounts used to administer, manage, and develop. Cyberattackers often target these accounts and other elements of privileged access to gain access to data and systems using credential theft attacks like Pass-the-Hash and Pass-the-Ticket.</span></p><p><span>Protecting privileged access against determined adversaries requires you to take a complete and thoughtful approach to isolate these systems from risks.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Ensure there's a process for disabling or deleting administrative accounts that are unused.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/critical-impact-accounts#establish-lifecycle-management-for-critical-impact-accounts" target="_blank"><span>Establish lifecycle management for critical impact accounts</span></a><span /></p>
|
process
|
establish lifecycle management policy for critical accounts why consider this critical accounts are those which can produce a business critical outcome whether cloud administrators or workload specific privileged users compromise or misuse of such an account can have a detrimental to material effect on the business and its information systems so it s important to identify those accounts and adopt processes including close monitoring and lifecycle management including retirement context securing privileged access is a critical first step to establishing security assurances for business assets in a modern organization the security of most or all business assets in an it organization depends on the integrity of the privileged accounts used to administer manage and develop cyberattackers often target these accounts and other elements of privileged access to gain access to data and systems using credential theft attacks like pass the hash and pass the ticket protecting privileged access against determined adversaries requires you to take a complete and thoughtful approach to isolate these systems from risks suggested actions ensure there s a process for disabling or deleting administrative accounts that are unused learn more establish lifecycle management for critical impact accounts
| 1
|
25,784
| 12,741,271,576
|
IssuesEvent
|
2020-06-26 05:34:01
|
Phyronnaz/VoxelPlugin
|
https://api.github.com/repos/Phyronnaz/VoxelPlugin
|
opened
|
Replace stock per mesh-section occlusion queries with per-chunk culling
|
performance
|
For every draw-call VP reports, there's currently a second draw-call being caused by occlusion queries. They're unnecessarily detailed queries (per mesh section) and they suck up massive amounts of performance where they don't need to. Doing these queries per chunk instead should be detailed enough while being massively cheaper. This can potentially save thousands of draw-calls on a complex terrain.
|
True
|
Replace stock per mesh-section occlusion queries with per-chunk culling - For every draw-call VP reports, there's currently a second draw-call being caused by occlusion queries. They're unnecessarily detailed queries (per mesh section) and they suck up massive amounts of performance where they don't need to. Doing these queries per chunk instead should be detailed enough while being massively cheaper. This can potentially save thousands of draw-calls on a complex terrain.
|
non_process
|
replace stock per mesh section occlusion queries with per chunk culling for every draw call vp reports there s currently a second draw call being caused by occlusion queries they re unnecessarily detailed queries per mesh section and they suck up massive amounts of performance where they don t need to doing these queries per chunk instead should be detailed enough while being massively cheaper this can potentially save thousands of draw calls on a complex terrain
| 0
|
11,328
| 14,143,440,381
|
IssuesEvent
|
2020-11-10 15:16:37
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
New CloudTrail Parser is Missing p_ Fields
|
bug p0 team:data processing
|
### Describe the bug
The `p_` fields are missing from the Glue table.
### Steps to reproduce
Onboard a CloudTrail source, look at the Glue table.
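A quick programmatic version of the reproduction step, sketched with boto3; the database/table names and the exact set of expected `p_` fields are assumptions to adjust per deployment:
```python
import boto3

glue = boto3.client("glue")
# Assumed names; substitute the deployment's Glue database and table.
tbl = glue.get_table(DatabaseName="panther_logs", Name="aws_cloudtrail")
cols = {c["Name"] for c in tbl["Table"]["StorageDescriptor"]["Columns"]}

expected = {"p_event_time", "p_parse_time", "p_log_type", "p_row_id"}
print("missing p_ fields:", sorted(expected - cols))  # non-empty output reproduces the bug
```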
|
1.0
|
New CloudTrail Parser is Missing p_ Fields - ### Describe the bug
The `p_` fields are missing from the Glue table.
### Steps to reproduce
Onboard a CloudTrail source, look at the Glue table.
|
process
|
new cloudtrail parser is missing p fields describe the bug the p fields are missing from the glue table steps to reproduce onboard a cloudtrail source look at the glue table
| 1
|
74,719
| 7,439,174,380
|
IssuesEvent
|
2018-03-27 04:59:56
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Test failure: System.IO.Tests.Perf_Directory/RecursiveCreateDirectory(depth: 100, times: 10)
|
area-System.IO test-run-desktop
|
@anipik please disable RecursiveCreateDirectory perf test for .NET Framework, as it hits path limit.
The test `System.IO.Tests.Perf_Directory/RecursiveCreateDirectory(depth: 100, times: 10)` has failed.
System.IO.PathTooLongException : The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
Stack Trace:
at System.IO.PathHelper.GetFullPathName()
at System.IO.Path.LegacyNormalizePath(String path, Boolean fullCheck, Int32 maxPathLength, Boolean expandShortPaths)
at System.IO.Path.GetFullPathInternal(String path)
at System.IO.Directory.InternalCreateDirectoryHelper(String path, Boolean checkHost)
at System.IO.Tests.Perf_Directory.RecursiveCreateDirectory(Int32 depth, Int32 times)
Build : Master - 20180323.06 (Full Framework Tests)
Failing configurations:
- Windows.10.Amd64-x64
- Release
- Windows.10.Amd64-x86
- Release
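The failure is straightforward arithmetic on the legacy 260-character MAX_PATH: at depth 100, even single-character directory names nearly exhaust the budget before realistic names are counted. A quick sketch (the base path and naming scheme are assumptions):
```python
import os

base = r"C:\Users\runner\AppData\Local\Temp"  # assumption: typical temp root
path = os.path.join(base, *["a"] * 100)       # depth-100 nesting, 1-char names
print(len(path))  # 234; realistic directory names push well past 260
```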
|
1.0
|
Test failure: System.IO.Tests.Perf_Directory/RecursiveCreateDirectory(depth: 100, times: 10) - @anipik please disable RecursiveCreateDirectory perf test for .NET Framework, as it hits path limit.
The test `System.IO.Tests.Perf_Directory/RecursiveCreateDirectory(depth: 100, times: 10)` has failed.
System.IO.PathTooLongException : The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
Stack Trace:
at System.IO.PathHelper.GetFullPathName()
at System.IO.Path.LegacyNormalizePath(String path, Boolean fullCheck, Int32 maxPathLength, Boolean expandShortPaths)
at System.IO.Path.GetFullPathInternal(String path)
at System.IO.Directory.InternalCreateDirectoryHelper(String path, Boolean checkHost)
at System.IO.Tests.Perf_Directory.RecursiveCreateDirectory(Int32 depth, Int32 times)
Build : Master - 20180323.06 (Full Framework Tests)
Failing configurations:
- Windows.10.Amd64-x64
- Release
- Windows.10.Amd64-x86
- Release
|
non_process
|
test failure system io tests perf directory recursivecreatedirectory depth times anipik please disable recursivecreatedirectory perf test for net framework as it hits path limit the test system io tests perf directory recursivecreatedirectory depth times has failed system io pathtoolongexception the specified path file name or both are too long the fully qualified file name must be less than characters and the directory name must be less than characters stack trace at system io pathhelper getfullpathname at system io path legacynormalizepath string path boolean fullcheck maxpathlength boolean expandshortpaths at system io path getfullpathinternal string path at system io directory internalcreatedirectoryhelper string path boolean checkhost at system io tests perf directory recursivecreatedirectory depth times build master full framework tests failing configurations windows release windows release
| 0
|
6,802
| 9,941,172,161
|
IssuesEvent
|
2019-07-03 10:54:34
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Add background color as a parameter when generating xyz tile
|
Feature Request Processing
|
With this PR https://github.com/qgis/QGIS/pull/30477 it is now possible to assign the background color as a parameter.
|
1.0
|
Add background color as a parameter when generating xyz tile - With this PR https://github.com/qgis/QGIS/pull/30477 it is now possible to assign the background color as a parameter.
|
process
|
add background color as a parameter when generating xyz tile with this pr it is now possible to assign the background color as a parameter
| 1
|
264,409
| 28,179,913,974
|
IssuesEvent
|
2023-04-04 01:03:18
|
MidnightBSD/src
|
https://api.github.com/repos/MidnightBSD/src
|
reopened
|
CVE-2023-28531 (High) detected in freebsd-srcrelease/12.4.0
|
Mend: dependency security vulnerability
|
## CVE-2023-28531 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>freebsd-srcrelease/12.4.0</b></p></summary>
<p>
<p>FreeBSD src tree (read-only mirror)</p>
<p>Library home page: <a href=https://github.com/freebsd/freebsd-src.git>https://github.com/freebsd/freebsd-src.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/crypto/openssh/authfd.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ssh-add in OpenSSH before 9.3 adds smartcard keys to ssh-agent without the intended per-hop destination constraints.
<p>Publish Date: 2023-03-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-28531>CVE-2023-28531</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-28531">https://www.cve.org/CVERecord?id=CVE-2023-28531</a></p>
<p>Release Date: 2023-03-17</p>
<p>Fix Resolution: V_9_3_P1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2023-28531 (High) detected in freebsd-srcrelease/12.4.0 - ## CVE-2023-28531 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>freebsd-srcrelease/12.4.0</b></p></summary>
<p>
<p>FreeBSD src tree (read-only mirror)</p>
<p>Library home page: <a href=https://github.com/freebsd/freebsd-src.git>https://github.com/freebsd/freebsd-src.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/crypto/openssh/authfd.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ssh-add in OpenSSH before 9.3 adds smartcard keys to ssh-agent without the intended per-hop destination constraints.
<p>Publish Date: 2023-03-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-28531>CVE-2023-28531</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-28531">https://www.cve.org/CVERecord?id=CVE-2023-28531</a></p>
<p>Release Date: 2023-03-17</p>
<p>Fix Resolution: V_9_3_P1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in freebsd srcrelease cve high severity vulnerability vulnerable library freebsd srcrelease freebsd src tree read only mirror library home page a href found in base branch master vulnerable source files crypto openssh authfd c vulnerability details ssh add in openssh before adds smartcard keys to ssh agent without the intended per hop destination constraints publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution v step up your open source security game with mend
| 0
|
195,379
| 14,726,125,297
|
IssuesEvent
|
2021-01-06 06:13:53
|
kubernetes-sigs/azuredisk-csi-driver
|
https://api.github.com/repos/kubernetes-sigs/azuredisk-csi-driver
|
closed
|
don't run e2e test with doc change
|
help wanted kind/test
|
**Is your feature request related to a problem?/Why is this needed**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like in detail**
<!-- A clear and concise description of what you want to happen. -->
don't run e2e test with doc change (`*.md` file), could refer to https://github.com/kubernetes/test-infra/pull/17800
make change into https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes-sigs/azuredisk-csi-driver/azuredisk-csi-driver-config.yaml
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
|
1.0
|
don't run e2e test with doc change - **Is your feature request related to a problem?/Why is this needed**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like in detail**
<!-- A clear and concise description of what you want to happen. -->
don't run e2e test with doc change (`*.md` file), could refer to https://github.com/kubernetes/test-infra/pull/17800
make change into https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes-sigs/azuredisk-csi-driver/azuredisk-csi-driver-config.yaml
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
|
non_process
|
don t run test with doc change is your feature request related to a problem why is this needed describe the solution you d like in detail don t run test with doc change md file could refer to make change into describe alternatives you ve considered additional context
| 0
|
2,173
| 5,027,452,550
|
IssuesEvent
|
2016-12-15 15:35:17
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
opened
|
Date picker not displayed within user task
|
browser: all bug comp: activiti-processList
|
When clicking on the date picker within a form attached to a user task within a process it does not display
1. Import below app
2. Start process
3. Complete start form attached to start event
4. Go to active tasks and click on user task
5. Click date picker
<img width="629" alt="screen shot 2016-12-15 at 15 21 44" src="https://cloud.githubusercontent.com/assets/13200338/21230067/a62efa18-c2db-11e6-97d5-f63085280c6a.png">
[no typeaheads.zip](https://github.com/Alfresco/alfresco-ng2-components/files/655039/no.typeaheads.zip)
|
1.0
|
Date picker not displayed within user task - When clicking on the date picker within a form attached to a user task within a process it does not display
1. Import below app
2. Start process
3. Complete start form attached to start event
4. Go to active tasks and click on user task
5. Click date picker
<img width="629" alt="screen shot 2016-12-15 at 15 21 44" src="https://cloud.githubusercontent.com/assets/13200338/21230067/a62efa18-c2db-11e6-97d5-f63085280c6a.png">
[no typeaheads.zip](https://github.com/Alfresco/alfresco-ng2-components/files/655039/no.typeaheads.zip)
|
process
|
date picker not displayed within user task when clicking on the date picker within a form attached to a user task within a process it does not display import below app start process complete start form attached to start event go to active tasks and click on user task click date picker img width alt screen shot at src
| 1
|
1,438
| 4,005,112,886
|
IssuesEvent
|
2016-05-12 10:05:24
|
DynareTeam/dynare
|
https://api.github.com/repos/DynareTeam/dynare
|
closed
|
allow to skip 2nd order param derivs
|
performance preprocessor
|
For large models, computing the analytic Hessian just produces a too big `_params_derivs.m` file which is untractable.
For running identification or only computing scores, 2nd order derivs would be useless.
@houtanb Would it be possible to tell the pre-processor to only compute 1st order derivatives, i.e. stop writing the `param_derivs` file when it reaches
`if nargout >= 3 ...`
?
2nd order terms could be just defined as empty, and routines (`dsge_likelihood`) possibly asking for 2nd order ones will trap this and just stop.
|
1.0
|
allow to skip 2nd order param derivs - For large models, computing the analytic Hessian just produces a too big `_params_derivs.m` file which is untractable.
For running identification or only computing scores, 2nd order derivs would be useless.
@houtanb Would it be possible to tell the pre-processor to only compute 1st order derivatives, i.e. stop writing the `param_derivs` file when it reaches
`if nargout >= 3 ...`
?
2nd order terms could be just defined as empty, and routines (`dsge_likelihood`) possibly asking for 2nd order ones will trap this and just stop.
|
process
|
allow to skip order param derivs for large models computing the analytic hessian just produces a too big params derivs m file which is untractable for running identification or only computing scores order derivs would be useless houtanb would it be possible to tell the pre processor to only compute order derivatives i e stop writing the param derivs file when it reaches if nargout order terms could be just defined as empty and routines dsge likelihood possibly asking for order ones will trap this and just stop
| 1
|
243,674
| 26,287,388,883
|
IssuesEvent
|
2023-01-08 01:04:14
|
snowdensb/braindump
|
https://api.github.com/repos/snowdensb/braindump
|
reopened
|
CVE-2020-7676 (Medium) detected in angular-1.2.20.min.js
|
security vulnerability
|
## CVE-2020-7676 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-1.2.20.min.js</b></p></summary>
<p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.2.20/angular.min.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.2.20/angular.min.js</a></p>
<p>Path to dependency file: /node_modules/bootstrap-tagsinput/examples/bootstrap-2.3.2.html</p>
<p>Path to vulnerable library: /node_modules/bootstrap-tagsinput/examples/bootstrap-2.3.2.html</p>
<p>
Dependency Hierarchy:
- :x: **angular-1.2.20.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/braindump/commit/815ae0afebcf867f02143f3ab9cf88b1d4dacdec">815ae0afebcf867f02143f3ab9cf88b1d4dacdec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
angular.js prior to 1.8.0 allows cross site scripting. The regex-based input HTML replacement may turn sanitized code into unsanitized one. Wrapping "<option>" elements in "<select>" ones changes parsing behavior, leading to possibly unsanitizing code.
<p>Publish Date: 2020-06-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7676>CVE-2020-7676</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7676">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7676</a></p>
<p>Release Date: 2020-10-09</p>
<p>Fix Resolution: 1.8.0</p>
</p>
</details>
<p></p>
|
True
|
CVE-2020-7676 (Medium) detected in angular-1.2.20.min.js - ## CVE-2020-7676 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-1.2.20.min.js</b></p></summary>
<p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.2.20/angular.min.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.2.20/angular.min.js</a></p>
<p>Path to dependency file: /node_modules/bootstrap-tagsinput/examples/bootstrap-2.3.2.html</p>
<p>Path to vulnerable library: /node_modules/bootstrap-tagsinput/examples/bootstrap-2.3.2.html</p>
<p>
Dependency Hierarchy:
- :x: **angular-1.2.20.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/braindump/commit/815ae0afebcf867f02143f3ab9cf88b1d4dacdec">815ae0afebcf867f02143f3ab9cf88b1d4dacdec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
angular.js prior to 1.8.0 allows cross site scripting. The regex-based input HTML replacement may turn sanitized code into unsanitized one. Wrapping "<option>" elements in "<select>" ones changes parsing behavior, leading to possibly unsanitizing code.
<p>Publish Date: 2020-06-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7676>CVE-2020-7676</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7676">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7676</a></p>
<p>Release Date: 2020-10-09</p>
<p>Fix Resolution: 1.8.0</p>
</p>
</details>
<p></p>
|
non_process
|
cve medium detected in angular min js cve medium severity vulnerability vulnerable library angular min js angularjs is an mvc framework for building web applications the core features include html enhanced with custom component and data binding capabilities dependency injection and strong focus on simplicity testability maintainability and boiler plate reduction library home page a href path to dependency file node modules bootstrap tagsinput examples bootstrap html path to vulnerable library node modules bootstrap tagsinput examples bootstrap html dependency hierarchy x angular min js vulnerable library found in head commit a href found in base branch master vulnerability details angular js prior to allows cross site scripting the regex based input html replacement may turn sanitized code into unsanitized one wrapping elements in ones changes parsing behavior leading to possibly unsanitizing code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
| 0
|
1,833
| 4,630,197,096
|
IssuesEvent
|
2016-09-28 12:00:27
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
opened
|
Link to research summaries in HRA website
|
0. Ready for Analysis Collectors Explorer Processors
|
Our HRA collector/processor uses their internal API to extract the data. This makes it much easier to write and maintain, but then our `source_url` points to the API url, not the human-readable url (e.g. https://stage.harp.org.uk/HARPApiExternal/api/ResearchSummaries instead of http://www.hra.nhs.uk/news/research-summaries/longterm-fu-study-of-botox-in-idiopathic-overactive-bladder-patients/).
# Tasks
- [ ] Change the HRA `source_url` to point to the human-readable URL in the HRA website (e.g. http://www.hra.nhs.uk/news/research-summaries/longterm-fu-study-of-botox-in-idiopathic-overactive-bladder-patients/)
- [ ] Create new "Research summaries" section in the trial page sidebar with a link to the HRA research summary named "NHS Health Research Authority (HRA)"
|
1.0
|
Link to research summaries in HRA website - Our HRA collector/processor uses their internal API to extract the data. This makes it much easier to write and maintain, but then our `source_url` points to the API url, not the human-readable url (e.g. https://stage.harp.org.uk/HARPApiExternal/api/ResearchSummaries instead of http://www.hra.nhs.uk/news/research-summaries/longterm-fu-study-of-botox-in-idiopathic-overactive-bladder-patients/).
# Tasks
- [ ] Change the HRA `source_url` to point to the human-readable URL in the HRA website (e.g. http://www.hra.nhs.uk/news/research-summaries/longterm-fu-study-of-botox-in-idiopathic-overactive-bladder-patients/)
- [ ] Create new "Research summaries" section in the trial page sidebar with a link to the HRA research summary named "NHS Health Research Authority (HRA)"
|
process
|
link to research summaries in hra website our hra collector processor uses their internal api to extract the data this makes it much easier to write and maintain but then our source url points to the api url not the human readable url e g instead of tasks change the hra source url to point to the human readable url in the hra website e g create new research summaries section in the trial page sidebar with a link to the hra research summary named nhs health research authority hra
| 1
|
159,485
| 13,765,654,563
|
IssuesEvent
|
2020-10-07 13:40:34
|
excellent-react/form
|
https://api.github.com/repos/excellent-react/form
|
closed
|
Update Documentation
|
documentation good first issue
|
- [ ] Update GitHub/Npm Readme
- [ ] Add Documentation on documentation web app
- [ ] Fix Netlify CI/CD
Documentation Needs to include
----
- [ ] Basic Example ( footprint compare )
- [ ] Watch Mode (`input` and `change mode` comparison )
- [ ] Custom form input watch ( `react-select` )
- [ ] Custom react component handler
- [ ] Validation
- [ ] Input multiple values Handle
- [ ] Supports various UI (ex. `Material-ui`, `chakra-ui`, etc. and Custom react input components ex. `react-select`)
|
1.0
|
Update Documentation - - [ ] Update GitHub/Npm Readme
- [ ] Add Documentation on documentation web app
- [ ] Fix Netlify CI/CD
Documentation Needs to include
----
- [ ] Basic Example ( footprint compare )
- [ ] Watch Mode (`input` and `change mode` comparison )
- [ ] Custom form input watch ( `react-select` )
- [ ] Custom react component handler
- [ ] Validation
- [ ] Input multiple values Handle
- [ ] Supports various UI (ex. `Material-ui`, `chakra-ui`, etc. and Custom react input components ex. `react-select`)
|
non_process
|
update documentation update github npm readme add documentation on documentation web app fix netlify ci cd documentation needs to include basic example footprint compare watch mode input and change mode comparison custom form input watch react select custom react component handler validation input multiple values handle supports various ui ex material ui chakra ui etc and custom react input components ex react select
| 0
|
29,360
| 7,091,556,443
|
IssuesEvent
|
2018-01-12 13:34:05
|
ppy/osu
|
https://api.github.com/repos/ppy/osu
|
opened
|
Expose an IsHit property from DrawableHitObject
|
code quality
|
Essentially should be something like `IsHit => Judgements.Any(h => h.IsFinal && h.IsHit) && NestedHitObjects.All(n => n.IsHit)`
|
1.0
|
Expose an IsHit property from DrawableHitObject - Essentially should be something like `IsHit => Judgements.Any(h => h.IsFinal && h.IsHit) && NestedHitObjects.All(n => n.IsHit)`
|
non_process
|
expose an ishit property from drawablehitobject essentially should be something like ishit judgements any h h isfinal h ishit nestedhitobjects all n n ishit
| 0
|
85,496
| 24,610,793,465
|
IssuesEvent
|
2022-10-14 21:11:24
|
MicrosoftDocs/visualstudio-docs
|
https://api.github.com/repos/MicrosoftDocs/visualstudio-docs
|
closed
|
end tag for CopyToOutputDirectory has typo error
|
doc-bug visual-studio-windows/prod msbuild/tech Pri2
|
typo error
`<CopyToOutputDirectory>PreserveNewest<CopyToOutputDirectory>`
for `<CopyToOutputDirectory>` tag , end tag is wrongly typed as `<CopyToOutputDirectory>`
it should have been
`</CopyToOutputDirectory>`
So entire line should have been .....
`<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>`
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 54968ac1-07eb-8537-1a9d-13dccc31f259
* Version Independent ID: af818105-b95d-8f5a-8152-a4414962921d
* Content: [MSB3577: Two output file names resolved to the same output path: 'path' - MSBuild](https://learn.microsoft.com/en-us/visualstudio/msbuild/errors/msb3577?view=vs-2022)
* Content Source: [docs/msbuild/errors/msb3577.md](https://github.com/MicrosoftDocs/visualstudio-docs/blob/main/docs/msbuild/errors/msb3577.md)
* Product: **visual-studio-windows**
* Technology: **msbuild**
* GitHub Login: @ghogen
* Microsoft Alias: **ghogen**
|
1.0
|
end tag for CopyToOutputDirectory has typo error - typo error
`<CopyToOutputDirectory>PreserveNewest<CopyToOutputDirectory>`
for `<CopyToOutputDirectory>` tag , end tag is wrongly typed as `<CopyToOutputDirectory>`
it should have been
`</CopyToOutputDirectory>`
So entire line should have been .....
`<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>`
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 54968ac1-07eb-8537-1a9d-13dccc31f259
* Version Independent ID: af818105-b95d-8f5a-8152-a4414962921d
* Content: [MSB3577: Two output file names resolved to the same output path: 'path' - MSBuild](https://learn.microsoft.com/en-us/visualstudio/msbuild/errors/msb3577?view=vs-2022)
* Content Source: [docs/msbuild/errors/msb3577.md](https://github.com/MicrosoftDocs/visualstudio-docs/blob/main/docs/msbuild/errors/msb3577.md)
* Product: **visual-studio-windows**
* Technology: **msbuild**
* GitHub Login: @ghogen
* Microsoft Alias: **ghogen**
|
non_process
|
end tag for copytooutputdirectory has typo error typo error preservenewest for tag end tag is wrongly typed as it should have been so entire line should have been preservenewest document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source product visual studio windows technology msbuild github login ghogen microsoft alias ghogen
| 0
|
3,575
| 6,617,854,402
|
IssuesEvent
|
2017-09-21 04:46:49
|
econtoolkit/continuous_time_methods
|
https://api.github.com/repos/econtoolkit/continuous_time_methods
|
opened
|
Consider formalization of this trick for solving for the KFE
|
Stochastic Processes theory
|
From Ben:
Simply fix one element of the vector f, then solve the linear system (which is no longer singular) and the renormalize f afterwards. See e.g. this code snippet from http://www.princeton.edu/~moll/HACTproject/huggett_partialeq.m
```
AT = A';
b = zeros(2*I,1);
%need to fix one value, otherwise matrix is singular
i_fix = 1;
b(i_fix)=.1;
row = [zeros(1,i_fix-1),1,zeros(1,2*I-i_fix)];
AT(i_fix,:) = row;
%Solve linear system
gg = AT\b;
g_sum = gg'*ones(2*I,1)*da;
gg = gg./g_sum;
```
The question will be whether this approach can be generalized in a nonlinear setup.
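For reference, a minimal NumPy transcription of the snippet above, with a synthetic generator-style matrix so it runs standalone (in practice `A` comes from the discretized operator):
```python
import numpy as np

I, da = 50, 0.1
rng = np.random.default_rng(0)
A = rng.random((2 * I, 2 * I))
np.fill_diagonal(A, 0.0)
np.fill_diagonal(A, -A.sum(axis=1))  # rows sum to zero, so A' is singular

AT = A.T.copy()
b = np.zeros(2 * I)

i_fix = 0                  # fix one value, otherwise the matrix is singular
b[i_fix] = 0.1
AT[i_fix, :] = 0.0
AT[i_fix, i_fix] = 1.0

g = np.linalg.solve(AT, b)
g /= g.sum() * da          # renormalize so the density integrates to one
print(g.sum() * da)        # 1.0
```
Whether this row-replacement-plus-renormalization survives a nonlinear setting is exactly the question raised above, since there the normalization is no longer a linear post-step.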
|
1.0
|
Consider formalization of this trick for solving for the KFE - From Ben:
Simply fix one element of the vector f, then solve the linear system (which is no longer singular) and the renormalize f afterwards. See e.g. this code snippet from http://www.princeton.edu/~moll/HACTproject/huggett_partialeq.m
```
AT = A';
b = zeros(2*I,1);
%need to fix one value, otherwise matrix is singular
i_fix = 1;
b(i_fix)=.1;
row = [zeros(1,i_fix-1),1,zeros(1,2*I-i_fix)];
AT(i_fix,:) = row;
%Solve linear system
gg = AT\b;
g_sum = gg'*ones(2*I,1)*da;
gg = gg./g_sum;
```
The question will be whether this approach can be generalized in a nonlinear setup.
|
process
|
consider formalization of this trick for solving for the kfe from ben simply fix one element of the vector f then solve the linear system which is no longer singular and the renormalize f afterwards see e g this code snippet from at a b zeros i need to fix one value otherwise matrix is singular i fix b i fix row at i fix row solve linear system gg at b g sum gg ones i da gg gg g sum the question will be whether this approach can be generalized in a nonlinear setup
| 1
|
8,044
| 11,218,453,461
|
IssuesEvent
|
2020-01-07 11:31:55
|
Starcounter/Home
|
https://api.github.com/repos/Starcounter/Home
|
closed
|
How does Starcounter's Schedulers work with respect to Starcounter.Session?
|
hosting hosting-single-process investigate question
|
Starcounter version: `<2.3.1.8415>`.
### Issue type
- [x] Bug
- [ ] Feature request
- [ ] Suggestion
- [X] Question
- [ ] Cannot reproduce
- [ ] Urgent
### Issue description
Continuation of #462. While investigating further into #462 I started suspecting that the transactions were not the cause for requests being blocked. As stated in #462 I tried to split a very time consuming transaction into smaller pieces and experienced an improvement. This turned out to be random though. Even with that change, requests get blocked, randomly it seemed at first.
Things noted:
- One request may be blocked while another one isn't, simultaneously.
- If I stop all threads in the debugger when blocking occurs, there is actually only one thread executing code to serve a request.
So, I started suspecting that the requests are not allowed to run at all when blocked. In that case it may be an issue with how requests are scheduled. I realize that I may have used the term "request" in the wrong sense here. What I'm talking about is "XSON processing an incoming JSON PATCH".
It seems that each `Starcounter.Session` is permanently assigned a Scheduler Id, which I assume corresponds to a worker thread. It also seems that many sessions share the same Scheduler Id. Am I correct then to assume that processing of incoming patches for all sessions sharing the same Scheduler Id will actually execute on the same thread?
That would explain the behavior I'm seeing. Next question: What would be the best course to deal with this?
__Update__: "requests" are actually a mix of "XSON processing an incoming JSON PATH" and `Handle.GET()` delgated producing responses without upgrading to XSON websockes.
|
1.0
|
How does Starcounter's Schedulers work with respect to Starcounter.Session? - Starcounter version: `<2.3.1.8415>`.
### Issue type
- [x] Bug
- [ ] Feature request
- [ ] Suggestion
- [X] Question
- [ ] Cannot reproduce
- [ ] Urgent
### Issue description
Continuation of #462. While investigating further into #462 I started suspecting that the transactions were not the cause for requests being blocked. As stated in #462 I tried to split a very time consuming transaction into smaller pieces and experienced an improvement. This turned out to be random though. Even with that change, requests get blocked, randomly it seemed at first.
Things noted:
- One request may be blocked while another one isn't, simultaneously.
- If I stop all threads in the debugger when blocking occurs, there is actually only one thread executing code to serve a request.
So, I started suspecting that the requests are not allowed to run at all when blocked. In that case it may be an issue with how requests are scheduled. I realize that I may have used the term "request" in the wrong sense here. What I'm talking about is "XSON processing an incoming JSON PATCH".
It seems that each `Starcounter.Session` is permanently assigned a Scheduler Id, which I assume corresponds to a worker thread. It also seems that many sessions share the same Scheduler Id. Am I correct then to assume that processing of incoming patches for all sessions sharing the same Scheduler Id will actually execute on the same thread?
That would explain the behavior I'm seeing. Next question: What would be the best course to deal with this?
__Update__: "requests" are actually a mix of "XSON processing an incoming JSON PATH" and `Handle.GET()` delgated producing responses without upgrading to XSON websockes.
|
process
|
how does starcounter s schedulers work with respect to starcounter session starcounter version issue type bug feature request suggestion question cannot reproduce urgent issue description continuation of while investigating further into i started suspecting that the transactions were not the cause for requests being blocked as stated in i tried to split a very time consuming transaction into smaller pieces and experienced an improvement this turned out to be random though even with that change requests get blocked randomly it seemed at first things noted one request may be blocked while another one isn t simultaneously if i stop all threads in the debugger when blocking occurs there is actually only one thread executing code to serve a request so i started suspecting that the requests are not allowed to run at all when blocked in that case it may be an issue with how requests are scheduled i realize that i may have used the term request in the wrong sense here what i m talking about is xson processing an incoming json patch it seems that each starcounter session is permanently assigned a scheduler id which i assume corresponds to a worker thread it also seems that many sessions share the same scheduler id am i correct then to assume that processing of incoming patches for all sessions sharing the same scheduler id will actually execute on the same thread that would explain the behavior i m seeing next question what would be the best course to deal with this update requests are actually a mix of xson processing an incoming json patch and handle get delegates producing responses without upgrading to xson websockets
| 1
|
593,028
| 17,936,294,603
|
IssuesEvent
|
2021-09-10 15:43:48
|
Journaly/journaly
|
https://api.github.com/repos/Journaly/journaly
|
closed
|
Include search filters on "My Posts" page
|
good first issue frontend low priority
|
#### Perceived Problem
- Some users who have written a lot of posts have reported that it would be helpful to have the filters on the My Feed page also in their My Posts page
#### Ideas / Proposed Solution(s)
- Fortunately, we've already refactored the filters out into `FeedHeader.tsx`
- Let's include that on the my posts page
|
1.0
|
Include search filters on "My Posts" page - #### Perceived Problem
- Some users who have written a lot of posts have reported that it would be helpful to have the filters on the My Feed page also in their My Posts page
#### Ideas / Proposed Solution(s)
- Fortunately, we've already refactored the filters out into `FeedHeader.tsx`
- Let's include that on the my posts page
|
non_process
|
include search filters on my posts page perceived problem some users who have written a lot of posts have reported that it would be helpful to have the filters on the my feed page also in their my posts page ideas proposed solution s fortunately we ve already refactored the filters out into feedheader tsx let s include that on the my posts page
| 0
|
9,474
| 12,467,823,686
|
IssuesEvent
|
2020-05-28 17:44:58
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Expose 3d:tessellate processing tool in PyQgis
|
Bug Processing
|
**Feature description.**
Proposition to create the appropriate SIP bindings to expose the C++ function to Python, so that the tessellate processing tool is visible from PyQGIS.
**Additional context**
We need to use the processing tool 3d:tessellate in a QGIS standalone processing script. Presently, when we try to use it gives the following error
```
Traceback (most recent call last):
File "C:/Users/berge/PycharmProjects/qgis_geo_sim/runner_chordal_axis.py", line 19, in <module>
processing.run("3d:tessellate", {'INPUT': 'D:/OneDrive/Personnel/Daniel/QGIS/Kingston/Kingston_sub9.gpkg|layername=Road', 'OUTPUT': 'TEMPORARY_OUTPUT'})
File "C:\OSGeo4W64\apps\qgis\python\plugins\processing\tools\general.py", line 106, in run
return Processing.runAlgorithm(algOrName, parameters, onFinish, feedback, context)
File "C:\OSGeo4W64\apps\qgis\python\plugins\processing\core\Processing.py", line 125, in runAlgorithm
raise QgsProcessingException(msg)
_core.QgsProcessingException: Error: Algorithm 3d:tessellate not found
```
The following question has been published on [GIS Exchange related to this issue](https://gis.stackexchange.com/questions/362584/qgis-standalone-application-in-python-cant-access-3dtessellate-processing-tool)
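For context, a standalone script has to register processing providers explicitly; the native provider has a Python-visible class (`QgsNativeAlgorithms`), and this request is effectively for the same treatment for the 3D provider. A sketch of the bootstrap, with install-specific paths as assumptions:
```python
import sys
from qgis.core import QgsApplication
from qgis.analysis import QgsNativeAlgorithms

QgsApplication.setPrefixPath("/usr", True)         # assumption: Linux install
app = QgsApplication([], False)
app.initQgis()

sys.path.append("/usr/share/qgis/python/plugins")  # assumption: plugin path
from processing.core.Processing import Processing
Processing.initialize()
QgsApplication.processingRegistry().addProvider(QgsNativeAlgorithms())

# Without a way to register the 3D provider, this stays empty:
print([a.id() for a in QgsApplication.processingRegistry().algorithms()
       if a.id().startswith("3d:")])
```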
|
1.0
|
Expose 3d:tessellate processing tool in PyQgis - **Feature description.**
Proposition to create the appropriate SIP bindings to expose the C++ function to Python, so that the tessellate processing tool is visible from PyQGIS.
**Additional context**
We need to use the processing tool 3d:tessellate in a QGIS standalone processing script. Presently, when we try to use it gives the following error
```
Traceback (most recent call last):
File "C:/Users/berge/PycharmProjects/qgis_geo_sim/runner_chordal_axis.py", line 19, in <module>
processing.run("3d:tessellate", {'INPUT': 'D:/OneDrive/Personnel/Daniel/QGIS/Kingston/Kingston_sub9.gpkg|layername=Road', 'OUTPUT': 'TEMPORARY_OUTPUT'})
File "C:\OSGeo4W64\apps\qgis\python\plugins\processing\tools\general.py", line 106, in run
return Processing.runAlgorithm(algOrName, parameters, onFinish, feedback, context)
File "C:\OSGeo4W64\apps\qgis\python\plugins\processing\core\Processing.py", line 125, in runAlgorithm
raise QgsProcessingException(msg)
_core.QgsProcessingException: Error: Algorithm 3d:tessellate not found
```
The following question has been published on [GIS Exchange related to this issue](https://gis.stackexchange.com/questions/362584/qgis-standalone-application-in-python-cant-access-3dtessellate-processing-tool)
|
process
|
expose tessellate processing tool in pyqgis feature description proposition to create the appropriate sip bindings to expose the c function to python so that the tessellate processing tool is visible from pyqgis additional context we need to use the processing tool tessellate in a qgis standalone processing script presently when we try to use it gives the following error traceback most recent call last file c users berge pycharmprojects qgis geo sim runner chordal axis py line in processing run tessellate input d onedrive personnel daniel qgis kingston kingston gpkg layername road output temporary output file c apps qgis python plugins processing tools general py line in run return processing runalgorithm algorname parameters onfinish feedback context file c apps qgis python plugins processing core processing py line in runalgorithm raise qgsprocessingexception msg core qgsprocessingexception error algorithm tessellate not found the following question has been published on
| 1
|
27,821
| 5,108,632,581
|
IssuesEvent
|
2017-01-05 18:17:51
|
ehynds/jquery-ui-multiselect-widget
|
https://api.github.com/repos/ehynds/jquery-ui-multiselect-widget
|
closed
|
Cursor trap for keyboard-only users
|
defect
|
When using TAB to access select boxes, if a user uses Shift+TAB to reverse direction, the drop-down closes, and the cursor sticks to whatever option was previously selected.
Using the keyboard to select/deselect the option is still possible (one can see the number of options selected changing as the checkbox is toggled, even though it is no longer visible), but tabbing away from the drop-down is no longer possible. The mouse is the only way out.
Perhaps there's a way to detect when the drop-down closes, and to transport the cursor back to main select box object.
|
1.0
|
Cursor trap for keyboard-only users - When using TAB to access select boxes, if a user uses Shift+TAB to reverse direction, the drop-down closes, and the cursor sticks to whatever option was previously selected.
Using the keyboard to select/deselect the option is still possible (one can see the number of options selected changing as the checkbox is toggled, even though it is no longer visible), but tabbing away from the drop-down is no longer possible. The mouse is the only way out.
Perhaps there's a way to detect when the drop-down closes, and to transport the cursor back to main select box object.
|
non_process
|
cursor trap for keyboard only users when using tab to access select boxes if a user uses shift tab to reverse direction the drop down closes and the cursor sticks to whatever option was previously selected using the keyboard to select deselect the option is still possible one can see the number of options selected changing as the checkbox is toggled even though it is no longer visible but tabbing away from the drop down is no longer possible the mouse is the only way out perhaps there s a way to detect when the drop down closes and to transport the cursor back to main select box object
| 0
|
1,591
| 4,187,656,223
|
IssuesEvent
|
2016-06-23 18:11:06
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
opened
|
NCT00598182 and NCT00758160 are treated as the same trial
|
Data cleaning Processors
|
See http://explorer.opentrials.net/trials/8acd7593-8a47-4236-8351-880b401873ef, one of the sources point to NCT00598182 and the other points to NCT00758160
It appears both trials are related, but they don't seem to be the same.
|
1.0
|
NCT00598182 and NCT00758160 are treated as the same trial - See http://explorer.opentrials.net/trials/8acd7593-8a47-4236-8351-880b401873ef, one of the sources point to NCT00598182 and the other points to NCT00758160
It appears both trials are related, but they don't seem to be the same.
|
process
|
and are treated as the same trial see one of the sources point to and the other points to it appears both trials are related but they don t seem to be the same
| 1
|
553,368
| 16,370,958,638
|
IssuesEvent
|
2021-05-15 05:07:13
|
edwisely-ai/Relationship-Management
|
https://api.github.com/repos/edwisely-ai/Relationship-Management
|
closed
|
Call with SMGOIH - H&S HOD, VP
|
Priority High
|
Informed about the section change and shared the whatsapp messages
|
1.0
|
Call with SMGOIH - H&S HOD, VP - Informed about the section change and shared the whatsapp messages
|
non_process
|
call with smgoih h s hod vp informed about the section change and shared the whatsapp messages
| 0
|
430
| 2,859,770,195
|
IssuesEvent
|
2015-06-03 12:44:50
|
tomchristie/django-rest-framework
|
https://api.github.com/repos/tomchristie/django-rest-framework
|
closed
|
Managing Transifex and translations
|
Process
|
Not sure if it's possible but I think that the maintenance team should be able to manage the Transifex project to review and handle requests to join translations. It might also be part of the release process and release manager's responsibility to download any new translations before a new release.
|
1.0
|
Managing Transifex and translations - Not sure if it's possible but I think that the maintenance team should be able to manage the Transifex project to review and handle requests to join translations. It might also be part of the release process and release manager's responsibility to download any new translations before a new release.
|
process
|
managing transifex and translations not sure if it s possible but i think that the maintenance team should be able to manage the transifex project to review and handle requests to join translations it might also be part of the release process and release manager s responsibility to download any new translations before a new release
| 1
|
159,225
| 24,959,501,308
|
IssuesEvent
|
2022-11-01 14:29:29
|
department-of-veterans-affairs/vets-design-system-documentation
|
https://api.github.com/repos/department-of-veterans-affairs/vets-design-system-documentation
|
opened
|
[Alerts] - Reorganize Alert guidance on design.va.gov
|
vsp-design-system-team va-alert va-alert-expandable
|
## Description
Reorganize Alert guidance to include Alert-expandable as a variation and create child pages for alert variations similarly to the way [Link](https://design.va.gov/components/link/) or [Form](https://design.va.gov/components/form/) sections are organized on design.va.gov. Note that Form contains an Overview page and Link does not. We should consider using an Overview page for Alert.
## Details
We should also address the different ways "Info" alerts are used, including referencing Featured content, which is visually similar to the Background-only Info alert. Quote from @humancompanion-usds: "It might be worth breaking Alert - Info and all variations of it out into a distinct component. Because Alert - Info, background-only Alert info, Alert - Expandable info, and Featured content are all basically variations of the same thing."
## Tasks
- [ ] Write or update component documentation for design.va.gov
- [ ] Coordinate with accessibility specialist to create or update accessibility guidance
- [ ] Request documentation draft review
- [ ] Once documentation is approved, publish to design.va.gov
## Acceptance Criteria
- [ ] Component documentation is published on design.va.gov
|
1.0
|
[Alerts] - Reorganize Alert guidance on design.va.gov - ## Description
Reorganize Alert guidance to include Alert-expandable as a variation and create child pages for alert variations similarly to the way [Link](https://design.va.gov/components/link/) or [Form](https://design.va.gov/components/form/) sections are organized on design.va.gov. Note that Form contains an Overview page and Link does not. We should consider using an Overview page for Alert.
## Details
We should also address the different ways "Info" alerts are used, including referencing Featured content, which is visually similar to the Background-only Info alert. Quote from @humancompanion-usds: "It might be worth breaking Alert - Info and all variations of it out into a distinct component. Because Alert - Info, background-only Alert info, Alert - Expandable info, and Featured content are all basically variations of the same thing."
## Tasks
- [ ] Write or update component documentation for design.va.gov
- [ ] Coordinate with accessibility specialist to create or update accessibility guidance
- [ ] Request documentation draft review
- [ ] Once documentation is approved, publish to design.va.gov
## Acceptance Criteria
- [ ] Component documentation is published on design.va.gov
|
non_process
|
reorganize alert guidance on design va gov description reorganize alert guidance to include alert expandable as a variation and create child pages for alert variations similarly to the way or sections are organized on design va gov note that form contains an overview page and link does not we should consider using an overview page for alert details we should also address the different ways info alerts are used including referencing featured content which is visually similar to the background only info alert quote from humancompanion usds it might be worth breaking alert info and all variations of it out into a distinct component because alert info background only alert info alert expandable info and featured content are all basically variations of the same thing tasks write or update component documentation for design va gov coordinate with accessibility specialist to create or update accessibility guidance request documentation draft review once documentation is approved publish to design va gov acceptance criteria component documentation is published on design va gov
| 0
|
130,570
| 12,440,352,115
|
IssuesEvent
|
2020-05-26 11:50:31
|
biopython/biopython
|
https://api.github.com/repos/biopython/biopython
|
closed
|
docs - tutorial - blast - UML diagram out of date
|
Documentation help wanted
|
@peterjc the UML diagram at https://github.com/biopython/biopython/blob/master/Doc/Tutorial/chapter_blast.tex#L445
is out of date; a number of fields are missing.
|
1.0
|
docs - tutorial - blast - UML diagram out of date - @peterjc the UML diagram at https://github.com/biopython/biopython/blob/master/Doc/Tutorial/chapter_blast.tex#L445
is out of date; a number of fields are missing.
|
non_process
|
docs tutorial blast uml diagram out of date peterjc the uml diagram at is out of date a number of fields are missing
| 0
|
180,473
| 13,932,994,445
|
IssuesEvent
|
2020-10-22 08:05:47
|
Tribler/py-ipv8
|
https://api.github.com/repos/Tribler/py-ipv8
|
opened
|
Add shortcuts for node attributes in TestBase
|
enhancement tests
|
In many tests we assert that attributes of nodes change in a certain way (mostly their `overlay` attribute). We can adopt the style of `test_identity.py` in `TestBase` to provide convenient shortcuts:
https://github.com/Tribler/py-ipv8/blob/4843ff33b5eaf04c2315d04a1f40c5197fae0a11/ipv8/test/attestation/identity/test_identity.py#L21-L28
A before-and-after example:
```python
# Before
self.nodes[3].overlay.send_to(Peer(self.nodes[4].my_peer.public_key.key_to_bin(), self.nodes[4].endpoint.wan_address), 3)
self.assertEqual(3, self.nodes[4].overlay.something)
# After
self.overlay(3).send_to(self.peer(4), 3)
self.assertEqual(3, self.overlay(4).something)
```
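The helpers themselves would only be a few lines on `TestBase`; a sketch with assumed names, mirroring what the before/after above implies:
```python
# Sketch only; assumes the existing TestBase layout (a self.nodes list whose
# entries expose .overlay, .my_peer and .endpoint, as in current tests).
from ipv8.peer import Peer

class NodeShortcutsMixin:
    def overlay(self, i):
        """Shortcut for self.nodes[i].overlay."""
        return self.nodes[i].overlay

    def peer(self, i):
        """Externally visible Peer for node i."""
        node = self.nodes[i]
        return Peer(node.my_peer.public_key.key_to_bin(),
                    node.endpoint.wan_address)
```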
|
1.0
|
Add shortcuts for node attributes in TestBase - In many tests we assert that attributes of nodes change in a certain way (mostly their `overlay` attribute). We can adopt the style of `test_identity.py` in `TestBase` to provide convenient shortcuts:
https://github.com/Tribler/py-ipv8/blob/4843ff33b5eaf04c2315d04a1f40c5197fae0a11/ipv8/test/attestation/identity/test_identity.py#L21-L28
A before-and-after example:
```python
# Before
self.nodes[3].overlay.send_to(Peer(self.nodes[4].my_peer.public_key.key_to_bin(), self.nodes[4].endpoint.wan_address), 3)
self.assertEqual(3, self.nodes[4].overlay.something)
# After
self.overlay(3).send_to(self.peer(4), 3)
self.assertEqual(3, self.overlay(4).something)
```
|
non_process
|
add shortcuts for node attributes in testbase in many tests we assert that attributes of nodes change in a certain way mostly their overlay attribute we can adopt the style of test identity py in testbase to provide convenient shortcuts a before and after example python before self nodes overlay send to peer self nodes my peer public key key to bin self nodes endpoint wan address self assertequal self nodes overlay something after self overlay send to self peer self assertequal self overlay something
| 0
|
273,424
| 23,750,522,369
|
IssuesEvent
|
2022-08-31 20:08:44
|
harvester/harvester
|
https://api.github.com/repos/harvester/harvester
|
closed
|
[BUG] Increase resource for POD grafana
|
bug require/doc area/ui priority/2 area/monitoring severity/3 require-ui/small not-require/test-plan
|
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
In a single-node cluster, with one guest VM, the Grafana POD's memory usage is close to the limit. It is necessary to increase the limits value.

```
"resources": {
"limits": {
"cpu": "200m",
"memory": "200Mi"
},
"requests": {
"cpu": "100m",
"memory": "100Mi"
}
},
```
**To Reproduce**
Steps to reproduce the behavior:
1. Install Harvester with master-head
2. Create a few VMs
3. Check the memory usage of POD grafana from Harvester metrics.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
Has a proper memory limit for POD grafana.
**Support bundle**
<!--
You can generate a support bundle in the bottom of Harvester UI (https://docs.harvesterhci.io/v1.0/troubleshooting/harvester/#generate-a-support-bundle). It includes logs and configurations that help diagnose the issue.
Tokens, passwords, and secrets are automatically removed from support bundles. If you feel it's not appropriate to share the bundle files publicly, please consider:
- Wait for a developer to reach you and provide the bundle file by any secure methods.
- Join our Slack community (https://rancher-users.slack.com/archives/C01GKHKAG0K) to provide the bundle.
- Send the bundle to harvester-support-bundle@suse.com with the correct issue ID. -->
**Environment**
- Harvester ISO version: V1.0.1
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630): KVM based Cluster
**Additional context**
Add any other context about the problem here.
|
1.0
|
[BUG] Increase resource for POD grafana - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
In a single-node cluster with one guest VM, the Grafana POD's memory usage is close to its limit. The limits value needs to be increased.

```
"resources": {
"limits": {
"cpu": "200m",
"memory": "200Mi"
},
"requests": {
"cpu": "100m",
"memory": "100Mi"
}
},
```
**To Reproduce**
Steps to reproduce the behavior:
1. Install Harvester with master-head
2. Create a few VMs
3. Check the memory usage of POD grafana from Harvester metrics.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
Has a proper memory limit for POD grafana.
**Support bundle**
<!--
You can generate a support bundle in the bottom of Harvester UI (https://docs.harvesterhci.io/v1.0/troubleshooting/harvester/#generate-a-support-bundle). It includes logs and configurations that help diagnose the issue.
Tokens, passwords, and secrets are automatically removed from support bundles. If you feel it's not appropriate to share the bundle files publicly, please consider:
- Wait for a developer to reach you and provide the bundle file by any secure methods.
- Join our Slack community (https://rancher-users.slack.com/archives/C01GKHKAG0K) to provide the bundle.
- Send the bundle to harvester-support-bundle@suse.com with the correct issue ID. -->
**Environment**
- Harvester ISO version: V1.0.1
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630): KVM based Cluster
**Additional context**
Add any other context about the problem here.
|
non_process
|
increase resource for pod grafana describe the bug in a single node cluster with one guest vm the grafana pod s memory usage is close to limit it is necessary to increase the limits value resources limits cpu memory requests cpu memory to reproduce steps to reproduce the behavior install harvester with master head create a few vms check the memory usage of pod grafana from harvester metrics expected behavior has a proper memory limit for pod grafana support bundle you can generate a support bundle in the bottom of harvester ui it includes logs and configurations that help diagnose the issue tokens passwords and secrets are automatically removed from support bundles if you feel it s not appropriate to share the bundle files publicly please consider wait for a developer to reach you and provide the bundle file by any secure methods join our slack community to provide the bundle send the bundle to harvester support bundle suse com with the correct issue id environment harvester iso version underlying infrastructure e g baremetal with dell poweredge kvm based cluster additional context add any other context about the problem here
| 0
|
13,856
| 16,616,023,085
|
IssuesEvent
|
2021-06-02 16:46:07
|
googleapis/python-policy-troubleshooter
|
https://api.github.com/repos/googleapis/python-policy-troubleshooter
|
closed
|
Release as GA
|
api: policytroubleshooter type: process
|
[GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface.
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
1.0
|
Release as GA - [GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface.
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
process
|
release as ga required days elapsed since last beta release with new api surface server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga
| 1
|
85,042
| 7,960,561,063
|
IssuesEvent
|
2018-07-13 07:43:33
|
rust-lang/rust
|
https://api.github.com/repos/rust-lang/rust
|
reopened
|
Inner function cannot be tested
|
A-diagnostics A-libtest C-feature-request
|
I really would like to be able to do this:
``` rust
fn outer(x: i32) -> i32 {
fn inner(a: i32, b: i32) -> i32 {
a + b
}
#[test]
fn test_inner() {
assert_eq!(inner(3, 5), 8);
}
inner(x, 5)
}
#[test]
fn test_outer() {
assert_eq!(outer(3), 8);
}
```
|
1.0
|
Inner function cannot be tested - I really would like to be able to do this:
``` rust
fn outer(x: i32) -> i32 {
fn inner(a: i32, b: i32) -> i32 {
a + b
}
#[test]
fn test_inner() {
assert_eq!(inner(3, 5), 8);
}
inner(x, 5)
}
#[test]
fn test_outer() {
assert_eq!(outer(3), 8);
}
```
|
non_process
|
inner function cannot be tested i really would like to be able to this rust fn outer x fn inner a b a b fn test inner assert eq inner inner x fn test outer assert eq outer
| 0
|
629,710
| 20,050,870,248
|
IssuesEvent
|
2022-02-03 06:10:12
|
latteart-org/latteart
|
https://api.github.com/repos/latteart-org/latteart
|
closed
|
LatteArt cannot be started on different ports when launched via the startup script
|
Type: Bug Priority: Should Effort: 1
|
**Describe the bug**
Even if `13000`, `13001`, and `13002` are specified in `launch.config.json`, running the startup script starts `latteart-capture-cl` and `latteart-repository` on the default ports `3001` and `3002`.
**Desktop (please complete the following information):**
- Browser [e.g. chrome]
**Smartphone (please complete the following information):**
**Additional context**
|
1.0
|
LatteArt cannot be started on different ports when launched via the startup script - **Describe the bug**
Even if `13000`, `13001`, and `13002` are specified in `launch.config.json`, running the startup script starts `latteart-capture-cl` and `latteart-repository` on the default ports `3001` and `3002`.
**Desktop (please complete the following information):**
- Browser [e.g. chrome]
**Smartphone (please complete the following information):**
**Additional context**
|
non_process
|
latteart cannot be started on different ports when launched via the startup script describe the bug even if and are specified in launch config json running the startup script starts latteart capture cl and latteart repository on the default and desktop please complete the following information browser smartphone please complete the following information additional context
| 0
|
407,353
| 11,912,361,317
|
IssuesEvent
|
2020-03-31 10:08:17
|
teambit/bit
|
https://api.github.com/repos/teambit/bit
|
closed
|
bit autodocs should support non-primitive types
|
area/docs-parsing priority/medium type/bug
|
Bit should detect prop 'elevation' for this component:
```tsx
import React from 'react';
import classNames from 'classnames';
import styles from './card.module.scss';
import elevations from './elevations.module.scss';
export type CardProps = {
/**
* Controls the shadow cast by the card, to generate a "stacking" effects.
* For example, a modal floating over elements may have a 'high' elevation
*/
elevation: 'none' | 'low' | 'medium' | 'high';
} & React.HTMLAttributes<HTMLDivElement>;
/**
* A wrapper resembling a physical card, grouping elements and improve readability.
*/
export function Card({ className, elevation, ...rest }: CardProps) {
return (
<div className={classNames(styles.card, elevations[elevation], className)} {...rest} />
);
}
Card.defaultProps = {
elevation: 'low',
};
```
I expect to get these docs:
```
┌────────────────────┬──────────────────────────────────────────────────┐
│ Name │ Card │
├────────────────────┼──────────────────────────────────────────────────┤
│ Description │ A wrapper resembling a physical card, grouping │
│ │ elements and improve readability. │
├────────────────────┼──────────────────────────────────────────────────┤
│ Properties │ (elevation: 'none' | 'low' | 'medium' | 'high') │
└────────────────────┴──────────────────────────────────────────────────┘
```
Actually, I get these docs:
```
┌────────────────────┬──────────────────────────────────────────────────┐
│ Name │ │
├────────────────────┼──────────────────────────────────────────────────┤
│ Description │ Controls the shadow cast by the card, to │
│ │ generate a "stacking" effects. │
│ │ For example, a modal floating over elements may │
│ │ have a 'high' elevation │
└────────────────────┴──────────────────────────────────────────────────┘
┌────────────────────┬──────────────────────────────────────────────────┐
│ Name │ │
├────────────────────┼──────────────────────────────────────────────────┤
│ Description │ A wrapper resembling a physical card, grouping │
│ │ elements and improve readability. │
└────────────────────┴──────────────────────────────────────────────────┘
```
### Specifications
- Bit version: 14.7.6
|
1.0
|
bit autodocs should support non-primitive types - Bit should detect prop 'elevation' for this component:
```tsx
import React from 'react';
import classNames from 'classnames';
import styles from './card.module.scss';
import elevations from './elevations.module.scss';
export type CardProps = {
/**
* Controls the shadow cast by the card, to generate a "stacking" effects.
* For example, a modal floating over elements may have a 'high' elevation
*/
elevation: 'none' | 'low' | 'medium' | 'high';
} & React.HTMLAttributes<HTMLDivElement>;
/**
* A wrapper resembling a physical card, grouping elements and improve readability.
*/
export function Card({ className, elevation, ...rest }: CardProps) {
return (
<div className={classNames(styles.card, elevations[elevation], className)} {...rest} />
);
}
Card.defaultProps = {
elevation: 'low',
};
```
I expect to get these docs:
```
┌────────────────────┬──────────────────────────────────────────────────┐
│ Name │ Card │
├────────────────────┼──────────────────────────────────────────────────┤
│ Description │ A wrapper resembling a physical card, grouping │
│ │ elements and improve readability. │
├────────────────────┼──────────────────────────────────────────────────┤
│ Properties │ (elevation: 'none' | 'low' | 'medium' | 'high') │
└────────────────────┴──────────────────────────────────────────────────┘
```
Actually, I get these docs:
```
┌────────────────────┬──────────────────────────────────────────────────┐
│ Name │ │
├────────────────────┼──────────────────────────────────────────────────┤
│ Description │ Controls the shadow cast by the card, to │
│ │ generate a "stacking" effects. │
│ │ For example, a modal floating over elements may │
│ │ have a 'high' elevation │
└────────────────────┴──────────────────────────────────────────────────┘
┌────────────────────┬──────────────────────────────────────────────────┐
│ Name │ │
├────────────────────┼──────────────────────────────────────────────────┤
│ Description │ A wrapper resembling a physical card, grouping │
│ │ elements and improve readability. │
└────────────────────┴──────────────────────────────────────────────────┘
```
### Specifications
- Bit version: 14.7.6
|
non_process
|
bit autodocs should support non primitive types bit should detect prop elevation for this component tsx import react from react import classnames from classnames import styles from card module scss import elevations from elevations module scss export type cardprops controls the shadow cast by the card to generate a stacking effects for example a modal floating over elements may have a high elevation elevation none low medium high react htmlattributes a wrapper resembling a physical card grouping elements and improve readability export function card classname elevation rest cardprops return card defaultprops elevation low i expect to get these docs ┌────────────────────┬──────────────────────────────────────────────────┐ │ name │ card │ ├────────────────────┼──────────────────────────────────────────────────┤ │ description │ a wrapper resembling a physical card grouping │ │ │ elements and improve readability │ ├────────────────────┼──────────────────────────────────────────────────┤ │ properties │ elevation none low medium high │ └────────────────────┴──────────────────────────────────────────────────┘ actually i get these docs ┌────────────────────┬──────────────────────────────────────────────────┐ │ name │ │ ├────────────────────┼──────────────────────────────────────────────────┤ │ description │ controls the shadow cast by the card to │ │ │ generate a stacking effects │ │ │ for example a modal floating over elements may │ │ │ have a high elevation │ └────────────────────┴──────────────────────────────────────────────────┘ ┌────────────────────┬──────────────────────────────────────────────────┐ │ name │ │ ├────────────────────┼──────────────────────────────────────────────────┤ │ description │ a wrapper resembling a physical card grouping │ │ │ elements and improve readability │ └────────────────────┴──────────────────────────────────────────────────┘ specifications bit version
| 0
|
87,567
| 10,927,660,342
|
IssuesEvent
|
2019-11-22 17:10:38
|
ParabolInc/action
|
https://api.github.com/repos/ParabolInc/action
|
closed
|
"Observer" or Occasional User Status
|
design discussion enhancement user request
|
## Issue - Enhancement
When a user is not on a team but needs or wants to see what that team is up to, our users/team leads are requesting a status differentiated from fully "Active".
> "When you’re selling to management you’re selling reporting." -a smart guy we know
### Acceptance Criteria (optional)
Users can:
- Observers are not charged. There is no limit to how many Observers can be added to a Team.
- Observers have full visual access to the Dashboard.
- Observers ARE included in the upper right lineup of the static Team Dashboard view; to the furthest right position/s, with a darker/darkest dot
- Observers are NOT included in the upper right lineup/any phase of a Meeting
- Observers ARE included in the Summary roundup & email
- **Estimated effort:** 13 points ([see CONTRIBUTING.md](https://github.com/ParabolInc/action/blob/master/CONTRIBUTING.md#points-and-sizes))
|
1.0
|
"Observer" or Occasional User Status - ## Issue - Enhancement
When a user is not on a team but needs or wants to see what that team is up to, our users/team leads are requesting a status differentiated from fully "Active".
> "When you’re selling to management you’re selling reporting." -a smart guy we know
### Acceptance Criteria (optional)
Users can:
- Observers are not charged. There is no limit to how many Observers can be added to a Team.
- Observers have full visual access to the Dashboard.
- Observers ARE included in the upper right lineup of the static Team Dashboard view; to the furthest right position/s, with a darker/darkest dot
- Observers are NOT included in the upper right lineup/any phase of a Meeting
- Observers ARE included in the Summary roundup & email
- **Estimated effort:** 13 points ([see CONTRIBUTING.md](https://github.com/ParabolInc/action/blob/master/CONTRIBUTING.md#points-and-sizes))
|
non_process
|
observer or occasional user status issue enhancement when a user is not on a team but needs or likes to see what that team is up to our users team leads are requesting a status differentiated from fully active when you’re selling to management you’re selling reporting a smart guy we know acceptance criteria optional users can observers are not charged there is no limit to how many observers can be added to a team observers have full visual access to the dashboard observers are included in the upper right lineup of the static team dashboard view to the furthest right position s with a darker darkest dot observers are not included in the upper right lineup any phase of a meeting observers are included in the summary roundup email estimated effort points
| 0
|
249,437
| 7,961,811,612
|
IssuesEvent
|
2018-07-13 12:15:10
|
FIDUCEO/FCDR_HIRS
|
https://api.github.com/repos/FIDUCEO/FCDR_HIRS
|
opened
|
Add more debug radiances to debug FCDR
|
Priority: Medium
|
Some useful radiances to add to the debug FCDR:
- [ ] radiance with zero harmonisation
- [ ] radiance with zero harmonisation and zero self-emission
|
1.0
|
Add more debug radiances to debug FCDR - Some useful radiances to add to the debug FCDR:
- [ ] radiance with zero harmonisation
- [ ] radiance with zero harmonisation and zero self-emission
|
non_process
|
add more debug radiances to debug fcdr some useful radiances to add to debug fdcr radiance with zero harmonisation radiance with zero harmonisation and zero self emission
| 0
|
10,117
| 7,919,531,328
|
IssuesEvent
|
2018-07-04 17:22:25
|
StreisandEffect/streisand
|
https://api.github.com/repos/StreisandEffect/streisand
|
closed
|
TLS Ciphersuite/Protocol Enhancement
|
area/gateway area/openconnect area/openvpn area/stunnel area/tls kind/security
|
This is an omnibus issue for tracking a project-wide cleanup/enhancement of TLS ciphersuite and protocol settings.
### Expected behavior:
Streisand components should emphasize best-practices for TLS ciphersuite deployment. Supported protocol versions should be kept to a minimum and favour modern and secure options.
### Actual Behavior:
In several places Streisand's ciphersuite/protocol choices overly favour backwards compatibility and include insecure or dated options (e.g. TLS 1.0, 3DES...)
### Services to Address
* [x] - OpenConnect
* [x] - Stunnel`*`
* [x] - OpenVPN
* [x] - NGinx Gateway
`*` Note: Since Stunnel is strictly used as an OpenVPN obfuscation layer the ciphersuite/protocol choices here are less important than in other services.
### OpenConnect:
* TODO: Verify if Streisand is using good settings here. Evaluate the client compatibility impact of choosing more modern options.
### OpenVPN
* TODO: Verify if Streisand is using good settings here. Evaluate the client compatibility impact of choosing more modern options.
### NGinx Gateway
* TODO: Verify if Streisand is using good settings here. Evaluate the client compatibility impact of choosing more modern options.
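For reference, the kind of Nginx TLS settings such a cleanup would likely converge on (a sketch only, not Streisand's actual configuration; the cipher list is illustrative and assumes a reasonably recent OpenSSL):
```
# Illustrative settings, not Streisand's shipped configuration.
ssl_protocols TLSv1.2;  # add TLSv1.3 where OpenSSL 1.1.1+ is available
ssl_prefer_server_ciphers on;
# Forward-secret AEAD suites only; no 3DES, RC4, or legacy CBC options.
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';
```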
|
True
|
TLS Ciphersuite/Protocol Enhancement - This is an omnibus issue for tracking a project-wide cleanup/enhancement of TLS ciphersuite and protocol settings.
### Expected behavior:
Streisand components should emphasize best-practices for TLS ciphersuite deployment. Supported protocol versions should be kept to a minimum and favour modern and secure options.
### Actual Behavior:
In several places Streisand's ciphersuite/protocol choices overly favour backwards compatibility and include insecure or dated options (e.g. TLS 1.0, 3DES...)
### Services to Address
* [x] - OpenConnect
* [x] - Stunnel`*`
* [x] - OpenVPN
* [x] - NGinx Gateway
`*` Note: Since Stunnel is strictly used as an OpenVPN obfuscation layer the ciphersuite/protocol choices here are less important than in other services.
### OpenConnect:
* TODO: Verify if Streisand is using good settings here. Evaluate the client compatibility impact of choosing more modern options.
### OpenVPN
* TODO: Verify if Streisand is using good settings here. Evaluate the client compatibility impact of choosing more modern options.
### NGinx Gateway
* TODO: Verify if Streisand is using good settings here. Evaluate the client compatibility impact of choosing more modern options.
|
non_process
|
tls ciphersuite protocol enhancement this is an omnibus issue for tracking a project wide cleanup enhancement of tls ciphersuite and protocol settings expected behavior streisand components should emphasize best practices for tls ciphersuite deployment supported protocol versions should be kept to a minimum and favour modern and secure options actual behavior in several places streisand s ciphersuite protocol choices overly favour backwards compatibility and include insecure or dated options e g tls services to address openconnect stunnel openvpn nginx gateway note since stunnel is strictly used as an openvpn obfuscation layer the ciphersuite protocol choices here are less important than in other services openconnect todo verify if streisand is using good settings here evaluate the client compatibility impact of choosing more modern options openvpn todo verify if streisand is using good settings here evaluate the client compatibility impact of choosing more modern options nginx gateway todo verify if streisand is using good settings here evaluate the client compatibility impact of choosing more modern options
| 0
|
155,873
| 24,533,270,849
|
IssuesEvent
|
2022-10-11 18:20:05
|
beamer-bridge/beamer
|
https://api.github.com/repos/beamer-bridge/beamer
|
opened
|
Frontend: Clickable logo on the app - enabling direction switch
|
design 🐩
|
## Description of Deliverables(s)
As per #339 there is still a remaining feature to implement. This has also been addressed by the community recently: 'Our logo is like a toggle button, can we switch between 2 chains when we click on this logo? ' (see Discord - feedback channel)
## Deadline
This is not super urgent, but it would be good to prioritize it in this or the next sprint.
|
1.0
|
Frontend: Clickable logo on the app - enabling direction switch - ## Description of Deliverables(s)
As per #339 there is still a remaining feature to implement. This has also been addressed by the community recently: 'Our logo is like a toggle button, can we switch between 2 chains when we click on this logo? ' (see Discord - feedback channel)
## Deadline
This is not super urgent, but it would be good to prioritize it in this or the next sprint.
|
non_process
|
frontend clickable logo on the app enabling direction switch description of deliverables s as per there is still a remaining feature to implement this has also been addressed by the community recently our logo is like a toggle button can we switch between chains when we click on this logo see discord feedback channel deadline this is not super urgent but it would be good to prio in this or the next sprint
| 0
|
14,465
| 17,570,025,840
|
IssuesEvent
|
2021-08-14 13:45:05
|
oasis-tcs/csaf
|
https://api.github.com/repos/oasis-tcs/csaf
|
opened
|
Redundant left-over tagged conformance statement from CVRF 1.2 in section 2
|
csaf 2.0 editorial oasis_tc_process CSDPR01_feedback
|
# Situation
reference: https://docs.oasis-open.org/csaf/csaf/v2.0/csd01/csaf-v2.0-csd01.html#2-design-considerations
In section 2, the first paragraph following the only informative comment of that section consists of a single sentence with two weaknesses (left over from the CVRF 1.2 text).
1. The former typographical convention is still in use (enclosing guillemets and square brackets with a tag).
2. The cloudy wording does not sit well with the subsequent MUST from the normative RFC keyword vocabulary - esp. as the facts stated are already enforced by the JSON schema.
# Proposal
Preferred: Remove the sentence for the next iteration (CS01 or CSD02).
In case others see benefit in the sentence another possible remediation could be to at least lowercase the must and remove the meta tokens.
|
1.0
|
Redundant left-over tagged conformance statement from CVRF 1.2 in section 2 - # Situation
reference: https://docs.oasis-open.org/csaf/csaf/v2.0/csd01/csaf-v2.0-csd01.html#2-design-considerations
In section 2, the first paragraph following the only informative comment of that section consists of a single sentence with two weaknesses (left over from the CVRF 1.2 text).
1. The former typographical convention is still in use (enclosing guillemets and square brackets with a tag).
2. The cloudy wording does not sit well with the subsequent MUST from the normative RFC keyword vocabulary - esp. as the facts stated are already enforced by the JSON schema.
# Proposal
Preferred: Remove the sentence for the next iteration (CS01 or CSD02).
In case others see benefit in the sentence another possible remediation could be to at least lowercase the must and remove the meta tokens.
|
process
|
redundant left over tagged conformance statement from cvrf in section situation reference in section first paragraph following the only informative comment of that section the only sentence has two weaknesses left over from cvrf w text the former typographical convention is still in use enclosing guillemets and square brackets with a tag the cloudy wording does not go well with the subsequent must from the normative words rfc vocabulary esp as the facts stated are enforced by the json schema proposal preferred remove the sentence for the next iteration or in case others see benefit in the sentence another possible remediation could be to at least lowercase the must and remove the meta tokens
| 1
|
16,638
| 21,707,262,136
|
IssuesEvent
|
2022-05-10 10:46:02
|
sjmog/smartflix
|
https://api.github.com/repos/sjmog/smartflix
|
opened
|
Going Async
|
04-background-processing Rails/Background processing Rails/Workers
|
In the previous ticket, we enriched shows with external API data before rendering that enriched data to the view.
API requests can take a while to complete. At the moment, we're calling the API **synchronously**: we have to wait for the request to complete before continuing with rendering the view. This is going to get painful when our application gets popular, as we'll be potentially making hundreds of long-running API requests each minute.
In this ticket, you'll set up a **worker** that fetches data from the API **asynchronously**. You'll use **Sidekiq** to do this.
> Workers are especially useful for performing work outside of the standard Rails request/response lifecycle.
## To complete this challenge, you will have to:
- [ ] Set up [Sidekiq](https://github.com/mperham/sidekiq). (See the Tips below for more.)
- [ ] Create a Sidekiq background worker that calls the OMDB API and updates a show.
- [ ] Update your controller action so it returns all available movie data. If any of the fields are missing, the action should respond with a proper status and body, and enqueue a worker.
- [ ] Deploy the application to production. You will need to provision Sidekiq and Redis too!
## Tips
- Sidekiq is a background job framework that runs on Redis. You might want to read more about each of these common Rails tools [here](https://shashwat-creator.medium.com/all-you-need-to-know-about-sidekiq-a4b770a71f8f).
- Setting up Sidekiq and Redis can involve some challenging config. [Digital Ocean has a good tutorial guide](https://www.digitalocean.com/community/tutorials/how-to-add-sidekiq-and-redis-to-a-ruby-on-rails-application) if you're getting stuck.
|
2.0
|
Going Async - In the previous ticket, we enriched shows with external API data before rendering that enriched data to the view.
API requests can take a while to complete. At the moment, we're calling the API **synchronously**: we have to wait for the request to complete before continuing with rendering the view. This is going to get painful when our application gets popular, as we'll be potentially making hundreds of long-running API requests each minute.
In this ticket, you'll set up a **worker** that fetches data from the API **asynchronously**. You'll use **Sidekiq** to do this.
> Workers are especially useful for performing work outside of the standard Rails request/response lifecycle.
## To complete this challenge, you will have to:
- [ ] Set up [Sidekiq](https://github.com/mperham/sidekiq). (See the Tips below for more.)
- [ ] Create a Sidekiq background worker that calls the OMDB API and updates a show.
- [ ] Update your controller action so it returns all available movie data. If any of the fields are missing, the action should respond with a proper status and body, and enqueue a worker.
- [ ] Deploy the application to production. You will need to provision Sidekiq and Redis too!
## Tips
- Sidekiq is a background job framework that runs on Redis. You might want to read more about each of these common Rails tools [here](https://shashwat-creator.medium.com/all-you-need-to-know-about-sidekiq-a4b770a71f8f).
- Setting up Sidekiq and Redis can involve some challenging config. [Digital Ocean has a good tutorial guide](https://www.digitalocean.com/community/tutorials/how-to-add-sidekiq-and-redis-to-a-ruby-on-rails-application) if you're getting stuck.
|
process
|
going async in the previous ticket we enriched shows with external api data before rendering that enriched data to the view api requests can take a while to complete at the moment we re calling the api synchronously we have to wait for the request to complete before continuing with rendering the view this is going to get painful when our application gets popular as we ll be potentially making hundreds of long running api requests each minute in this ticket you ll set up an worker that fetches data from the api asynchronously you ll use sidekiq to do this workers are especially useful for performing work outside of the standard rails request response lifecycle to complete this challenge you will have to set up see the tips below for more create a sidekiq background worker that calls the omdb api and updates a show update your controller action so it returns all available movie data if anything of the fields are missing the action should respond with proper status and body and enqueue a worker deploy the application to production you will need to provision sidekiq and redis too tips sidekiq is a background job framework that runs on redis you might want to read more about each of these common rails tools setting up sidekiq and redis can involve some challenging config if you re getting stuck
| 1
|