Each record in the dump below carries the following columns (dtypes and value summaries as reported by the dataset preview):

| Column | Dtype | Observed values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class ("IssuesEvent") |
| created_at | string | length 19 |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 classes |
| text_combine | string | length 96 to 211k |
| label | string | 2 classes ("process" / "non_process") |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 or 1 |

In the records that follow, `text_combine` is the title and body concatenated verbatim, and `text` is a lowercased, normalized copy of the same content; both duplicate fields already shown, so they are summarized in place rather than repeated.
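For orientation, a minimal sketch of loading a frame with this schema and checking that `label` and `binary_label` agree, as they do in every record below. The file name `issues.csv` is hypothetical; the dump does not state where the data lives.

```python
import pandas as pd

# Hypothetical path; the dump does not state the source file.
df = pd.read_csv("issues.csv")

# Columns described by the schema above.
cols = ["id", "type", "created_at", "repo", "repo_url", "action",
        "title", "labels", "body", "index", "text_combine",
        "label", "text", "binary_label"]
df = df[cols]

# "process" rows carry binary_label == 1, "non_process" rows carry 0,
# matching the records shown below.
assert (df["label"].eq("process") == df["binary_label"].eq(1)).all()

print(df["binary_label"].value_counts())
```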
Row 5,896 | id 8,357,355,682 | IssuesEvent | 2018-10-02 21:18:06
repo: ValveSoftware/Proton (https://api.github.com/repos/ValveSoftware/Proton)
action: closed
title: Nier Automata requires Python 2.7
labels: Game compatibility
body:
# Compatibility Report
- Name of the game with compatibility issues: Nier Automata
- Steam AppID of the game: 524220
## System Information
- GPU: GTX 1060
- Driver/LLVM version: nvidia 396.54
- Kernel version: 4.15.0-33-generic
- Link to full system information report as [Gist](https://gist.github.com/Kedstar99/f2c286abf5dc41e55dea81da90f0df03):
- Proton version: 3.7.6 Beta
## I confirm:
- [x] that I haven't found an existing compatibility report for this game.
- [x] that I have checked whether there are updates for my system available.
<!-- Please add `PROTON_LOG=1 %command%` to the game's launch options and drag
and drop the generated `$HOME/steam-$APPID.log` into this issue report -->
## Symptoms <!-- What's the problem? -->
Without Python 2.7 the game fails to start at all.
## Reproduction
On Ubuntu 18.04, it is possible to install Steam without Python 2.7 or any Python packages. For example, the multiverse version of Steam on Ubuntu 18.04 doesn't have python2.7 as a requirement and runs perfectly fine. For most games this setup seems OK: I can start AoE II, Crysis Maximum Warhead, and multiple other games just fine with only Python 3.
Nier Automata, however, seems to require the python2.7 and python2.7-minimal packages. As there are no .py or .pyc files in Nier Automata, I am guessing this is a requirement of Proton/DXVK or something else.
In any case, it may be useful to either fix this or issue a disclaimer, as this is supposed to be a fully supported game and this requirement isn't listed for it.
<!--
1. You can find the Steam AppID in the URL of the shop page of the game.
e.g. for `The Witcher 3: Wild Hunt` the AppID is `292030`.
2. You can find your driver and Linux version, as well as your graphics
processor's name in the system information report of Steam.
3. You can retrieve a full system information report by clicking
`Help` > `System Information` in the Steam client on your machine.
4. Please copy it to your clipboard by pressing `Ctrl+A` and then `Ctrl+C`.
Then paste it in a [Gist](https://gist.github.com/) and post the link in
this issue.
5. Please search for open issues and pull requests by the name of the game and
find out whether they are relevant and should be referenced above.
-->
index: True
text_combine: title + body concatenated (verbatim duplicate of the fields above, omitted)
label: non_process
text: lowercased, normalized copy of title + body (duplicate, omitted; a sketch of this normalization follows this record)
binary_label: 0
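Following up on the `text` placeholder in the record above: a minimal sketch of the kind of normalization that could produce the `text` column from `title` and `body`. This is an assumption inferred from the visible output; the dumped `text` even keeps some symbols such as 👍, so the real pipeline is evidently more permissive than this.

```python
import re

def normalize(title: str, body: str) -> str:
    """Lowercase, strip URLs, digits and punctuation, collapse whitespace."""
    text = f"{title} {body}".lower()
    text = re.sub(r"https?://\S+", " ", text)  # drop URLs
    text = re.sub(r"[^a-z\s]", " ", text)      # drop digits and punctuation
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace runs

# e.g. normalize("Error when download file", "I got this error ...")
# -> "error when download file i got this error"
```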
Row 576,751 | id 17,093,729,842 | IssuesEvent | 2021-07-08 21:23:47
repo: kubernetes/steering (https://api.github.com/repos/kubernetes/steering)
action: closed
title: Clarify and improve how we identify and close staffing gaps across the project
labels: committee/steering, lifecycle/frozen, priority/important-longterm
body:
This supersedes the "staffing gaps" part of https://github.com/kubernetes/steering/issues/52
We don't have a systematic way to identify, advertise, and fill areas that need to be staffed across the project, except in a few areas, such as the release team.
There are at least 3 areas:
1. Work within SIGs and committees. This is why we created the role board. But most SIGs haven't defined roles. Developing some suggested roles may help.
2. Identifying which SIGs most need help. We've done this sporadically, with SIG UI, SIG Docs, and a few other areas, but don't have a systematic way of surfacing these needs.
3. Areas that fall outside the SIGs. We're trying to minimize such areas, but some still exist. The K8s Infra WG is a recent example of an effort to move something forward that doesn't cleanly fall under any particular SIG.
@michelleN suggested a meeting with SIG Chairs/Leads to help identify gaps.
index: 1.0
text_combine: title + body concatenated (verbatim duplicate of the fields above, omitted)
label: non_process
text: lowercased, normalized copy of title + body (duplicate, omitted)
binary_label: 0
Row 10,376 | id 13,193,051,342 | IssuesEvent | 2020-08-13 14:40:19
repo: esmero/strawberryfield (https://api.github.com/repos/esmero/strawberryfield)
action: opened
title: Smart logic JSON Key providers to index cool stuff into Solr
labels: JSON Postprocessors Property Keys Providers Symfony Services enhancement
body:
# What is it that I'm proposing?
After my incursion this week into deploying a test IR with a lot of data, files, different media, and IR needs (it went well, so nice, learned a lot), I decided it's time to bring some extra logic into our JSON Key providers.
FYI: if you don't know what a JSON Key provider is, that is OK. It's a plugin system I wrote that dynamically exposes internal data, keys, and values from our SBF JSON to Drupal in a way that is native to Drupal. This allows Drupal to index into Solr, or to expose to any other code (like Tokens), all our deep, complex, evolving, and changing JSON richness. And we have a few cool strategies, from simply "take this JSON key and put the value visible under this property" to querying the JSON using JMESPath and joining many values from different places. OK, enough background (also a ping to @aliomeria here, new on the block, time to subscribe to this repo).
Things I want:
1. A parser/logic processor. Basically one that allows data to be extracted via logic and returned as an arbitrary key. Why?
Let's say I have LoD People in my metadata. A lot of them. Some have different roles: some are students, others are Faculty, others are from a different place/institution. I want to have different facets so people can search/filter by Professors, or by students only. With an extra processor (a Twig template again, but stricter and shorter; I can even limit the size of the template) I can make some decisions, and even do things like "oh no, no student mentioned in the works, let's add an extra value that says 'No student was involved nor harmed'" to the facet: data that was never there, we just expose it to discovery. The Archipelago dream made true. This code is actually simple.
2. A chameleon processor. This allows me to take one REAL Drupal field class (let's say it's the GEO one) and shove data from our JSON into it programmatically and, wait for it, also shove the "complex data" type into the code programmatically. This lets us make Drupal think we have data coming from one of those fields, and makes community-contributed code work with our chameleons. It is actually simpler than you think, since instead of making a JSON Key processor, I can create a Copyfield processor at the entity level. An issue I see sometimes in Drupal 8/9 is that most of the code people write is totally unaware of computed fields. I had to fix a few quite popular modules because everything is written only for the most common use case. Bad, bad coding.
See also #6 for my 3: entity casting/reference fields. We use open semantics here; we want every memberof, ispartof, etc., if they have either an ID or a UUID, to be cast as Drupal entities. That way we can create deeper hierarchies and index the full paths into Solr.
@giancarlobi hope you're around and all is well. Any ideas on this?
index: 1.0
text_combine: title + body concatenated (verbatim duplicate of the fields above, omitted)
label: process
text: lowercased, normalized copy of title + body (duplicate, omitted)
binary_label: 1
Row 10,745 | id 13,540,462,421 | IssuesEvent | 2020-09-16 14:41:39
repo: prisma/language-tools (https://api.github.com/repos/prisma/language-tools)
action: closed
title: Slack notifications for failing workflows
labels: kind/improvement, process/candidate, topic: automation
body:
Currently only a failure during the build or publish workflow sends a Slack notification. It would be better to also have those notifications sent for these workflows:
- `Bump versions for extension only (on push to master and patch branch)`
- `Bump versions for extension only (promotes patch branch to stable release)`
- `Check for Prisma CLI Update`
- `Bump versions`
- `Unit tests for LSP and publish`
- `Bump LSP in VSCode extension`
- `Integration tests in VSCode folder with published LSP`
index: 1.0
text_combine: title + body concatenated (verbatim duplicate of the fields above, omitted)
label: process
text: lowercased, normalized copy of title + body (duplicate, omitted)
binary_label: 1
Row 9,592 | id 12,542,059,671 | IssuesEvent | 2020-06-05 13:26:52
repo: ZenHubHQ/george (https://api.github.com/repos/ZenHubHQ/george)
action: closed
title: Launch two-and-a-half-minute-tutorial content with our sales and success team
labels: Internal Process
body:
**As Measured By:**
- [ ] Launch the two-and-a-half-minute-tutorial content with our sales and success team
index: 1.0
text_combine: title + body concatenated (verbatim duplicate of the fields above, omitted)
label: process
text: lowercased, normalized copy of title + body (duplicate, omitted)
binary_label: 1
Row 18,970 | id 24,943,575,325 | IssuesEvent | 2022-10-31 21:11:45
repo: solop-develop/frontend-core (https://api.github.com/repos/solop-develop/frontend-core)
action: opened
title: [Bug Report]
labels: bug, (PRC) Processes, (RPT) Reports, (WIN) Windows
body:
<!--
Note: In order to better solve your problem, please refer to the template to provide complete information, accurately describe the problem, and the incomplete information issue will be closed.
-->
## Bug report
#### Steps to reproduce
1. Log in with the `System` role.
2. Open the `Window, Tab and Field` window.
3. Display the associated processes in the parent tab.
#### Screenshot or GIF
![Duplicated](
#### Expected behavior
The associated processes should not be duplicated.
#### Additional context
The process that is duplicated in this window is associated both with a button-type field and with the processes associated with the table.
index: 1.0
text_combine: title + body concatenated (verbatim duplicate of the fields above, omitted)
label: process
text: lowercased, normalized copy of title + body (duplicate, omitted)
binary_label: 1
Row 415,287 | id 12,127,385,603 | IssuesEvent | 2020-04-22 18:38:08
repo: eecs-autograder/autograder-server (https://api.github.com/repos/eecs-autograder/autograder-server)
action: closed
title: Remove `_stdout_filename` and `_stderr_filename` from `AGCommandResult`
labels: priority-1-normal, refactoring, size-small
body:
Compute them dynamically instead.
index: 1.0
text_combine: title + body concatenated (verbatim duplicate of the fields above, omitted)
label: non_process
text: lowercased, normalized copy of title + body (duplicate, omitted)
binary_label: 0
Row 20,493 | id 27,146,979,604 | IssuesEvent | 2023-02-16 20:49:09
repo: MicrosoftDocs/azure-devops-docs (https://api.github.com/repos/MicrosoftDocs/azure-devops-docs)
action: closed
title: Missing information about output variable scope / example code is too small to showcase
labels: devops/prod, doc-bug, Pri1, devops-cicd-process/tech
body:
Section "Use outputs in a different stage" shows an example of just two stages which implicitly run in order.
Output variables from another stage are only available in the direct successor of a stage.
Without further explanation and testing the reader probably assumes that output variables are available for ALL following stages, which is not true.
You can however use output variables of a stage if the dependsOn: value is actively set like so (I am using the propagating dependsOn values to still force a linear run, as would have happened without using any dependsOn):
```yaml
- stage: stage1   # sets outputVar
- stage: stage2   # uses stage1 outputVar
  dependsOn:
    - stage1      # as the direct successor, this could be omitted?!
- stage: stage3   # uses stage1 outputVar
  dependsOn:
    - stage1      # must be set to gain outputVar access
    - stage2      # used for linear order
- stage: stage4   # uses stage1 outputVar
  dependsOn:
    - stage1      # must be set to gain outputVar access
    - stage3      # used for linear order
```
I would add a remark in the text to explain this behavior:
"Output variables of a different stage are only available in the direct successor stage. If multiple stages should consume the same output variable, you must declare a dependency (`dependsOn`) on the stage that sets the output variable."
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/variables.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
index: 1.0
text_combine: title + body concatenated (verbatim duplicate of the fields above, omitted)
label: process
text: lowercased, normalized copy of title + body (duplicate, omitted)
binary_label: 1
Row 134,148 | id 18,428,977,294 | IssuesEvent | 2021-10-14 04:19:02
repo: samq-ghdemo/JS-DEMO (https://api.github.com/repos/samq-ghdemo/JS-DEMO)
action: reopened
title: CVE-2015-8858 (High) detected in uglify-js-2.4.24.tgz
labels: security vulnerability
body:
## CVE-2015-8858 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>uglify-js-2.4.24.tgz</b></p></summary>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.4.24.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.4.24.tgz</a></p>
<p>Path to dependency file: JS-DEMO/package.json</p>
<p>Path to vulnerable library: JS-DEMO/node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- swig-1.4.2.tgz (Root Library)
- :x: **uglify-js-2.4.24.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samq-ghdemo/JS-DEMO/commit/42ecb158a0943b7f59ce3920455bc05541fa235a">42ecb158a0943b7f59ce3920455bc05541fa235a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The uglify-js package before 2.6.0 for Node.js allows attackers to cause a denial of service (CPU consumption) via crafted input in a parse call, aka a "regular expression denial of service (ReDoS)."
<p>Publish Date: 2017-01-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8858>CVE-2015-8858</a></p>
</p>
</details>
<p></p>
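As an aside, a minimal, generic Python illustration of the ReDoS failure mode described above: a regex with nested quantifiers backtracks exponentially on crafted non-matching input. This is a textbook pattern, not the actual uglify-js expression.

```python
import re
import time

pattern = re.compile(r"^(a+)+$")  # nested quantifiers: catastrophic backtracking
payload = "a" * 28 + "!"          # crafted input that can never match

start = time.perf_counter()
pattern.match(payload)            # CPU time grows exponentially with input length
print(f"match attempt took {time.perf_counter() - start:.2f}s")
```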
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p>
<p>Release Date: 2018-12-15</p>
<p>Fix Resolution: v2.6.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"uglify-js","packageVersion":"2.4.24","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"swig:1.4.2;uglify-js:2.4.24","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v2.6.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2015-8858","vulnerabilityDetails":"The uglify-js package before 2.6.0 for Node.js allows attackers to cause a denial of service (CPU consumption) via crafted input in a parse call, aka a \"regular expression denial of service (ReDoS).\"","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8858","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
index: True
text_combine: title + body concatenated (verbatim duplicate of the fields above, omitted)
label: non_process
text: lowercased, normalized copy of title + body (duplicate, omitted)
binary_label: 0
Row 24,065 | id 12,018,124,593 | IssuesEvent | 2020-04-10 20:02:47
repo: terraform-providers/terraform-provider-aws (https://api.github.com/repos/terraform-providers/terraform-provider-aws)
action: closed
title: IAM Role ARN not being parsed correctly when passed to aws_dms_endpoint
labels: needs-triage, service/databasemigrationservice, service/iam
body:
<!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
`0.12.21`
`aws-2.49`
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
`aws_dms_endpoint`
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
data "aws_iam_policy_document" "dms_assume_role_policy_document" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      identifiers = ["dms.amazonaws.com"]
      type        = "Service"
    }
  }
}

resource "aws_iam_role" "dms_role" {
  assume_role_policy = data.aws_iam_policy_document.dms_assume_role_policy_document.json
  name               = "dms_role"
}

data "aws_iam_policy_document" "dms_s3_access_document" {
  statement {
    actions = [
      "s3:PutObject",
      "s3:DeleteObject",
      "s3:PutObjectTagging",
    ]
    resources = [
      "arn:aws:s3:::my-dms-target/*",
    ]
  }
  statement {
    actions = [
      "s3:ListBucket",
    ]
    resources = [
      "arn:aws:s3:::my-dms-target",
    ]
  }
}

resource "aws_iam_policy" "dms_s3_access" {
  name   = "dms_s3_access"
  policy = data.aws_iam_policy_document.dms_s3_access_document.json
}

resource "aws_iam_role_policy_attachment" "dms_s3_access_attachment" {
  role       = aws_iam_role.dms_role.id
  policy_arn = aws_iam_policy.dms_s3_access.arn
}

resource "aws_dms_endpoint" "my_dms_endpoint" {
  endpoint_id                 = "dms-s3-target"
  endpoint_type               = "target"
  engine_name                 = "s3"
  extra_connection_attributes = "dataFormat=parquet;bucketName=${aws_s3_bucket.dms_target.bucket};bucketFolder=my_data"
  service_access_role         = aws_iam_role.dms_role.arn
}
```
### Output
The arn that is being specified resolves to `arn:aws:iam::012345678910:role/dms_role` in the planning step.
```
Error: InvalidParameterValueException: Invalid role arn, contains ARN without the required six components
status code: 400, request id: <...>
dms.tf line 53, in resource "aws_dms_endpoint" "my_dms_endpoint":
53: resource "aws_dms_endpoint" "my_dms_endpoint"
```
### Expected Behavior
DMS endpoint should be created using the specified role and bucket.
### Actual Behavior
Endpoint is not created with the above exception being thrown.
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
Running this using terraform cloud.
I would otherwise specify the role in `s3_settings`, but due to the issue referenced below, I'm unable to specify the file output format in `extra_connection_attributes` and specify the role in `s3_settings`.
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* #8009
* #8000
index: 3.0
text_combine: title + body concatenated (verbatim duplicate of the fields above, omitted)
label: non_process
text: lowercased, normalized copy of title + body (duplicate, omitted)
binary_label: 0
Row 3,513 | id 6,561,445,751 | IssuesEvent | 2017-09-07 13:22:07
repo: zero-os/0-stor (https://api.github.com/repos/zero-os/0-stor)
action: closed
title: Error when download file
labels: process_duplicate, type_bug
body:
I got this error when I tried to download a file:
**"ERROR! download file failed: invalid number of kvs returned: 0\n ".**
You can reproduce it by doing:
```bash
nosetests-3.4 -vs test_suite/test_cases/basic_tests/test01_upload_download.py:UploadDownload.test001_upload_file --tc-file test_suite/config.ini
```
**Config file**
```yaml
iyo_app_id: rAocsUQLJ_Rm6ktd_M1cjLoH97YW
iyo_app_secret: [secure]
meta_shards:
  - http://10.147.17.87:2379
  - http://10.147.17.144:2379
  - http://10.147.17.171:2379
  - http://10.147.17.22:2379
namespace: test
organization: test_zerostor
pipes:
  - config:
      chunkSize: 1024000
    name: pipe0
    type: chunker
  - config:
      type: snappy
    name: pipe1
    type: compress
  - config:
      privKey: ab345678901234567890123456789012
      type: aes_gcm
    name: pipe2
    type: encrypt
  - config:
      data: 2
      parity: 1
    name: pipe3
    type: distribution
protocol: rest
shards:
  - http://10.147.17.87:8080
  - http://10.147.17.144:8080
  - http://10.147.17.171:8080
  - http://10.147.17.22:8080
```
**Environment**
- Four servers from master 1.1.0-alpha-8
- Client from run_test_suite branch
index: 1.0
text_combine: title + body concatenated (verbatim duplicate of the fields above, omitted)
label: process
text: lowercased, normalized copy of title + body (duplicate, omitted)
binary_label: 1
Row 297,040 | id 25,594,951,223 | IssuesEvent | 2022-12-01 15:33:05
repo: openBackhaul/ApplicationPattern (https://api.github.com/repos/openBackhaul/ApplicationPattern)
action: opened
title: Service specific updates : /v1/list-ltps-and-fcs
labels: testsuite_to_be_changed
body:
Apart from the general changes, there are some specific updates for /v1/list-ltps-and-fcs.
In the following requests, update or double-check the paths, and add the new requests.
Acceptance:
- [ ] vs oam put
- [ ] update request: dummy tcp-s/local-address
- [ ] add request: dummy http-c/application-name
- [ ] update request: dummy tcp-c/remote-address
- [ ] update request: Expected /v1/list-ltps-and-fcs
- [ ] update request: initial tcp-s/local-address
- [ ] add request: initial http-c/application-name
- [ ] update request : initial tcp-c/remote-address
index: 1.0
text_combine: title + body concatenated (verbatim duplicate of the fields above, omitted)
label: non_process
text: lowercased, normalized copy of title + body (duplicate, omitted)
binary_label: 0
Row 14,513 | id 17,606,378,991 | IssuesEvent | 2021-08-17 17:38:22
repo: Geoxor/Sakuria (https://api.github.com/repos/Geoxor/Sakuria)
action: closed
title: Improve the GIF Encoder's performance
labels: bug image processors
body:
The encoder takes around 20-60 ms to `addFrame` after each frame render; this is really bottlenecking the render speed.

index: 1.0
text_combine: title + body concatenated (verbatim duplicate of the fields above, omitted)
label: process
text: lowercased, normalized copy of title + body (duplicate, omitted)
binary_label: 1
Row 42,473 | id 2,870,613,193 | IssuesEvent | 2015-06-07 10:19:00
repo: Guake/guake (https://api.github.com/repos/Guake/guake)
action: opened
title: a-la guake-indicator custom commands
labels: Priority:Low, Type: Feature Request
body:
Feature: allow users to define their own commands that they often use.
This can work like guake-indicator, but not with the same level of customization. guake-indicator's source is heavily written in C and reimplements a lot of stuff that is trivial in the Python world (JSON parsing, GTK event handling, ...).
Building guake-indicator's custom-command feature into Guake has the following advantages:
- compatible with the "hide on focus loss" mode
- automatically available to the user
I also tend to prefer having a custom XML/JSON file, so the user can back it up easily, versus an import/export feature that is too heavy to use.
|
1.0
|
a-la guake-indicator custom commands - Feature: allow users to define their own commands they often use
This can work like guake-indicator, but not with the same level of customization. The source code is heavily using C code, and reimplements a lot of trivial stuff in the python world (json parsing, GTK event handling,...)
Inserting the guake-indicator custom feature in guake has the following advantages:
- compatible with the "hide on focus loss" mode
- automatically available to the user
I also tend to prefer having a custom xml/json file, so the user can back it up easily, vs an import/export feature that is too heavy to use
|
non_process
|
a la guake indicator custom commands feature allow users to define their own commands they often use this can work like guake indicator but not with the same level of customization the source code is heavily using c code and reimplements a lot of trivial stuff in the python world json parsing gtk event handling inserting the guake indicator custom feature in guake has the following advantages compatible with the hide on focus loss mode automatically available to the user i also tend to prefer having a custom xml json file so the user can back it up easily vs an import export feature that is too heavy to use
| 0
|
185,627
| 21,800,425,224
|
IssuesEvent
|
2022-05-16 04:10:22
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Dedicated SQL pools default rights
|
triaged cxp product-question security/subsvc Pri2 synapse-analytics/svc
|
Hello,
Is this correct?
"Dedicated SQL pools: Synapse Administrators have full access to data in dedicated SQL pools, and the ability to grant access to other users."
It does not work for me even though I'm a Synapse administrator. Only the SQL AD Admin can do that (for a fresh new pool), right?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 4af41326-135a-0526-fe17-0e5638b090e7
* Version Independent ID: b2f3ee71-8486-1606-2fb4-ad75e7e38c50
* Content: [Azure Synapse workspace access control overview - Azure Synapse Analytics](https://docs.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-access-control-overview)
* Content Source: [articles/synapse-analytics/security/synapse-workspace-access-control-overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/synapse-analytics/security/synapse-workspace-access-control-overview.md)
* Service: **synapse-analytics**
* Sub-service: **security**
* GitHub Login: @meenalsri
* Microsoft Alias: **mesrivas**
|
True
|
Dedicated SQL pools default rights - Hello,
Is this correct?
"Dedicated SQL pools: Synapse Administrators have full access to data in dedicated SQL pools, and the ability to grant access to other users."
It does not work for me even though I'm a Synapse administrator. Only the SQL AD Admin can do that (for a fresh new pool), right?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 4af41326-135a-0526-fe17-0e5638b090e7
* Version Independent ID: b2f3ee71-8486-1606-2fb4-ad75e7e38c50
* Content: [Azure Synapse workspace access control overview - Azure Synapse Analytics](https://docs.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-access-control-overview)
* Content Source: [articles/synapse-analytics/security/synapse-workspace-access-control-overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/synapse-analytics/security/synapse-workspace-access-control-overview.md)
* Service: **synapse-analytics**
* Sub-service: **security**
* GitHub Login: @meenalsri
* Microsoft Alias: **mesrivas**
|
non_process
|
dedicated sql pools default rights hello is this correct dedicated sql pools synapse administrators have full access to data in dedicated sql pools and the ability to grant access to other users it does not work for me even though i m a synapse administrator only the sql ad admin can do that for a fresh new pool right document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service synapse analytics sub service security github login meenalsri microsoft alias mesrivas
| 0
|
327
| 2,773,718,888
|
IssuesEvent
|
2015-05-03 21:47:49
|
pyladies/pops
|
https://api.github.com/repos/pyladies/pops
|
opened
|
POP: Setting up site for chapter location
|
idea Process
|
Document process on how to setup a local chapter's website
|
1.0
|
POP: Setting up site for chapter location - Document process on how to setup a local chapter's website
|
process
|
pop setting up site for chapter location document process on how to setup a local chapter s website
| 1
|
10,607
| 13,434,270,776
|
IssuesEvent
|
2020-09-07 11:06:02
|
jgraley/inferno-cpp2v
|
https://api.github.com/repos/jgraley/inferno-cpp2v
|
closed
|
Give constraints some flags about their variables
|
Constraint Processing
|
Flags:
1. `FORCED` vs `FREE` (see #125)
2. `BY_LOCATION` vs `BY_VALUE` (see #121)
Notes:
Names of item 2 are chosen to put the `AndRuleEngine` in charge of policy re coupling comparisons. Alternative would be `RESIDUAL` vs `KEYED` or something.
The procedure, in `AndRuleEngine::Configure()`, is to create the constraint, so that it has an `Agent *`, call `GetVariables()` on it, use that list to work out a vector of flags and pass them in. But that means they can't be `const` in the constraint. A static `GetVariables(Agent *)` could be called before construction, but could not be virtual. So just accept that the flags can't be `const` or do a horrible constness cast on `this`. Or bung a "flags decider" lambda into the constructor.
|
1.0
|
Give constraints some flags about their variables - Flags:
1. `FORCED` vs `FREE` (see #125)
2. `BY_LOCATION` vs `BY_VALUE` (see #121)
Notes:
Names of item 2 are chosen to put the `AndRuleEngine` in charge of policy re coupling comparisons. Alternative would be `RESIDUAL` vs `KEYED` or something.
The procedure, in `AndRuleEngine::Configure()`, is to create the constraint, so that it has an `Agent *`, call `GetVariables()` on it, use that list to work out a vector of flags and pass them in. But that means they can't be `const` in the constraint. A static `GetVariables(Agent *)` could be called before construction, but could not be virtual. So just accept that the flags can't be `const` or do a horrible constness cast on `this`. Or bung a "flags decider" lambda into the constructor.
|
process
|
give constraints some flags about their variables flags forced vs free see by location vs by value see notes names of item are chosen to put the andruleengine in charge of policy re coupling comparisons alternative would be residual vs keyed or something the procedure in andruleengine configure is to create the constraint so that it has an agent call getvariables on it use that list to work out a vector of flags and pass them in but that means they can t be const in the constraint a static getvariables agent could be called before construction but could not be virtual so just accept that the flags can t be const or do a horrible constness cast on this or bung a flags decider lambda into the constructor
| 1
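The flag scheme described in the record above lends itself to a short illustration. Below is a minimal C++ sketch, not the project's actual code: the `Constraint` class and its `SetVariableFlags` method are invented stand-ins, and the sketch only shows why the flags end up non-`const` when they are passed in after construction.
```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical stand-in for the real inferno-cpp2v Agent type.
struct Agent {};

// Bit flags mirroring the two axes named in the record.
enum VariableFlags : uint8_t {
    FORCED      = 1 << 0,  // vs FREE when the bit is clear (see #125)
    BY_LOCATION = 1 << 1,  // vs BY_VALUE when the bit is clear (see #121)
};

class Constraint {
public:
    explicit Constraint(Agent *agent) : agent_(agent) {}

    // The engine asks for the variables first...
    std::vector<Agent *> GetVariables() const { return {agent_}; }

    // ...then passes the flags in afterwards, which is why they cannot
    // be const members (the compromise the record discusses).
    void SetVariableFlags(std::vector<uint8_t> flags) { flags_ = std::move(flags); }

private:
    Agent *agent_;
    std::vector<uint8_t> flags_;
};
```
The alternative the record mentions, a "flags decider" lambda handed to the constructor, would let the flags be `const` at the cost of a more awkward construction API.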
|
5,472
| 8,337,916,090
|
IssuesEvent
|
2018-09-28 12:49:27
|
sysown/proxysql
|
https://api.github.com/repos/sysown/proxysql
|
closed
|
Do not cache empty resultset
|
CACHE QUERY PROCESSOR
|
I have been informed of an interesting use case for which the query cache should be enabled only for non-empty resultsets.
This behavior can be controlled by adding a new global variable.
|
1.0
|
Do not cache empty resultset - I have been informed of an interesting use case for which the query cache should be enabled only for non-empty resultsets.
This behavior can be controlled by adding a new global variable.
|
process
|
do not cache empty resultset i have been informed of an interesting use case for which the query cache should be enabled only for non empty resultsets this behavior can be controlled by adding a new global variable
| 1
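A rough illustration of the record above: gating query-cache writes on the resultset's row count behind a new global is a one-line check. This is a hedged sketch, not proxysql's actual implementation; the variable name `query_cache_store_empty_result` and the `Resultset` type are invented for the example.
```cpp
#include <cstddef>

// Hypothetical global; the record only says "a new global variable",
// it does not name it.
static bool query_cache_store_empty_result = false;

struct Resultset {
    std::size_t rows;  // number of rows in the resultset
};

// Store a resultset in the query cache only when it is non-empty,
// unless the global explicitly allows caching empty ones.
static bool should_cache(const Resultset &rs) {
    return rs.rows > 0 || query_cache_store_empty_result;
}
```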
|
15,915
| 20,120,730,410
|
IssuesEvent
|
2022-02-08 01:49:48
|
hoprnet/hoprnet
|
https://api.github.com/repos/hoprnet/hoprnet
|
closed
|
Create trifecta email group
|
processes stale
|
<!--- Please DO NOT remove the automatically added 'new issue' label -->
<!--- Provide a general summary of the issue in the Title above -->
Improve meeting management.
- [ ] Create a trifecta email group where the trifecta members are part of it
- [ ] Update processes describing that trifecta members become part of that group
- [ ] All tech meetings should be able to edit meetings
|
1.0
|
Create trifecta email group - <!--- Please DO NOT remove the automatically added 'new issue' label -->
<!--- Provide a general summary of the issue in the Title above -->
Improve meeting management.
- [ ] Create a trifecta email group where the trifecta members are part of it
- [ ] Update processes describing that trifecta members become part of that group
- [ ] All tech meetings should be able to edit meetings
|
process
|
create trifecta email group improve meeting management create a trifecta email group where the trifecta members are part of it update processes describing that trifecta members become part of that group all tech meetings should be able to edit meetings
| 1
|
22,297
| 30,852,153,094
|
IssuesEvent
|
2023-08-02 17:34:24
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
[MLv2] [Bug] Can't order by a custom column after summarization
|
Querying/Notebook/Custom Column .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
If I create a query with an order-by on a custom column after a summarization, the query fails with an error like:
```js
Cannot determine the source table or query for Field clause [:field "CC" {:base-type :type/Integer}]
```
**Misc**
There're a couple of minor-ish things you can notice on the demo video down below:
* when picking a column for the order-by, "Cc" doesn't have an expected icon. Seems like [MLv2's `isNumber` / `isNumeric` utils](https://github.com/metabase/metabase/blob/52fd02984f55d73c6cda3bf2d5ae1703b120e5c3/frontend/src/metabase-lib/column_types.ts#L28) don't work correctly for aggregated custom columns?
* after summarization is added, "CC" becomes "Cc" on next query stages
### To Reproduce
1. New > Question > Raw Data > Sample Database > Orders
2. Add a custom column (formula `case([Discount] > 0, 1, 0)`), call it "CC"
3. Summarize count by "CC"
4. Add an order-by on "CC"
5. Click "Visualize"
**Expected:** a query completes and I can see sorted results
**Actual:** an error is shown
### Demo
https://github.com/metabase/metabase/assets/17258145/669d7ca5-2cde-4094-9af8-08f7243a8647
|
1.0
|
[MLv2] [Bug] Can't order by a custom column after summarization - If I create a query with an order-by on a custom column after a summarization, the query fails with an error like:
```js
Cannot determine the source table or query for Field clause [:field "CC" {:base-type :type/Integer}]
```
**Misc**
There're a couple of minor-ish things you can notice on the demo video down below:
* when picking a column for the order-by, "Cc" doesn't have an expected icon. Seems like [MLv2's `isNumber` / `isNumeric` utils](https://github.com/metabase/metabase/blob/52fd02984f55d73c6cda3bf2d5ae1703b120e5c3/frontend/src/metabase-lib/column_types.ts#L28) don't work correctly for aggregated custom columns?
* after summarization is added, "CC" becomes "Cc" on next query stages
### To Reproduce
1. New > Question > Raw Data > Sample Database > Orders
2. Add a custom column (formula `case([Discount] > 0, 1, 0)`), call it "CC"
3. Summarize count by "CC"
4. Add an order-by on "CC"
5. Click "Visualize"
**Expected:** a query completes and I can see sorted results
**Actual:** an error is shown
### Demo
https://github.com/metabase/metabase/assets/17258145/669d7ca5-2cde-4094-9af8-08f7243a8647
|
process
|
can t order by a custom column after summarization if i create a query with an order by on a custom column after a summarization the query fails with an error like js cannot determine the source table or query for field clause misc there re a couple of minor ish things you can notice on the demo video down below when picking a column for the order by cc doesn t have an expected icon seems like don t work correctly for aggregated custom columns after summarization is added cc becomes cc on next query stages to reproduce new question raw data sample database orders add a custom column formula case call it cc summarize count by cc add an order by on cc click visualize expected a query completes and i can see sorted results actual an error is shown demo
| 1
|
11,168
| 13,957,694,497
|
IssuesEvent
|
2020-10-24 08:11:16
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
PT: Harvesting
|
Geoportal Harvesting process PT - Portugal
|
Geoportal Team,
Can you please start harvesting the Portuguese catalogue?
Thank you :)
Marta Medeiros
|
1.0
|
PT: Harvesting - Geoportal Team,
Can you please start harvesting the Portuguese catalogue?
Thank you :)
Marta Medeiros
|
process
|
pt harvesting geoportal team can you please start harvesting the portuguese catalogue thank you marta medeiros
| 1
|
688,500
| 23,585,297,931
|
IssuesEvent
|
2022-08-23 11:05:28
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
onlyfans.com - see bug description
|
priority-important browser-fenix engine-gecko
|
<!-- @browser: Firefox Mobile 105.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 12; Mobile; rv:105.0) Gecko/105.0 Firefox/105.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://onlyfans.com/
**Browser / Version**: Firefox Mobile 105.0
**Operating System**: Android 12
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: blocked
**Steps to Reproduce**:
The screen seen blocked is not working.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/8/928863f3-f95a-40b1-ba2e-c070618d5c26.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220821091424</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/8/5b8a90d5-1fce-4c83-92c1-f51aa40015aa)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
onlyfans.com - see bug description - <!-- @browser: Firefox Mobile 105.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 12; Mobile; rv:105.0) Gecko/105.0 Firefox/105.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://onlyfans.com/
**Browser / Version**: Firefox Mobile 105.0
**Operating System**: Android 12
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: blocked
**Steps to Reproduce**:
The screen seen blocked is not working.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/8/928863f3-f95a-40b1-ba2e-c070618d5c26.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220821091424</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/8/5b8a90d5-1fce-4c83-92c1-f51aa40015aa)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
onlyfans com see bug description url browser version firefox mobile operating system android tested another browser yes chrome problem type something else description blocked steps to reproduce the screen seen blocked is not working view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
18,123
| 24,163,260,373
|
IssuesEvent
|
2022-09-22 13:18:08
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
Apple M1 / AArch64 .data section not recognised as such
|
Feature: Processor/ARM Feature: Processor/AARCH64 Status: Internal
|
### Discussed in https://github.com/NationalSecurityAgency/ghidra/discussions/3658
<div type='discussions-op-text'>
<sup>Originally posted by **p-Wave** November 20, 2021</sup>
Hi all,
I have the following "Hello World" code:
```
.global _start
.align 2
.text
_start: mov X0, 1
adrp X1, helloworld@PAGE
mov X2, 13
mov X16, 4
svc 0x80
mov X0, 0
mov X16, 1
svc 0x80
.data
helloworld: .ascii "Hello World!\n"
```
which I compile with
```
Apple clang version 13.0.0 (clang-1300.0.29.3)
Target: arm64-apple-darwin21.1.0
```
the CodeBrowser in Ghidra doesn't recognise the data section, but instead gives me the following interpretation (starting at 0x20) :
```
//
// __text
// __TEXT
// ram:00000000-ram:0000001f
//
**************************************************************
* *
* FUNCTION *
**************************************************************
undefined ltmp0()
undefined w0:1 <RETURN>
_start XREF[1]: Entry Point(*)
ltmp0
00000000 20 00 80 d2 mov x0,#0x1
00000004 01 00 00 90 adrp x1,0x0
00000008 a2 01 80 d2 mov x2,#0xd
0000000c 90 00 80 d2 mov x16,#0x4
00000010 01 10 00 d4 svc 0x80
00000014 00 00 80 d2 mov x0,#0x0
00000018 30 00 80 d2 mov x16,#0x1
0000001c 01 10 00 d4 svc 0x80
//
// __data
// __DATA
// ram:00000020-ram:0000002c
//
ltmp1
helloworld
00000020 48 65 6c 6c ldnp d8,d25,[x10, #-0x140]
00000024 6f 20 57 6f umlal2 v15.4S,v3.8H,v7.H[0x1]
00000028 72 ?? 72h r
00000029 6c ?? 6Ch l
0000002a 64 ?? 64h d
0000002b 21 ?? 21h !
0000002c 0a ?? 0Ah
```
What am I missing/ doing wrong?
Thank you very much!
</div>
|
2.0
|
Apple M1 / AArch64 .data section not recognised as such - ### Discussed in https://github.com/NationalSecurityAgency/ghidra/discussions/3658
<div type='discussions-op-text'>
<sup>Originally posted by **p-Wave** November 20, 2021</sup>
Hi all,
I have the following "Hello World" code:
```
.global _start
.align 2
.text
_start: mov X0, 1
adrp X1, helloworld@PAGE
mov X2, 13
mov X16, 4
svc 0x80
mov X0, 0
mov X16, 1
svc 0x80
.data
helloworld: .ascii "Hello World!\n"
```
which I compile with
```
Apple clang version 13.0.0 (clang-1300.0.29.3)
Target: arm64-apple-darwin21.1.0
```
the CodeBrowser in Ghidra doesn't recognise the data section, but instead gives me the following interpretation (starting at 0x20) :
```
//
// __text
// __TEXT
// ram:00000000-ram:0000001f
//
**************************************************************
* *
* FUNCTION *
**************************************************************
undefined ltmp0()
undefined w0:1 <RETURN>
_start XREF[1]: Entry Point(*)
ltmp0
00000000 20 00 80 d2 mov x0,#0x1
00000004 01 00 00 90 adrp x1,0x0
00000008 a2 01 80 d2 mov x2,#0xd
0000000c 90 00 80 d2 mov x16,#0x4
00000010 01 10 00 d4 svc 0x80
00000014 00 00 80 d2 mov x0,#0x0
00000018 30 00 80 d2 mov x16,#0x1
0000001c 01 10 00 d4 svc 0x80
//
// __data
// __DATA
// ram:00000020-ram:0000002c
//
ltmp1
helloworld
00000020 48 65 6c 6c ldnp d8,d25,[x10, #-0x140]
00000024 6f 20 57 6f umlal2 v15.4S,v3.8H,v7.H[0x1]
00000028 72 ?? 72h r
00000029 6c ?? 6Ch l
0000002a 64 ?? 64h d
0000002b 21 ?? 21h !
0000002c 0a ?? 0Ah
```
What am I missing/ doing wrong?
Thank you very much!
</div>
|
process
|
apple data section not recognised as such discussed in originally posted by p wave november hi all i have the following hello world code global start align text start mov adrp helloworld page mov mov svc mov mov svc data helloworld ascii hello world n which i compile with apple clang version clang target apple the codebrowser in ghidra doesn t recognise the data section but instead gives me the following interpretation starting at text text ram ram function undefined undefined start xref entry point mov adrp mov mov svc mov mov svc data data ram ram helloworld ldnp h r l d what am i missing doing wrong thank you very much
| 1
|
192,925
| 15,362,145,567
|
IssuesEvent
|
2021-03-01 19:04:57
|
SeitaBV/flexmeasures
|
https://api.github.com/repos/SeitaBV/flexmeasures
|
opened
|
Incomplete documentation of internal modules
|
documentation
|
[This page](https://flexmeasures.readthedocs.io/en/latest/source.html) (which index.html links to at the bottom) says it documents all internal modules, but in fact only documents a few.
|
1.0
|
Incomplete documentation of internal modules - [This page](https://flexmeasures.readthedocs.io/en/latest/source.html) (which index.html links to at the bottom) says it documents all internal modules, but in fact only documents a few.
|
non_process
|
incomplete documentation of internal modules which index html links to at the bottom says it documents all internal modules but in fact only documents a few
| 0
|
15,943
| 20,162,583,705
|
IssuesEvent
|
2022-02-09 23:16:40
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
PH Calc's & Global Settings
|
enhancement Process Heating
|
PH calculator results units should be set by the global settings

For sure all of these:

These are secondary:

Right now, if it has kW then probably just leave it alone; if it is MMBTU, it is the "fuel" settings
Let me know if there are any questions here.
|
1.0
|
PH Calc's & Global Settings - PH calculator results units should be set by the global settings

For sure all of these:

These are secondary:

Right now, if it has kW then probably just leave it alone; if it is MMBTU, it is the "fuel" settings
Let me know if there are any questions here.
|
process
|
ph calc s global settings ph calculator results units should be set by the global settings for sure all of these these are secondary right now if it has kw then probably just leave it alone if it is mmbtu it is fuel settings let me know if there are any questions here
| 1
|
5,294
| 8,101,186,107
|
IssuesEvent
|
2018-08-12 10:32:41
|
jackadull/jackadull-related
|
https://api.github.com/repos/jackadull/jackadull-related
|
closed
|
Add Travis Instructions
|
Development Process Wiki
|
Add a step to the [project creation documentation article][project-creation] about adding the `travis.yml` configuration to the issue branch.
Otherwise, Travis won't build the project.
[project-creation]: https://github.com/jackadull/jackadull-related/wiki/Creating-a-Jackadull-Project
|
1.0
|
Add Travis Instructions - Add a step to the [project creation documentation article][project-creation] about adding the `travis.yml` configuration to the issue branch.
Otherwise, Travis won't build the project.
[project-creation]: https://github.com/jackadull/jackadull-related/wiki/Creating-a-Jackadull-Project
|
process
|
add travis instructions add a step to the about adding the travis yml configuration to the issue branch otherwise travis won t build the project
| 1
|
13,114
| 15,500,049,523
|
IssuesEvent
|
2021-03-11 08:49:22
|
rladies/rladiesguide
|
https://api.github.com/repos/rladies/rladiesguide
|
closed
|
Add "thank you notes" tips?
|
content suggestion :spiral_notepad: rladies processes :bullettrain_side:
|
* When do you thank people (after they helped, later in a batch e.g. around the holidays)
* Who do you thank (speakers, sponsors, random helpers)
* How do you keep track of who to thank
* How do you send notes (emails? if snail mail how do you safely collect addresses? if snail mail what kind of card/paper do you use?)
* Any thank-you template?
|
1.0
|
Add "thank you notes" tips? - * When do you thank people (after they helped, later in a batch e.g. around the holidays)
* Who do you thank (speakers, sponsors, random helpers)
* How do you keep track of who to thank
* How do you send notes (emails? if snail mail how do you safely collect addresses? if snail mail what kind of card/paper do you use?)
* Any thank-you template?
|
process
|
add thank you notes tips when do you thank people after they helped later in a batch e g around the holidays who do you thank speakers sponsors random helpers how do you keep track of who to thank how do you send notes emails if snail mail how do you safely collect addresses if snail mail what kind of card paper do you use any thank you template
| 1
|
19,598
| 25,950,791,538
|
IssuesEvent
|
2022-12-17 15:18:14
|
Narikakun-Network/status-page
|
https://api.github.com/repos/Narikakun-Network/status-page
|
closed
|
🛑 Earthquake EEW Process Server is down
|
status earthquake-eew-process-server
|
In [`0c340b9`](https://github.com/Narikakun-Network/status-page/commit/0c340b994a8d8661ce2bc968f08992ed1b0f293b), Earthquake EEW Process Server ($EARTHQUAKE_EEW_SERVER) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
1.0
|
🛑 Earthquake EEW Process Server is down - In [`0c340b9`](https://github.com/Narikakun-Network/status-page/commit/0c340b994a8d8661ce2bc968f08992ed1b0f293b), Earthquake EEW Process Server ($EARTHQUAKE_EEW_SERVER) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
process
|
🛑 earthquake eew process server is down in earthquake eew process server earthquake eew server was down http code response time ms
| 1
|
19,811
| 26,201,225,901
|
IssuesEvent
|
2023-01-03 17:39:17
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Pipeline output variables documentation is incorrect
|
doc-bug Pri1 azure-devops-pipelines/svc azure-devops-pipelines-process/subsvc
|
The following documentation is not correct.
```
Use outputs in a different stage
To use the output from a different stage at the job level, you use the stageDependencies syntax:
At the stage level, the format for referencing variables from a different stage is dependencies.STAGE.outputs['JOB.TASK.VARIABLE']
At the job level, the format for referencing variables from a different stage is stageDependencies.STAGE.JOB.outputs['TASK.VARIABLE']
```
The right syntax to share an output variable between stages is `stageDependencies.STAGE.JOB.outputs['TASK.VARIABLE']`
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/variables.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Pipeline output variables documentation is incorrect - The following documentation is not correct.
```
Use outputs in a different stage
To use the output from a different stage at the job level, you use the stageDependencies syntax:
At the stage level, the format for referencing variables from a different stage is dependencies.STAGE.outputs['JOB.TASK.VARIABLE']
At the job level, the format for referencing variables from a different stage is stageDependencies.STAGE.JOB.outputs['TASK.VARIABLE']
```
The right syntax to share an output variable between stages is `stageDependencies.STAGE.JOB.outputs['TASK.VARIABLE']`
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/variables.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
pipeline output variables documentation is incorrect the following documentation is not correct use outputs in a different stage to use the output from a different stage at the job level you use the stagedependencies syntax at the stage level the format for referencing variables from a different stage is dependencies stage outputs at the job level the format for referencing variables from a different stage is stagedependencies stage job outputs the right syntax to share an output variable between stages is stagedependencies stage job outputs document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id bcdb content content source service azure devops pipelines sub service azure devops pipelines process github login juliakm microsoft alias jukullam
| 1
|
47,067
| 24,854,746,499
|
IssuesEvent
|
2022-10-27 00:26:55
|
parthenon-hpc-lab/parthenon
|
https://api.github.com/repos/parthenon-hpc-lab/parthenon
|
closed
|
Runtime MPI-IO parameters
|
enhancement performance io
|
Current some MPI-IO parameters are hardcoded, e.g., in `src/outputs/parthenon_hdf5.cpp `
```
PARTHENON_MPI_CHECK(MPI_Info_set(FILE_INFO_TEMPLATE, "access_style", "write_once"));
PARTHENON_MPI_CHECK(MPI_Info_set(FILE_INFO_TEMPLATE, "collective_buffering", "true"));
PARTHENON_MPI_CHECK(MPI_Info_set(FILE_INFO_TEMPLATE, "cb_block_size", "1048576"));
PARTHENON_MPI_CHECK(MPI_Info_set(FILE_INFO_TEMPLATE, "cb_buffer_size", "4194304"));
```
However, optimal values are likely (file)system and simulation dependent (especially when run at scale), i.e., to be determined by profiling for a specific setup.
Moreover, these values can be controlled through the environment, e.g., by ROMIO hints.
Thus, it'd be desirable to only overwrite if they are not explicitly set by the environment and potentially make them runtime configurable through the input file.
|
True
|
Runtime MPI-IO parameters - Current some MPI-IO parameters are hardcoded, e.g., in `src/outputs/parthenon_hdf5.cpp `
```
PARTHENON_MPI_CHECK(MPI_Info_set(FILE_INFO_TEMPLATE, "access_style", "write_once"));
PARTHENON_MPI_CHECK(MPI_Info_set(FILE_INFO_TEMPLATE, "collective_buffering", "true"));
PARTHENON_MPI_CHECK(MPI_Info_set(FILE_INFO_TEMPLATE, "cb_block_size", "1048576"));
PARTHENON_MPI_CHECK(MPI_Info_set(FILE_INFO_TEMPLATE, "cb_buffer_size", "4194304"));
```
However, optimal values are likely (file)system and simulation dependent (especially when run at scale), i.e., to be determined by profiling for a specific setup.
Moreover, these values can be controlled through the environment, e.g., by ROMIO hints.
Thus, it'd be desirable to only overwrite if they are not explicitly set by the environment and potentially make them runtime configurable through the input file.
|
non_process
|
runtime mpi io parameters current some mpi io parameters are hardcoded e g in src outputs parthenon cpp parthenon mpi check mpi info set file info template access style write once parthenon mpi check mpi info set file info template collective buffering true parthenon mpi check mpi info set file info template cb block size parthenon mpi check mpi info set file info template cb buffer size however optimal values are likely file system and simulation dependent especially when run at scale i e to be determined by profiling for a specific setup moreover these values can be controlled through the environment e g by romio hints thus it d be desirable to only overwrite if they are not explicitly set by the environment and potentially make them runtime configurable through the input file
| 0
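The "only overwrite if not explicitly set" idea from the record above can be sketched as a guard around `MPI_Info_set`. This is an assumption-laden sketch, not parthenon code: the helper name is invented, and whether environment-provided ROMIO hints are visible through `MPI_Info_get` on a fresh info object is implementation-dependent, so treat it purely as an illustration of the overwrite guard.
```cpp
#include <mpi.h>

// Set an MPI-IO hint only if a value for that key is not already
// present in the info object (e.g. placed there by earlier config).
void set_hint_if_unset(MPI_Info info, const char *key, const char *value) {
    char existing[MPI_MAX_INFO_VAL];
    int flag = 0;
    MPI_Info_get(info, key, MPI_MAX_INFO_VAL - 1, existing, &flag);
    if (!flag) {
        MPI_Info_set(info, key, value);
    }
}
```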
|
414,820
| 12,112,003,734
|
IssuesEvent
|
2020-04-21 13:10:27
|
CCAFS/MARLO
|
https://api.github.com/repos/CCAFS/MARLO
|
closed
|
[GM] (Clarisa) Modify Milestone & Outcome (Table 5)
|
Priority - Medium Type -Task
|
Milestone & Outcome.
- [x] Update structure of Sub-Idos for both Milestone and Outcome
- [x] Find Milestone & Outcome by SMO code rather than DB id.
- [x] Add a year parameter to find Milestone or Outcome.
**Deliverable:** _Clarisa_ functionality
**Move to Review when:** Functionality will be implemented on _Clarisa Dev_
**Move to Closed when:** Functionality will be implemented on _Doppler_
|
1.0
|
[GM] (Clarisa) Modify Milestone & Outcome (Table 5) - Milestone & Outcome.
- [x] Update structure of Sub-Idos for both Milestone and Outcome
- [x] Find Milestone & Outcome by SMO code rather than DB id.
- [x] Add a year parameter to find Milestone or Outcome.
**Deliverable:** _Clarisa_ functionality
**Move to Review when:** Functionality will be implemented on _Clarisa Dev_
**Move to Closed when:** Functionality will be implemented on _Doppler_
|
non_process
|
clarisa modify milestone outcome table milestone outcome update structure of sub idos for both milestone and outcome find milestone outcome by smo code rather than db id add a year parameter to find milestone or outcome deliverable clarisa functionality move to review when functionality will be implemented on clarisa dev move to closed when functionality will be implemented on doppler
| 0
|
537,449
| 15,729,228,727
|
IssuesEvent
|
2021-03-29 14:38:26
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
opened
|
[Coverity CID: 220426] Out-of-bounds access in tests/lib/c_lib/src/main.c
|
Coverity bug priority: low
|
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/169144afa1826511ee6ec3f53d590b2c0d39d3d4/tests/lib/c_lib/src/main.c#L536
Category: Memory - corruptions
Function: `test_memstr`
Component: Tests
CID: [220426](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=220426)
Details:
https://github.com/zephyrproject-rtos/zephyr/blob/169144afa1826511ee6ec3f53d590b2c0d39d3d4/tests/lib/c_lib/src/main.c#L536
```
530 zassert_is_null(memchr(str, 'a', 0), "memchr 0 error");
531 zassert_not_null(memchr(str, 'e', 10), "memchr serach e");
532 zassert_is_null(memchr(str, 'e', 1), "memchr e error");
533
534 for (i = 0; i < 20; i++) {
535 for (j = 0; j < 20; j++) {
>>> CID 220426: Memory - corruptions (OVERRUN)
>>> Calling "memcpy" with "&arr[i]" and "0U" is suspicious because the function call may access "arr" at byte "i + 18446744073709551615U". [Note: The source code implementation of the function has been overridden by a builtin model.]
536 memcpy(&arr[i], num, 0);
537 ret = memcmp(&num[j], &arr[i], 0);
538 zassert_true((ret == 0), "memcpy failed");
539 memcpy(&arr[i], &num[j], 1);
540 ret = memcmp(&num[j], &arr[i], 1);
541 zassert_true((ret == 0), "memcpy failed");
```
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v29271/p12996
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
|
1.0
|
[Coverity CID: 220426] Out-of-bounds access in tests/lib/c_lib/src/main.c -
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/169144afa1826511ee6ec3f53d590b2c0d39d3d4/tests/lib/c_lib/src/main.c#L536
Category: Memory - corruptions
Function: `test_memstr`
Component: Tests
CID: [220426](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=220426)
Details:
https://github.com/zephyrproject-rtos/zephyr/blob/169144afa1826511ee6ec3f53d590b2c0d39d3d4/tests/lib/c_lib/src/main.c#L536
```
530 zassert_is_null(memchr(str, 'a', 0), "memchr 0 error");
531 zassert_not_null(memchr(str, 'e', 10), "memchr serach e");
532 zassert_is_null(memchr(str, 'e', 1), "memchr e error");
533
534 for (i = 0; i < 20; i++) {
535 for (j = 0; j < 20; j++) {
>>> CID 220426: Memory - corruptions (OVERRUN)
>>> Calling "memcpy" with "&arr[i]" and "0U" is suspicious because the function call may access "arr" at byte "i + 18446744073709551615U". [Note: The source code implementation of the function has been overridden by a builtin model.]
536 memcpy(&arr[i], num, 0);
537 ret = memcmp(&num[j], &arr[i], 0);
538 zassert_true((ret == 0), "memcpy failed");
539 memcpy(&arr[i], &num[j], 1);
540 ret = memcmp(&num[j], &arr[i], 1);
541 zassert_true((ret == 0), "memcpy failed");
```
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v29271/p12996
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
|
non_process
|
out of bounds access in tests lib c lib src main c static code scan issues found in file category memory corruptions function test memstr component tests cid details zassert is null memchr str a memchr error zassert not null memchr str e memchr serach e zassert is null memchr str e memchr e error for i i i for j j j cid memory corruptions overrun calling memcpy with arr and is suspicious because the function call may access arr at byte i memcpy arr num ret memcmp num arr zassert true ret memcpy failed memcpy arr num ret memcmp num arr zassert true ret memcpy failed please fix or provide comments in coverity using the link note this issue was created automatically priority was set based on classification of the file affected and the impact field in coverity assignees were set using the codeowners file
| 0
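The record above is an instance of the zero-length `memcpy` pitfall: analyzers flag zero-length calls whose pointer arguments they cannot prove valid, since the standard still requires valid pointers even when no bytes are copied. Below is a minimal sketch of the usual guard, assuming nothing about the zephyr test beyond what the record shows; the helper name is invented.
```cpp
#include <cstddef>
#include <cstring>

// Skip the call entirely for n == 0 so that pointers the analyzer
// cannot prove valid are never passed to memcpy.
void safe_copy(char *dst, const char *src, std::size_t n) {
    if (n == 0) return;
    std::memcpy(dst, src, n);
}
```
In the flagged test, the loop bounds keep `&arr[i]` in range, so the warning is conservative; the guard above is simply the cheapest way to silence it without weakening the test.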
|
718,508
| 24,720,131,056
|
IssuesEvent
|
2022-10-20 10:01:11
|
ukwa/w3act
|
https://api.github.com/repos/ukwa/w3act
|
opened
|
Non-authorised DDHPAT users can Watch targets
|
Bug Medium Priority ddhapt
|
This possibly relates to https://github.com/ukwa/w3act/issues/621 (it's still unclear if issue 621 is caused by a bug or by ACT users)
Example target: https://www.webarchive.org.uk/act/targets/168983
It seems that "archivist" roles, who aren't authorised to Watch targets, can do so:


However, "expert_user" roles are still unable to Watch targets:


|
1.0
|
Non-authorised DDHPAT users can Watch targets - This possibly relates to https://github.com/ukwa/w3act/issues/621 (it's still unclear if issue 621 is caused by a bug or by ACT users)
Example target: https://www.webarchive.org.uk/act/targets/168983
It seems that "archivist" roles, who aren't authorised to Watch targets, can do so:


However, "expert_user" roles are still unable to Watch targets:


|
non_process
|
non authorised ddhpat users can watch targets this possibly relates to still unclear if issue is caused by a bug or act users example target it seems that archivist roles who aren t authorised to watch targets can do so however expert user roles are still unable to watch targets
| 0
|
67,928
| 14,892,574,069
|
IssuesEvent
|
2021-01-21 03:11:21
|
fufunoyu/example-pip-travis
|
https://api.github.com/repos/fufunoyu/example-pip-travis
|
opened
|
CVE-2020-35655 (Medium) detected in Pillow-3.2.0.tar.gz
|
security vulnerability
|
## CVE-2020-35655 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-3.2.0.tar.gz</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/e2/af/0a3981fffc5cd43078eb8b1057702e0dd2d5771e5aaa36cbd140e32f8473/Pillow-3.2.0.tar.gz">https://files.pythonhosted.org/packages/e2/af/0a3981fffc5cd43078eb8b1057702e0dd2d5771e5aaa36cbd140e32f8473/Pillow-3.2.0.tar.gz</a></p>
<p>Path to dependency file: example-pip-travis/requirements.txt</p>
<p>Path to vulnerable library: example-pip-travis/requirements.txt</p>
<p>
Dependency Hierarchy:
- image-1.5.5.tar.gz (Root Library)
- :x: **Pillow-3.2.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/example-pip-travis/commit/f05c5bdc5c9d254b91f1a8b8502cbb3ccc9f00dd">f05c5bdc5c9d254b91f1a8b8502cbb3ccc9f00dd</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Pillow before 8.1.0, SGIRleDecode has a 4-byte buffer over-read when decoding crafted SGI RLE image files because offsets and length tables are mishandled.
<p>Publish Date: 2021-01-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35655>CVE-2020-35655</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35655">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35655</a></p>
<p>Release Date: 2021-01-12</p>
<p>Fix Resolution: 8.1.0</p>
</p>
</details>
<p></p>
|
True
|
CVE-2020-35655 (Medium) detected in Pillow-3.2.0.tar.gz - ## CVE-2020-35655 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-3.2.0.tar.gz</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/e2/af/0a3981fffc5cd43078eb8b1057702e0dd2d5771e5aaa36cbd140e32f8473/Pillow-3.2.0.tar.gz">https://files.pythonhosted.org/packages/e2/af/0a3981fffc5cd43078eb8b1057702e0dd2d5771e5aaa36cbd140e32f8473/Pillow-3.2.0.tar.gz</a></p>
<p>Path to dependency file: example-pip-travis/requirements.txt</p>
<p>Path to vulnerable library: example-pip-travis/requirements.txt</p>
<p>
Dependency Hierarchy:
- image-1.5.5.tar.gz (Root Library)
- :x: **Pillow-3.2.0.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/example-pip-travis/commit/f05c5bdc5c9d254b91f1a8b8502cbb3ccc9f00dd">f05c5bdc5c9d254b91f1a8b8502cbb3ccc9f00dd</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Pillow before 8.1.0, SGIRleDecode has a 4-byte buffer over-read when decoding crafted SGI RLE image files because offsets and length tables are mishandled.
<p>Publish Date: 2021-01-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35655>CVE-2020-35655</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35655">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35655</a></p>
<p>Release Date: 2021-01-12</p>
<p>Fix Resolution: 8.1.0</p>
</p>
</details>
<p></p>
|
non_process
|
cve medium detected in pillow tar gz cve medium severity vulnerability vulnerable library pillow tar gz python imaging library fork library home page a href path to dependency file example pip travis requirements txt path to vulnerable library example pip travis requirements txt dependency hierarchy image tar gz root library x pillow tar gz vulnerable library found in head commit a href vulnerability details in pillow before sgirledecode has a byte buffer over read when decoding crafted sgi rle image files because offsets and length tables are mishandled publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact low integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
| 0
|
11,290
| 9,082,125,854
|
IssuesEvent
|
2019-02-17 09:18:07
|
comit-network/comit-rs
|
https://api.github.com/repos/comit-network/comit-rs
|
closed
|
Travis doesn't run Rust Doc-tests
|
infrastructure testing
|
When running `cargo test` locally `Doc-tests` are run, but on Travis that is not the case.
Hint: check the Makefile
|
1.0
|
Travis doesn't run Rust Doc-tests - When running `cargo test` locally `Doc-tests` are run, but on Travis that is not the case.
Hint: check the Makefile
|
non_process
|
travis doesn t run rust doc tests when running cargo test locally doc tests are run but on travis that is not the case hint check the makefile
| 0
|
14,990
| 18,666,639,592
|
IssuesEvent
|
2021-10-30 00:23:54
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
condition not aligned with step
|
devops/prod devops-cicd-process/tech
|
In the example below, the condition in the last line should be aligned with the step (script) rather than with the steps group.
```
# parameters.yml
parameters:
- name: doThing
default: false # value passed to the condition
type: boolean
jobs:
- job: B
steps:
- script: echo I did a thing
condition: and(succeeded(), eq('${{ parameters.doThing }}', 'true'))
```
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 21e5cee4-eaae-3a96-db91-540ac759e83a
* Version Independent ID: 9bdc837c-ffe0-d999-f922-f3a5debc7f92
* Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/conditions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
condition not aligned with step - In the example below, the condition in the last line should be aligned with the step (script) rather than with the steps group.
```
# parameters.yml
parameters:
- name: doThing
default: false # value passed to the condition
type: boolean
jobs:
- job: B
steps:
- script: echo I did a thing
condition: and(succeeded(), eq('${{ parameters.doThing }}', 'true'))
```
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 21e5cee4-eaae-3a96-db91-540ac759e83a
* Version Independent ID: 9bdc837c-ffe0-d999-f922-f3a5debc7f92
* Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/conditions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
condition not aligned with step in the example below the condition in the last line should be aligned with the step script rather than with the steps group parameters yml parameters name dothing default false value passed to the condition type boolean jobs job b steps script echo i did a thing condition and succeeded eq parameters dothing true document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id eaae version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
26,131
| 7,785,905,451
|
IssuesEvent
|
2018-06-06 17:14:33
|
paypal/NNAnalytics
|
https://api.github.com/repos/paypal/NNAnalytics
|
opened
|
Setup Codecov in Travis CI build
|
build enhancement
|
We now have Codecov hosting at: https://codecov.io/gh/paypal/NNAnalytics
We already have JaCoCo plugin installed in NNA Gradle build; we just need to configure some build options and set the Travis build. Documentation how to do so provided here: https://github.com/codecov/example-gradle
|
1.0
|
Setup Codecov in Travis CI build - We now have Codecov hosting at: https://codecov.io/gh/paypal/NNAnalytics
We already have JaCoCo plugin installed in NNA Gradle build; we just need to configure some build options and set the Travis build. Documentation how to do so provided here: https://github.com/codecov/example-gradle
|
non_process
|
setup codecov in travis ci build we now have codecov hosting at we already have jacoco plugin installed in nna gradle build we just need to configure some build options and set the travis build documentation how to do so provided here
| 0
|
11,650
| 14,503,441,996
|
IssuesEvent
|
2020-12-11 22:43:15
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
investigate why functional test takes 8 minutes on windows
|
kind/process os/windows priority/important-soon
|
On Linux, the functional part of the integration test takes 2 minutes, but on Windows it takes 8 minutes (both nested Azure VMs and physical Windows).
We need to investigate which part of the test is slower and why.
```
make integration-functional-only
```
|
1.0
|
investigate why functional test takes 8 minutes on windows - On Linux, the functional part of the integration test takes 2 minutes, but on Windows it takes 8 minutes (both nested Azure VMs and physical Windows).
We need to investigate which part of the test is slower and why.
```
make integration-functional-only
```
|
process
|
investigate why functional test takes minutes on windows on linux the functional part of the integration test takes minutes but on windows it takes minutes both nested azure vms and physical windows we need to investigate which part of the test is slower and why make integration functional only
| 1
|
207,594
| 16,089,141,637
|
IssuesEvent
|
2021-04-26 14:44:13
|
api3dao/api3-docs
|
https://api.github.com/repos/api3dao/api3-docs
|
opened
|
OIS Security Key needs improvement
|
documentation
|
Currently, the api-integration.md (OIS) file pushes the user to an OAS web page to learn how to add security to an OIS object. It may be better to try to explain security inline.
|
1.0
|
OIS Security Key needs improvement - Currently, the api-integration.md (OIS) file pushes the user to an OAS web page to learn how to add security to an OIS object. It may be better to try to explain security inline.
|
non_process
|
ois security key needs improvement currently the api integration md ois file pushes the user to an oas web page to learn how to add security to an ois object it may be better to try to explain security inline
| 0
|
21,540
| 29,864,211,309
|
IssuesEvent
|
2023-06-20 01:18:27
|
cncf/tag-security
|
https://api.github.com/repos/cncf/tag-security
|
closed
|
where to put assessment summary slides in the repo?
|
assessment-process inactive
|
I don't think the [assessment summary slides](https://docs.google.com/presentation/d/1Cinwa2grYSdP4yNQKS1ICIBH3oSkBPgDZl2RlG5asRA/edit#slide=id.g5a0cdb412d_0_0) are linked from anywhere...
I would prefer an open format, but don't know of one that works well for slides. In the meantime, these should be linked somewhere
|
1.0
|
where to put assessment summary slides in the repo? - I don't think the [assessment summary slides](https://docs.google.com/presentation/d/1Cinwa2grYSdP4yNQKS1ICIBH3oSkBPgDZl2RlG5asRA/edit#slide=id.g5a0cdb412d_0_0) are linked from anywhere...
I would prefer an open format, but don't know of one that works well for slides. In the meantime, these should be linked somewhere
|
process
|
where to put assessment summary slides in the repo i don t think the are linked from anywhere i would prefer an open format but don t know of one that works well for slides in the meantime these should be linked somewhere
| 1
|
5,026
| 2,610,164,992
|
IssuesEvent
|
2015-02-26 18:52:17
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
closed
|
Gameplay Error
|
auto-migrated Priority-Medium Type-Defect
|
```
Commander Cody is only 20 credits
Delta Squad is only 20 credits
Boris Offee is 100 credits
Luminara Unduli is 100 credits
Obi-Wan is 230 credits
Plo koon is 100 credits
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 4 May 2011 at 11:29
|
1.0
|
Gameplay Error - ```
Commander Cody is only 20 credits
Delta Squad is only 20 credits
Boris Offee is 100 credits
Luminara Unduli is 100 credits
Obi-Wan is 230 credits
Plo koon is 100 credits
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 4 May 2011 at 11:29
|
non_process
|
gameplay error commander cody is only credits delta squad is only credits boris offee is credits luminara unduli is credits obi wan is credits plo koon is credits original issue reported on code google com by gmail com on may at
| 0
|
38,694
| 6,689,172,039
|
IssuesEvent
|
2017-10-08 22:57:49
|
general-language-syntax/GLS
|
https://api.github.com/repos/general-language-syntax/GLS
|
opened
|
Update CONTRIBUTING.md to be more helpful
|
claimed documentation
|
It's quite old and not very useful for getting started!
|
1.0
|
Update CONTRIBUTING.md to be more helpful - It's quite old and not very useful for getting started!
|
non_process
|
update contributing md to be more helpful it s quite old and not very useful for getting started
| 0
|
667,653
| 22,495,528,150
|
IssuesEvent
|
2022-06-23 07:12:45
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
business.apple.com - site is not usable
|
browser-firefox priority-critical engine-gecko
|
<!-- @browser: Firefox 101.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:101.0) Gecko/20100101 Firefox/101.0 -->
<!-- @reported_with: unknown -->
**URL**: https://business.apple.com/
**Browser / Version**: Firefox 101.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
Simply attempted to load the URL https://business.apple.com/ and was immediately redirected to the unsupported page which specifically calls out Firefox as supported. Firefox 101.0.1 (64-bit) is the latest at this time. Also tried in private window with all add-on's disabled and it still wouldn't load.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/6/031f845a-cf42-48c2-a0cd-f5b2be981786.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
business.apple.com - site is not usable - <!-- @browser: Firefox 101.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:101.0) Gecko/20100101 Firefox/101.0 -->
<!-- @reported_with: unknown -->
**URL**: https://business.apple.com/
**Browser / Version**: Firefox 101.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
Simply attempted to load the URL https://business.apple.com/ and was immediately redirected to the unsupported page which specifically calls out Firefox as supported. Firefox 101.0.1 (64-bit) is the latest at this time. Also tried in private window with all add-on's disabled and it still wouldn't load.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/6/031f845a-cf42-48c2-a0cd-f5b2be981786.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
business apple com site is not usable url browser version firefox operating system windows tested another browser yes chrome problem type site is not usable description browser unsupported steps to reproduce simply attempted to load the url and was immediately redirected to the unsupported page which specifically calls out firefox as supported firefox bit is the latest at this time also tried in private window with all add on s disabled and it still wouldn t load view the screenshot img alt screenshot src browser configuration none from with ❤️
| 0
|
63,978
| 3,203,093,257
|
IssuesEvent
|
2015-10-02 17:13:04
|
ceylon/ceylon-js
|
https://api.github.com/repos/ceylon/ceylon-js
|
closed
|
iterable weirdness
|
bug high priority
|
I'm guessing this is new:
```ceylon
shared void run() {
print({ 1, 2 }.filter((Integer x) => true)); // prints "1"
print({ 1, 2 }.filter((Integer x) => true).sequence()); // error
}
```
console:
```
jvasileff@orion:simple$ ceylon compile-js simple && ceylon run-js simple
Note: Created module simple/1.0.0
1
/private/tmp/simple/modules/simple/1.0.0/simple-1.0.0.js:12
le:m$1.mtt$([{t:m$1.Integer}]),Return$Callable:{t:m$1.$_Boolean}})).sequence()
^
TypeError: Object 1 has no method 'sequence'
at run (/private/tmp/simple/modules/simple/1.0.0/simple-1.0.0.js:12:218)
at [eval]:1:282
at Object.<anonymous> ([eval]-wrapper:6:22)
at Module._compile (module.js:456:26)
at evalScript (node.js:536:25)
at startup (node.js:80:7)
at node.js:906:3
```
|
1.0
|
iterable weirdness - I'm guessing this is new:
```ceylon
shared void run() {
print({ 1, 2 }.filter((Integer x) => true)); // prints "1"
print({ 1, 2 }.filter((Integer x) => true).sequence()); // error
}
```
console:
```
jvasileff@orion:simple$ ceylon compile-js simple && ceylon run-js simple
Note: Created module simple/1.0.0
1
/private/tmp/simple/modules/simple/1.0.0/simple-1.0.0.js:12
le:m$1.mtt$([{t:m$1.Integer}]),Return$Callable:{t:m$1.$_Boolean}})).sequence()
^
TypeError: Object 1 has no method 'sequence'
at run (/private/tmp/simple/modules/simple/1.0.0/simple-1.0.0.js:12:218)
at [eval]:1:282
at Object.<anonymous> ([eval]-wrapper:6:22)
at Module._compile (module.js:456:26)
at evalScript (node.js:536:25)
at startup (node.js:80:7)
at node.js:906:3
```
|
non_process
|
iterable weirdness i m guessing this is new ceylon shared void run print filter integer x true prints print filter integer x true sequence error console jvasileff orion simple ceylon compile js simple ceylon run js simple note created module simple private tmp simple modules simple simple js le m mtt return callable t m boolean sequence typeerror object has no method sequence at run private tmp simple modules simple simple js at at object wrapper at module compile module js at evalscript node js at startup node js at node js
| 0
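The Ceylon record above hinges on lazy iterables: `filter` should stay lazy and `sequence()` should materialize the surviving elements, but the JS backend collapsed the filtered iterable to a bare value. A minimal Python analogue of the expected semantics (Python stands in for Ceylon here; this is a sketch, not Ceylon's API):
```python
# Python analogue of the Ceylon snippet's expectation: filter() is lazy,
# and materializing it should yield the surviving elements, not a scalar.
items = [1, 2]
filtered = filter(lambda x: True, items)  # lazy, like Ceylon's Iterable.filter
materialized = list(filtered)             # plays the role of .sequence()
assert materialized == [1, 2]             # the bug report saw only "1"
print(materialized)
```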
|
72,705
| 31,769,019,983
|
IssuesEvent
|
2023-09-12 10:32:15
|
gauravrs18/issue_onboarding
|
https://api.github.com/repos/gauravrs18/issue_onboarding
|
closed
|
dev-angular-code-account-services-new-connection-component-approve-component
-consumer-details-component
-connect-component
-work-order-type-component
|
CX-account-services
|
dev-angular-code-account-services-new-connection-component-approve-component
-consumer-details-component
-connect-component
-work-order-type-component
|
1.0
|
dev-angular-code-account-services-new-connection-component-approve-component
-consumer-details-component
-connect-component
-work-order-type-component - dev-angular-code-account-services-new-connection-component-approve-component
-consumer-details-component
-connect-component
-work-order-type-component
|
non_process
|
dev angular code account services new connection component approve component consumer details component connect component work order type component dev angular code account services new connection component approve component consumer details component connect component work order type component
| 0
|
16,720
| 21,882,870,731
|
IssuesEvent
|
2022-05-19 15:43:04
|
RobertCraigie/prisma-client-py
|
https://api.github.com/repos/RobertCraigie/prisma-client-py
|
closed
|
prisma generate crash with new release of pydantic (1.9.1)
|
bug/2-confirmed kind/bug process/candidate priority/high
|
## Bug description
`prisma generate` crashes with this error when pydantic 1.9.1 is installed by pip as a dependency:
```shell
$ prisma generate
Prisma schema loaded from prisma/schema.prisma
Error:
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Model
info
cannot pickle 'generator' object (type=type_error)
```
## How to reproduce
1. Install prisma from pip
2. If the installed version of pydantic is not 1.9.1, force the install of this version with `pip install pydantic==1.9.1`
3. `prisma generate` any schema
4. See error
## Expected behavior
No error
## Prisma information
```prisma
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
generator client_py {
provider = "prisma-client-py"
interface = "sync"
output = "../src/thoth_database/prisma"
}
model Article {
article_id Int @id @default(autoincrement())
}
```
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: Linux
- Database: PostgreSQL
- Python version: Python 3.9.10
- Prisma version:
```
prisma : 3.13.0
prisma client python : 0.6.5
platform : debian-openssl-1.1.x
engines : efdf9b1183dddfd4258cd181a72125755215ab7b
install path : /home/elassyo/.pyenv/versions/3.9.10/lib/python3.9/site-packages/prisma
installed extras : []
```
|
1.0
|
prisma generate crash with new release of pydantic (1.9.1) - ## Bug description
`prisma generate` crashes with this error when pydantic 1.9.1 is installed by pip as a dependency:
```shell
$ prisma generate
Prisma schema loaded from prisma/schema.prisma
Error:
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Model
info
cannot pickle 'generator' object (type=type_error)
```
## How to reproduce
1. Install prisma from pip
2. If the installed version of pydantic is not 1.9.1, force the install of this version with `pip install pydantic==1.9.1`
3. `prisma generate` any schema
4. See error
## Expected behavior
No error
## Prisma information
```prisma
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
generator client_py {
provider = "prisma-client-py"
interface = "sync"
output = "../src/thoth_database/prisma"
}
model Article {
article_id Int @id @default(autoincrement())
}
```
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: Linux
- Database: PostgreSQL
- Python version: Python 3.9.10
- Prisma version:
```
prisma : 3.13.0
prisma client python : 0.6.5
platform : debian-openssl-1.1.x
engines : efdf9b1183dddfd4258cd181a72125755215ab7b
install path : /home/elassyo/.pyenv/versions/3.9.10/lib/python3.9/site-packages/prisma
installed extras : []
```
|
process
|
prisma generate crash with new release of pydantic bug description prisma generate crashes with this error when pydantic is installed by pip as a dependency shell prisma generate prisma schema loaded from prisma schema prisma error file pydantic main py line in pydantic main basemodel init pydantic error wrappers validationerror validation error for model info cannot pickle generator object type type error how to reproduce install prisma from pip if the installed version of pydantic is not force the install of this version with pip install pydantic prisma generate any schema see error expected behavior no error prisma information prisma datasource db provider postgresql url env database url generator client py provider prisma client py interface sync output src thoth database prisma model article article id int id default autoincrement environment setup os linux database postgresql python version pyhon prisma version prisma prisma client python platform debian openssl x engines install path home elassyo pyenv versions lib site packages prisma installed extras
| 1
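The traceback in the record above ends in `cannot pickle 'generator' object` raised from pydantic's `BaseModel.__init__`. A plausible reading, offered only as a sketch of the root cause (not taken from the prisma codebase), is that pydantic 1.9.1 deep-copies certain field values during validation and generators cannot be copied; the snippet below reproduces just that low-level failure, independent of prisma:
```python
# Minimal sketch of the assumed failure mode: deep-copying a generator
# raises the same "cannot pickle 'generator' object" TypeError seen in
# the report's traceback.
import copy

gen = (x for x in range(3))
try:
    copy.deepcopy(gen)
except TypeError as exc:
    print(exc)  # cannot pickle 'generator' object
```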
|
235,274
| 7,736,056,787
|
IssuesEvent
|
2018-05-27 21:53:49
|
Blockrazor/blockrazor
|
https://api.github.com/repos/Blockrazor/blockrazor
|
closed
|
problem: [/addcoin] user not conveniently informed of errors
|
Paid-contributor Priority done
|
Problem: in the add coin wizard, most of the validation happens on the server side, only after the form is completely submitted.
So when there is a validation issue in an intermediary step, that step is not focused; the user has to go back step by step searching for where the issue is.
Solution:
1. introduce step-by-step validation with the server.
(currently there are only client-side step validations, which only check whether some fields are non-empty, and do not check that their values are valid or within range etc.)
or
2. after the wizard form is submitted, if there is a validation issue in an intermediary step, automatically navigate to that step and highlight the fields which have issues
or
3. at least highlight the step headers where validation issues exist, so the user can manually navigate to that step and correct them
(If step-by-step dynamic validation is not introduced, some validations like "currency symbol cannot exceed 5 characters" also need to happen on the client side, otherwise it is very annoying to be sent back at the final moment. This is not needed if going with solution #1.)
|
1.0
|
problem: [/addcoin] user not conveniently informed of errors - Problem: in the add coin wizard, most of the validation happens on the server side, only after the form is completely submitted.
So when there is a validation issue in an intermediary step, that step is not focused; the user has to go back step by step searching for where the issue is.
Solution:
1. introduce step-by-step validation with the server.
(currently there are only client-side step validations, which only check whether some fields are non-empty, and do not check that their values are valid or within range etc.)
or
2. after the wizard form is submitted, if there is a validation issue in an intermediary step, automatically navigate to that step and highlight the fields which have issues
or
3. at least highlight the step headers where validation issues exist, so the user can manually navigate to that step and correct them
(If step-by-step dynamic validation is not introduced, some validations like "currency symbol cannot exceed 5 characters" also need to happen on the client side, otherwise it is very annoying to be sent back at the final moment. This is not needed if going with solution #1.)
|
non_process
|
problem user not conveniently informed of errors problem for add coin wizard the more of validations happens on the server side only after completely submitting the form so when there is validation issue in an intermediary step it wont focused user has to go back step by step searching where s the issue solution introduce step by step validations with server currently there are only client side step validations which only checks whether some fields non empty and not checks their values are valid or within the range etc or after wizard form submitted if there is validation issue in intermediary step automatically navigate to that step and highlight the fields which has issues or at least highlight the step headers where the validation issues exist so user can manually navigate to that step and correct if step by step dynamic validation not introducing some validations like currency symbol cannot exceed characters need to be also happen in the client side otherwise very annoying to go back in the final moment it is ok if going with solution
| 0
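Solution 1 in the addcoin record above amounts to running server-side rules per wizard step instead of only on final submit. A minimal sketch of that shape, using the issue's own example rule (the step numbers and field names are hypothetical, not Blockrazor's):
```python
# Hedged sketch of per-step server-side validation for a wizard form.
def validate_step(step: int, data: dict) -> list:
    errors = []
    if step == 1 and len(data.get("symbol", "")) > 5:
        errors.append("currency symbol cannot exceed 5 characters")
    if step == 2 and not data.get("name"):
        errors.append("coin name is required")
    return errors

print(validate_step(1, {"symbol": "TOOLONG"}))  # flags the error at its own step
```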
|
131,259
| 18,234,879,121
|
IssuesEvent
|
2021-10-01 05:01:35
|
graywidjaya/snyk-scanning-testing
|
https://api.github.com/repos/graywidjaya/snyk-scanning-testing
|
opened
|
CVE-2019-10202 (High) detected in jackson-databind-2.9.8.jar
|
security vulnerability
|
## CVE-2019-10202 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: snyk-scanning-testing/ProductManager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.3.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/graywidjaya/snyk-scanning-testing/commit/8e11d4935d4cae9cfc1d6d0b55433a3b1002a16e">8e11d4935d4cae9cfc1d6d0b55433a3b1002a16e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A series of deserialization vulnerabilities have been discovered in Codehaus 1.9.x implemented in EAP 7. This CVE fixes CVE-2017-17485, CVE-2017-7525, CVE-2017-15095, CVE-2018-5968, CVE-2018-7489, CVE-2018-1000873, CVE-2019-12086 reported for FasterXML jackson-databind by implementing a whitelist approach that will mitigate these vulnerabilities and future ones alike.
<p>Publish Date: 2019-10-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10202>CVE-2019-10202</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://access.redhat.com/errata/RHSA-2019:2938">https://access.redhat.com/errata/RHSA-2019:2938</a></p>
<p>Release Date: 2019-10-01</p>
<p>Fix Resolution: JBoss Enterprise Application Platform - 7.2.4;com.fasterxml.jackson.core:jackson-databind:2.9.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-10202 (High) detected in jackson-databind-2.9.8.jar - ## CVE-2019-10202 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: snyk-scanning-testing/ProductManager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.3.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.3.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/graywidjaya/snyk-scanning-testing/commit/8e11d4935d4cae9cfc1d6d0b55433a3b1002a16e">8e11d4935d4cae9cfc1d6d0b55433a3b1002a16e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A series of deserialization vulnerabilities have been discovered in Codehaus 1.9.x implemented in EAP 7. This CVE fixes CVE-2017-17485, CVE-2017-7525, CVE-2017-15095, CVE-2018-5968, CVE-2018-7489, CVE-2018-1000873, CVE-2019-12086 reported for FasterXML jackson-databind by implementing a whitelist approach that will mitigate these vulnerabilities and future ones alike.
<p>Publish Date: 2019-10-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10202>CVE-2019-10202</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://access.redhat.com/errata/RHSA-2019:2938">https://access.redhat.com/errata/RHSA-2019:2938</a></p>
<p>Release Date: 2019-10-01</p>
<p>Fix Resolution: JBoss Enterprise Application Platform - 7.2.4;com.fasterxml.jackson.core:jackson-databind:2.9.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file snyk scanning testing productmanager pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library found in head commit a href found in base branch main vulnerability details a series of deserialization vulnerabilities have been discovered in codehaus x implemented in eap this cve fixes cve cve cve cve cve cve cve reported for fasterxml jackson databind by implementing a whitelist approach that will mitigate these vulnerabilities and future ones alike publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jboss enterprise application platform com fasterxml jackson core jackson databind step up your open source security game with whitesource
| 0
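The fix described in the advisory above is a whitelist: only explicitly allowed classes may be instantiated during deserialization. A language-neutral sketch of that idea in Python (the type names are hypothetical; the real fix lives in jackson-databind's Java code):
```python
# Hedged sketch of whitelist-based deserialization: unknown type names are
# rejected before any object is constructed.
from dataclasses import dataclass

@dataclass
class User:
    name: str

REGISTRY = {"User": User}   # hypothetical application types
ALLOWED = set(REGISTRY)     # the whitelist

def safe_deserialize(type_name: str, payload: dict):
    if type_name not in ALLOWED:
        raise ValueError(f"type {type_name!r} is not whitelisted")
    return REGISTRY[type_name](**payload)

print(safe_deserialize("User", {"name": "alice"}))
```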
|
13,539
| 16,075,569,002
|
IssuesEvent
|
2021-04-25 09:29:45
|
didi/mpx
|
https://api.github.com/repos/didi/mpx
|
closed
|
[Bug report] Cannot find name 'global'; optionProcessor not export member 'getWxsMixin'
|
processing
|
**Problem description**
Running the command npm run watch:web produces the following errors:
```
TS2614: Module '"../node_modules/@mpxjs/webpack-plugin/lib/runtime/optionProcessor"' has no exported member 'getWxsMixin'.
Did you mean to use 'import getWxsMixin from "../node_modules/@mpxjs/webpack-plugin/lib/runtime/optionProcessor"' instead?
```
```
TS2304: Cannot find name 'global'.
```
**Environment information**
MacOS Big Sur 11.2.3
@mpxjs/core: 2.6.59
@mpxjs/api-proxy: 2.6.59
@mpxjs/webpack-plugin: 2.6.59
**Minimal reproduction demo**
https://github.com/js5323/mpx-project-ts.git
|
1.0
|
[Bug report] Cannot find name 'global'; optionProcessor not export member 'getWxsMixin' - **Problem description**
Running the command npm run watch:web produces the following errors:
```
TS2614: Module '"../node_modules/@mpxjs/webpack-plugin/lib/runtime/optionProcessor"' has no exported member 'getWxsMixin'.
Did you mean to use 'import getWxsMixin from "../node_modules/@mpxjs/webpack-plugin/lib/runtime/optionProcessor"' instead?
```
```
TS2304: Cannot find name 'global'.
```
**Environment information**
MacOS Big Sur 11.2.3
@mpxjs/core: 2.6.59
@mpxjs/api-proxy: 2.6.59
@mpxjs/webpack-plugin: 2.6.59
**Minimal reproduction demo**
https://github.com/js5323/mpx-project-ts.git
|
process
|
cannot find name global optionprocessor not export member getwxsmixin problem description running the command npm run watch web produces the following errors module node modules mpxjs webpack plugin lib runtime optionprocessor has no exported member getwxsmixin did you mean to use import getwxsmixin from node modules mpxjs webpack plugin lib runtime optionprocessor instead cannot find name global environment information macos big sur mpxjs core mpxjs api proxy mpxjs webpack plugin minimal reproduction demo
| 1
|
10,145
| 13,044,162,531
|
IssuesEvent
|
2020-07-29 03:47:33
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `RegexpUTF8Sig` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `RegexpUTF8Sig` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `RegexpUTF8Sig` from TiDB -
## Description
Port the scalar function `RegexpUTF8Sig` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function from tidb description port the scalar function from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
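The porting task above concerns the semantics of the `REGEXP` scalar function more than any one implementation. As a reference sketch of those semantics in Python (assuming MySQL-style behaviour: 1 on match, 0 on no match, NULL if either argument is NULL; the real port is Rust in TiKV's coprocessor):
```python
# Hedged sketch of the REGEXP semantics the port must preserve; this is
# not TiKV code, just an executable description of expected behaviour.
import re

def regexp_utf8(expr, pattern):
    if expr is None or pattern is None:
        return None                      # NULL propagation
    return 1 if re.search(pattern, expr) else 0

assert regexp_utf8("abc", "b") == 1
assert regexp_utf8("abc", "z") == 0
assert regexp_utf8(None, ".") is None
print("ok")
```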
|
13,059
| 15,394,503,413
|
IssuesEvent
|
2021-03-03 17:59:34
|
opendistro-for-elasticsearch/opendistro-build
|
https://api.github.com/repos/opendistro-for-elasticsearch/opendistro-build
|
closed
|
yum repository unavailable
|
bug in process
|
**Describe the bug**
Problem retrieving files from 'Release RPM artifacts of OpenDistroForElasticsearch'.
Permission to access 'https://d3g5vo6xdbdb9a.cloudfront.net/yum/noarch/media.1/media' denied.
**To Reproduce**
Steps to reproduce the behavior:
1. Install 'curl https://d3g5vo6xdbdb9a.cloudfront.net/yum/opendistroforelasticsearch-artifacts.repo -o /etc/yum.repos.d/opendistroforelasticsearch-artifacts.repo'
2. Run 'zypper cc; zypper ref or yum clean all; yum makecache'
3. See error
**Expected behavior**
zypper, yum etc. will cache the repository metadata properly
**Configuration (please complete the following information):**
- ODFE/Kibana version [latest]
- Distribution [RPM]
- Host Machine [openSUSE 15.2, CentOS 7]
**Relevant information**
It's happening since Saturday 12:30 (24H) CET timezone.
|
1.0
|
yum repository unavailable - **Describe the bug**
Problem retrieving files from 'Release RPM artifacts of OpenDistroForElasticsearch'.
Permission to access 'https://d3g5vo6xdbdb9a.cloudfront.net/yum/noarch/media.1/media' denied.
**To Reproduce**
Steps to reproduce the behavior:
1. Install 'curl https://d3g5vo6xdbdb9a.cloudfront.net/yum/opendistroforelasticsearch-artifacts.repo -o /etc/yum.repos.d/opendistroforelasticsearch-artifacts.repo'
2. Run 'zypper cc; zypper ref or yum clean all; yum makecache'
3. See error
**Expected behavior**
zypper, yum etc. will cache the repository metadata properly
**Configuration (please complete the following information):**
- ODFE/Kibana version [latest]
- Distribution [RPM]
- Host Machine [openSUSE 15.2, CentOS 7]
**Relevant information**
It's happening since Saturday 12:30 (24H) CET timezone.
|
process
|
yum repository unavailable describe the bug problem retrieving files from release rpm artifacts of opendistroforelasticsearch permission to access denied to reproduce steps to reproduce the behavior install curl o etc yum repos d opendistroforelasticsearch artifacts repo run zypper cc zypper ref or yum clean all yum makecache see error expected behavior zypper yum etc will cache the repository metadata properly configuration please complete the following information odfe kibana version distribution host machine relevant information it s happening since saturday cet timezone
| 1
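The reproduction above boils down to the CDN returning a permission error for repo metadata. A small probe of the exact URL named in the report confirms the failure independently of yum or zypper (a sketch, not part of the original report):
```python
# Hedged sketch: fetch the metadata path from the bug report directly and
# print the HTTP status, mirroring what yum/zypper hit during makecache.
from urllib.error import HTTPError
from urllib.request import urlopen

url = "https://d3g5vo6xdbdb9a.cloudfront.net/yum/noarch/media.1/media"
try:
    with urlopen(url, timeout=10) as resp:
        print("status:", resp.status)
except HTTPError as exc:
    print("repo unreachable:", exc.code)  # 403 matches "Permission ... denied"
```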
|
2,360
| 5,166,317,493
|
IssuesEvent
|
2017-01-17 15:57:10
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
Creating multiple backends at once causes them all to fail
|
priority_minor process_wontfix type_bug
|
## Problem description
I was adding multiple backends at once and after a while only the first was installed. I investigated the issue and found the following:
In /var/log/upstart/ovs-workers.log:
```
2016-08-19 15:41:25 99500 +0200 - ovs-node1 - 2935/140020444075840 - celery/celery.worker.strategy - 296 - INFO - Received task: alba.add_cluster[5057bb1a-f247-420b-b2e6-58c7ac51aa0a]
2016-08-19 15:41:25 99500 +0200 - ovs-node1 - 2935/140020444075840 - celery/celery.pool - 297 - DEBUG - TaskPool: Apply <function _fast_trace_task at 0x7f59082da758> (args:('alba.add_cluster', '5057bb1a-f247-420b-b2e6-58c7ac51aa0a', ('b5ddfce9-3bf6-4ab8-aaad-aaa9ef431ce8',), {}, {'utc': True, u'is_eager': False, 'chord': None, u'group': None, 'args': ('b5ddfce9-3bf6-4ab8-aaad-aaa9ef431ce8',), 'retries': 0, u'delivery_info': {u'priority': None, u'redelivered': False, u'routing_key': u'generic.#', u'exchange': u'generic'}, 'expires': None, u'hostname': 'celery@ovs-node1', 'task': 'alba.add_cluster', 'callbacks': None, u'correlation_id': u'5057bb1a-f247-420b-b2e6-58c7ac51aa0a', 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {}, 'eta': None, u'reply_to': u'5da4bb9e-e471-3234-881d-b5992486b235', 'id': '5057bb1a-f247-420b-b2e6-58c7ac51aa0a', u'headers': {}}) kwargs:{})
2016-08-19 15:41:25 99600 +0200 - ovs-node1 - 2935/140020444075840 - celery/celery.worker.job - 298 - DEBUG - Task accepted: alba.add_cluster[5057bb1a-f247-420b-b2e6-58c7ac51aa0a] pid:14357
2016-08-19 15:41:25 99700 +0200 - ovs-node1 - 14357/140020444075840 - lib/scheduled tasks - 251 - INFO - Ensure single DEFAULT mode - ID 1471614085_fDM9Sbkff5 - Execution of task alba.alba_arakoon_checkup discarded
2016-08-19 15:41:26 06700 +0200 - ovs-node1 - 14357/140020444075840 - celery/celery.redirected - 253 - WARNING - 2016-08-19 15:41:26 06600 +0200 - ovs-node1 - 14357/140020444075840 - extensions/albacli - 252 - ERROR - Error: Arakoon_etcd.ProcessFailure(_)
Traceback (most recent call last):
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 103, in run
raise RuntimeError(output['error']['message'])
RuntimeError: Arakoon_etcd.ProcessFailure(_)
2016-08-19 15:41:26 06800 +0200 - ovs-node1 - 14357/140020444075840 - celery/celery.redirected - 255 - WARNING - 2016-08-19 15:41:26 06700 +0200 - ovs-node1 - 14357/140020444075840 - extensions/albacli - 254 - DEBUG - Command: /usr/bin/alba get-alba-id --attempts=5 --config=etcd://127.0.0.1:2379/ovs/arakoon/test19-abm/config --to-json
2016-08-19 15:41:26 06800 +0200 - ovs-node1 - 14357/140020444075840 - celery/celery.redirected - 257 - WARNING - 2016-08-19 15:41:26 06800 +0200 - ovs-node1 - 14357/140020444075840 - extensions/albacli - 256 - DEBUG - stderr: 2016-08-19 15:41:26 41934 +0200 - ovs-node1 - 21790/0 - alba/cli - 0 - info - ETCD: etcdctl --peers=127.0.0.1:2379 get --quorum ovs/arakoon/test19-abm/configError: 100: Key not found (/ovs/arakoon/test19-abm) [897]
2016-08-19 15:41:26 06800 +0200 - ovs-node1 - 14357/140020444075840 - celery/celery.redirected - 259 - WARNING - 2016-08-19 15:41:26 06800 +0200 - ovs-node1 - 14357/140020444075840 - extensions/albacli - 258 - DEBUG - stdout:
{"success":false,"error":{"message":"Arakoon_etcd.ProcessFailure(_)","exception_type":"unknown","exception_code":0}}
2016-08-19 15:41:26 07400 +0200 - ovs-node1 - 2935/140020444075840 - celery/celery.worker.job - 299 - ERROR - Task alba.add_cluster[5057bb1a-f247-420b-b2e6-58c7ac51aa0a] raised unexpected: RuntimeError(u'Arakoon_etcd.ProcessFailure(_)',)
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albacontroller.py", line 288, in add_cluster
alba_backend.alba_id = AlbaCLI.run(command='get-alba-id', config=config, to_json=True, attempts=5)['id']
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 103, in run
raise RuntimeError(output['error']['message'])
RuntimeError: Arakoon_etcd.ProcessFailure(_)
```
Due to this error I am left with a list of backends that are stuck in the installing status. You cannot remove these from your list. (Deleting them via Albabackend.delete() doesn't help either.)


## Additional information
### Setup
Hyperconverged setup
- Three nodes with each three disks for the back-end
### Package information
- openvstorage 2.7.2-rev.3859.42b3488-1 amd64 openvStorage
- openvstorage-backend 1.7.2-rev.675.37ca5b8-1 amd64 openvStorage Backend plugin
- openvstorage-backend-core 1.7.2-rev.675.37ca5b8-1 amd64 openvStorage Backend plugin core
- openvstorage-backend-webapps 1.7.2-rev.675.37ca5b8-1 amd64 openvStorage Backend plugin Web Applications
- openvstorage-cinder-plugin 1.2.2-rev.32.948a8c1-1 amd64 OpenvStorage Cinder plugin for OpenStack
- openvstorage-core 2.7.2-rev.3859.42b3488-1 amd64 openvStorage core
- openvstorage-hc 1.7.2-rev.675.37ca5b8-1 amd64 openvStorage Backend plugin HyperConverged
- openvstorage-health-check 2.0.0-rev.117.3212ea9-1 amd64 Open vStorage HealthCheck
- openvstorage-sdm 1.6.2-rev.330.f06c8de-1 amd64 Open vStorage Backend ASD Manager
- openvstorage-test 2.7.2-rev.980.0dbc80f-1 amd64 openvStorage autotest suite
- openvstorage-webapps 2.7.2-rev.3859.42b3488-1 amd64 openvStorage Web Applications
|
1.0
|
Creating multiple backends at once causes them all to fail - ## Problem description
I was adding multiple backends at once and after a while only the first was installed. I investigated the issue and found the following:
In /var/log/upstart/ovs-workers.log:
```
2016-08-19 15:41:25 99500 +0200 - ovs-node1 - 2935/140020444075840 - celery/celery.worker.strategy - 296 - INFO - Received task: alba.add_cluster[5057bb1a-f247-420b-b2e6-58c7ac51aa0a]
2016-08-19 15:41:25 99500 +0200 - ovs-node1 - 2935/140020444075840 - celery/celery.pool - 297 - DEBUG - TaskPool: Apply <function _fast_trace_task at 0x7f59082da758> (args:('alba.add_cluster', '5057bb1a-f247-420b-b2e6-58c7ac51aa0a', ('b5ddfce9-3bf6-4ab8-aaad-aaa9ef431ce8',), {}, {'utc': True, u'is_eager': False, 'chord': None, u'group': None, 'args': ('b5ddfce9-3bf6-4ab8-aaad-aaa9ef431ce8',), 'retries': 0, u'delivery_info': {u'priority': None, u'redelivered': False, u'routing_key': u'generic.#', u'exchange': u'generic'}, 'expires': None, u'hostname': 'celery@ovs-node1', 'task': 'alba.add_cluster', 'callbacks': None, u'correlation_id': u'5057bb1a-f247-420b-b2e6-58c7ac51aa0a', 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {}, 'eta': None, u'reply_to': u'5da4bb9e-e471-3234-881d-b5992486b235', 'id': '5057bb1a-f247-420b-b2e6-58c7ac51aa0a', u'headers': {}}) kwargs:{})
2016-08-19 15:41:25 99600 +0200 - ovs-node1 - 2935/140020444075840 - celery/celery.worker.job - 298 - DEBUG - Task accepted: alba.add_cluster[5057bb1a-f247-420b-b2e6-58c7ac51aa0a] pid:14357
2016-08-19 15:41:25 99700 +0200 - ovs-node1 - 14357/140020444075840 - lib/scheduled tasks - 251 - INFO - Ensure single DEFAULT mode - ID 1471614085_fDM9Sbkff5 - Execution of task alba.alba_arakoon_checkup discarded
2016-08-19 15:41:26 06700 +0200 - ovs-node1 - 14357/140020444075840 - celery/celery.redirected - 253 - WARNING - 2016-08-19 15:41:26 06600 +0200 - ovs-node1 - 14357/140020444075840 - extensions/albacli - 252 - ERROR - Error: Arakoon_etcd.ProcessFailure(_)
Traceback (most recent call last):
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 103, in run
raise RuntimeError(output['error']['message'])
RuntimeError: Arakoon_etcd.ProcessFailure(_)
2016-08-19 15:41:26 06800 +0200 - ovs-node1 - 14357/140020444075840 - celery/celery.redirected - 255 - WARNING - 2016-08-19 15:41:26 06700 +0200 - ovs-node1 - 14357/140020444075840 - extensions/albacli - 254 - DEBUG - Command: /usr/bin/alba get-alba-id --attempts=5 --config=etcd://127.0.0.1:2379/ovs/arakoon/test19-abm/config --to-json
2016-08-19 15:41:26 06800 +0200 - ovs-node1 - 14357/140020444075840 - celery/celery.redirected - 257 - WARNING - 2016-08-19 15:41:26 06800 +0200 - ovs-node1 - 14357/140020444075840 - extensions/albacli - 256 - DEBUG - stderr: 2016-08-19 15:41:26 41934 +0200 - ovs-node1 - 21790/0 - alba/cli - 0 - info - ETCD: etcdctl --peers=127.0.0.1:2379 get --quorum ovs/arakoon/test19-abm/configError: 100: Key not found (/ovs/arakoon/test19-abm) [897]
2016-08-19 15:41:26 06800 +0200 - ovs-node1 - 14357/140020444075840 - celery/celery.redirected - 259 - WARNING - 2016-08-19 15:41:26 06800 +0200 - ovs-node1 - 14357/140020444075840 - extensions/albacli - 258 - DEBUG - stdout:
{"success":false,"error":{"message":"Arakoon_etcd.ProcessFailure(_)","exception_type":"unknown","exception_code":0}}
2016-08-19 15:41:26 07400 +0200 - ovs-node1 - 2935/140020444075840 - celery/celery.worker.job - 299 - ERROR - Task alba.add_cluster[5057bb1a-f247-420b-b2e6-58c7ac51aa0a] raised unexpected: RuntimeError(u'Arakoon_etcd.ProcessFailure(_)',)
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albacontroller.py", line 288, in add_cluster
alba_backend.alba_id = AlbaCLI.run(command='get-alba-id', config=config, to_json=True, attempts=5)['id']
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 103, in run
raise RuntimeError(output['error']['message'])
RuntimeError: Arakoon_etcd.ProcessFailure(_)
```
Due to this error I am left with a list of backends that are stuck in the installing status. You cannot remove these from your list. (Deleting them via Albabackend.delete() doesn't help either.)


## Additional information
### Setup
Hyperconverged setup
- Three nodes with each three disks for the back-end
### Package information
- openvstorage 2.7.2-rev.3859.42b3488-1 amd64 openvStorage
- openvstorage-backend 1.7.2-rev.675.37ca5b8-1 amd64 openvStorage Backend plugin
- openvstorage-backend-core 1.7.2-rev.675.37ca5b8-1 amd64 openvStorage Backend plugin core
- openvstorage-backend-webapps 1.7.2-rev.675.37ca5b8-1 amd64 openvStorage Backend plugin Web Applications
- openvstorage-cinder-plugin 1.2.2-rev.32.948a8c1-1 amd64 OpenvStorage Cinder plugin for OpenStack
- openvstorage-core 2.7.2-rev.3859.42b3488-1 amd64 openvStorage core
- openvstorage-hc 1.7.2-rev.675.37ca5b8-1 amd64 openvStorage Backend plugin HyperConverged
- openvstorage-health-check 2.0.0-rev.117.3212ea9-1 amd64 Open vStorage HealthCheck
- openvstorage-sdm 1.6.2-rev.330.f06c8de-1 amd64 Open vStorage Backend ASD Manager
- openvstorage-test 2.7.2-rev.980.0dbc80f-1 amd64 openvStorage autotest suite
- openvstorage-webapps 2.7.2-rev.3859.42b3488-1 amd64 openvStorage Web Applications
|
process
|
creating multiple backends at once causes them all to fail problem description i was adding multiple backends at once and after a while only the first was installed i investigated the issue and found the following in var log upstart ovs workers log ovs celery celery worker strategy info received task alba add cluster ovs celery celery pool debug taskpool apply args alba add cluster aaad utc true u is eager false chord none u group none args aaad retries u delivery info u priority none u redelivered false u routing key u generic u exchange u generic expires none u hostname celery ovs task alba add cluster callbacks none u correlation id u errbacks none timelimit none none taskset none kwargs eta none u reply to u id u headers kwargs ovs celery celery worker job debug task accepted alba add cluster pid ovs lib scheduled tasks info ensure single default mode id execution of task alba alba arakoon checkup discarded ovs celery celery redirected warning ovs extensions albacli error error arakoon etcd processfailure traceback most recent call last file opt openvstorage ovs extensions plugins albacli py line in run raise runtimeerror output runtimeerror arakoon etcd processfailure ovs celery celery redirected warning ovs extensions albacli debug command usr bin alba get alba id attempts config etcd ovs arakoon abm config to json ovs celery celery redirected warning ovs extensions albacli debug stderr ovs alba cli info etcd etcdctl peers get quorum ovs arakoon abm configerror key not found ovs arakoon abm ovs celery celery redirected warning ovs extensions albacli debug stdout success false error message arakoon etcd processfailure exception type unknown exception code ovs celery celery worker job error task alba add cluster raised unexpected runtimeerror u arakoon etcd processfailure traceback most recent call last file usr lib dist packages celery app trace py line in trace task r retval fun args kwargs file usr lib dist packages celery app trace py line in protected call return self run args kwargs file opt openvstorage ovs lib albacontroller py line in add cluster alba backend alba id albacli run command get alba id config config to json true attempts file opt openvstorage ovs extensions plugins albacli py line in run raise runtimeerror output runtimeerror arakoon etcd processfailure due to this error i am left with a list of backends that are stuck on the installing status you cannot remove these from your list deleting them via albabackend delete doensn t help aswell additional information setup hyperconverged setup three nodes with each three disks for the back end package information openvstorage rev openvstorage openvstorage backend rev openvstorage backend plugin openvstorage backend core rev openvstorage backend plugin core openvstorage backend webapps rev openvstorage backend plugin web applications openvstorage cinder plugin rev openvstorage cinder plugin for openstack openvstorage core rev openvstorage core openvstorage hc rev openvstorage backend plugin hyperconverged openvstorage health check rev open vstorage healthcheck openvstorage sdm rev open vstorage backend asd manager openvstorage test rev openvstorage autotest suite openvstorage webapps rev openvstorage web applications
| 1
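The traceback above shows `alba.add_cluster` reading an etcd key (`/ovs/arakoon/test19-abm`) before it exists, which is exactly what concurrent backend creation makes likely. One mitigation shape, sketched with a dummy in-memory client rather than the project's real etcd interface, is to wait for the key before invoking the alba CLI:
```python
# Hedged sketch (not OpenvStorage code): retry until the arakoon config key
# appears in etcd, instead of failing on the first "Key not found".
import time

class DummyEtcd:
    """Stand-in for a real etcd client; stores keys in a dict."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)

def wait_for_key(client, key, attempts=5, delay=0.1):
    for _ in range(attempts):
        value = client.get(key)
        if value is not None:
            return value
        time.sleep(delay)
    raise TimeoutError(f"etcd key {key!r} never appeared")

client = DummyEtcd()
client.store["/ovs/arakoon/test19-abm/config"] = "..."
print(wait_for_key(client, "/ovs/arakoon/test19-abm/config"))
```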
|
3,240
| 9,307,506,269
|
IssuesEvent
|
2019-03-25 12:28:42
|
ietf-tapswg/api-drafts
|
https://api.github.com/repos/ietf-tapswg/api-drafts
|
closed
|
Consider per-context addresses as suggested by draft-gont-taps-address-usage-*
|
API Architecture discuss
|
The discussion around ``draft-gont-taps-address-usage-problem-statement`` and ``draft-gont-taps-address-usage-analysis``suggests that it should be possible to request per-application / per-context / per-connection source addresses.
- For the cases of per-application and per-connection source addresses, this tuning can be accomplished by the addition of a property. This includes name-resolution issues.
- For per-context source addresses, I am not sure whether our current architecture is fit to support this.
Consider the following case: a browser wants to separate different tabs onto different source addresses. It needs some way to specify some kind of *context* to allow the transport API either to assign the right address to a new connection or request a new address for all upcoming connections and name resolutions done within this context.
As this is orthogonal to multi-streaming, simply cloning an existing connection is not sufficient.
One possible solution would be to introduce some new "Taps Context" that is used when creating pre-connections, but that looks like overkill for this feature…
|
1.0
|
Consider per-context addresses as suggested by draft-gont-taps-address-usage-* - The discussion around ``draft-gont-taps-address-usage-problem-statement`` and ``draft-gont-taps-address-usage-analysis``suggests that it should be possible to request per-application / per-context / per-connection source addresses.
- For the cases of per-application and per-connection source addresses, this tuning can be accomplished by the addition of a property. This includes name-resolution issues.
- For per-context source addresses, I am not sure whether our current architecture is fit to support this.
Consider the following case: a browser wants to separate different tabs onto different source addresses. It needs some way to specify some kind of *context* to allow the transport API either to assign the right address to a new connection or request a new address for all upcoming connections and name resolutions done within this context.
As this is orthogonal to multi-streaming, simply cloning an existing connection is not sufficient.
One possible solution would be to introduce some new "Taps Context" that is used when creating pre-connections, but that looks like overkill for this feature…
|
non_process
|
consider per context addresses as suggested by draft gont taps address usage the discussion around draft gont taps address usage problem statement and draft gont taps address usage analysis suggests that it should be possible to request per application per context per connection source addresses for the cases of per application and per connection source addresses this can be tuning this can be accomplished by the addition of a property this includes name resolution issues for per context source addresses i am not sure whether our current architecture is fit to support this consider the following case a browser wants to separate different tabs onto different source addresses it needs some way to specify some kind of context to allow the transport api either to assign the right address to a new connection or request a new address for all upcoming connections and name resolutions done within this context as this is orthogonal to multi streaming simply cloning an existing connection is not sufficient one possible solution would be to introduce some new taps context that is used when creating pre connections but that looks like overkill for this feature…
| 0
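The per-context idea in the record above can be pictured as a small object from which every preconnection created under it inherits its source address. A sketch of that shape (names and addresses are hypothetical, not draft text):
```python
# Hedged sketch of a "Taps Context": each browser tab gets its own context,
# and connections created under it inherit that context's source address.
from dataclasses import dataclass

@dataclass
class TapsContext:
    source_address: str

@dataclass
class Preconnection:
    context: TapsContext
    remote: str

tab_a = TapsContext(source_address="2001:db8::a")  # hypothetical address
tab_b = TapsContext(source_address="2001:db8::b")
conn = Preconnection(context=tab_a, remote="example.com")
print(conn.context.source_address)  # connections in tab_a share this address
```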
|
15,926
| 20,142,744,383
|
IssuesEvent
|
2022-02-09 02:07:04
|
RobertCraigie/prisma-client-py
|
https://api.github.com/repos/RobertCraigie/prisma-client-py
|
closed
|
Using Prisma Model in FastAPI response annotation gives warnings
|
bug/2-confirmed kind/bug process/candidate level/beginner priority/high topic: dx
|
<!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
Using a Prisma model in a FastAPI response annotation gives warnings about the model being subclassed, even though the user cannot change this behaviour.
<!-- A clear and concise description of what the bug is. -->
## How to reproduce
Run a basic FastAPI app and return data from db using prisma, ensure to use type annotations or the response model (I used fastapi-utils InferringRouter). In the output will be something similar to:
```bash
backend-fastapi-prisma | /usr/local/lib/python3.10/site-packages/fastapi/utils.py:88: UnsupportedSubclassWarning: Subclassing models while using pseudo-recursive types may cause unexpected errors when static type checking;
backend-fastapi-prisma | You can disable this warning by generating fully recursive types:
backend-fastapi-prisma | https://prisma-client-py.readthedocs.io/en/stable/reference/config/#recursive
backend-fastapi-prisma | or if that is not possible you can pass warn_subclass=False e.g.
backend-fastapi-prisma | class Role(prisma.models.Role, warn_subclass=False):
backend-fastapi-prisma | use_type = create_model(original_type.__name__, __base__=original_type)
```
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
## Expected behavior
No Warning to be shown
<!-- A clear and concise description of what you expected to happen. -->
## Prisma information
<!-- Your Prisma schema, Prisma Client Python queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
Any schema
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: All
- Database: All
- Python version: All
- Prisma version: 0.5.0
<!--[Run `prisma py version` to see your Prisma version and paste it between the ´´´]-->
```
```
|
1.0
|
Using Prisma Model in FastAPI response annotation gives warnings - <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
Using a Prisma model in a FastAPI response annotation gives warnings about the model being subclassed, even though the user cannot change this behaviour.
<!-- A clear and concise description of what the bug is. -->
## How to reproduce
Run a basic FastAPI app and return data from db using prisma, ensure to use type annotations or the response model (I used fastapi-utils InferringRouter). In the output will be something similar to:
```bash
backend-fastapi-prisma | /usr/local/lib/python3.10/site-packages/fastapi/utils.py:88: UnsupportedSubclassWarning: Subclassing models while using pseudo-recursive types may cause unexpected errors when static type checking;
backend-fastapi-prisma | You can disable this warning by generating fully recursive types:
backend-fastapi-prisma | https://prisma-client-py.readthedocs.io/en/stable/reference/config/#recursive
backend-fastapi-prisma | or if that is not possible you can pass warn_subclass=False e.g.
backend-fastapi-prisma | class Role(prisma.models.Role, warn_subclass=False):
backend-fastapi-prisma | use_type = create_model(original_type.__name__, __base__=original_type)
```
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
## Expected behavior
No Warning to be shown
<!-- A clear and concise description of what you expected to happen. -->
## Prisma information
<!-- Your Prisma schema, Prisma Client Python queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
Any schema
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: All
- Database: All
- Python version: All
- Prisma version: 0.5.0
<!--[Run `prisma py version` to see your Prisma version and paste it between the ´´´]-->
```
```
|
process
|
using prisma model in fastapi response annotation gives warnings thanks for helping us improve prisma client python 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by enabling additional logging output see for how to enable additional logging output bug description using prisma model in fastapi response annotation gives warnings about being subclassed when user cannot change behaviour how to reproduce run a basic fastapi app and return data from db using prisma ensure to use type annotations or the response model i used fastapi utils inferringrouter in the output will be something similar to bash backend fastapi prisma usr local lib site packages fastapi utils py unsupportedsubclasswarning subclassing models while using pseudo recursive types may cause unexpected errors when static type checking backend fastapi prisma you can disable this warning by generating fully recursive types backend fastapi prisma backend fastapi prisma or if that is not possible you can pass warn subclass false e g backend fastapi prisma class role prisma models role warn subclass false backend fastapi prisma use type create model original type name base original type steps to reproduce the behavior go to change run see error expected behavior no warning to be shown prisma information your prisma schema prisma client python queries do not include your database credentials when sharing your prisma schema any schema environment setup os all database all python version all prisma version
| 1
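The warning in the record above is raised at subclass time, and the suggested escape hatch is a class keyword (`warn_subclass=False`). A generic sketch of the mechanism behind such a keyword, via `__init_subclass__` (illustrative only, not prisma's actual implementation):
```python
# Hedged sketch of how a base class can accept `warn_subclass=False` at
# class-definition time and skip its warning, as the message suggests.
import warnings

class Base:
    def __init_subclass__(cls, warn_subclass=True, **kwargs):
        super().__init_subclass__(**kwargs)
        if warn_subclass:
            warnings.warn(f"Subclassing {cls.__name__} may break static type checking")

class Noisy(Base):
    pass                                   # emits the warning

class Quiet(Base, warn_subclass=False):
    pass                                   # warning suppressed

print("defined", Noisy.__name__, "and", Quiet.__name__)
```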
|
261,890
| 27,828,514,907
|
IssuesEvent
|
2023-03-20 01:06:36
|
IncPlusPlus/betterstat-server
|
https://api.github.com/repos/IncPlusPlus/betterstat-server
|
opened
|
CVE-2022-1471 (High) detected in snakeyaml-1.26.jar
|
Mend: dependency security vulnerability
|
## CVE-2022-1471 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.26.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.26/snakeyaml-1.26.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-2.3.0.RC1.jar (Root Library)
- :x: **snakeyaml-1.26.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
SnakeYaml's Constructor() class does not restrict types which can be instantiated during deserialization. Deserializing yaml content provided by an attacker can lead to remote code execution. We recommend using SnakeYaml's SafeConstructor when parsing untrusted content to restrict deserialization.
<p>Publish Date: 2022-12-01
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-1471>CVE-2022-1471</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bitbucket.org/snakeyaml/snakeyaml/issues/561/cve-2022-1471-vulnerability-in#comment-64634374">https://bitbucket.org/snakeyaml/snakeyaml/issues/561/cve-2022-1471-vulnerability-in#comment-64634374</a></p>
<p>Release Date: 2022-12-01</p>
<p>Fix Resolution: org.yaml:snakeyaml:2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-1471 (High) detected in snakeyaml-1.26.jar - ## CVE-2022-1471 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.26.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.26/snakeyaml-1.26.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-2.3.0.RC1.jar (Root Library)
- :x: **snakeyaml-1.26.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
SnakeYaml's Constructor() class does not restrict types which can be instantiated during deserialization. Deserializing yaml content provided by an attacker can lead to remote code execution. We recommend using SnakeYaml's SafeConstructor when parsing untrusted content to restrict deserialization.
<p>Publish Date: 2022-12-01
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-1471>CVE-2022-1471</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bitbucket.org/snakeyaml/snakeyaml/issues/561/cve-2022-1471-vulnerability-in#comment-64634374">https://bitbucket.org/snakeyaml/snakeyaml/issues/561/cve-2022-1471-vulnerability-in#comment-64634374</a></p>
<p>Release Date: 2022-12-01</p>
<p>Fix Resolution: org.yaml:snakeyaml:2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in snakeyaml jar cve high severity vulnerability vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter jar root library x snakeyaml jar vulnerable library found in base branch master vulnerability details snakeyaml s constructor class does not restrict types which can be instantiated during deserialization deserializing yaml content provided by an attacker can lead to remote code execution we recommend using snakeyaml s safeconsturctor when parsing untrusted content to restrict deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with mend
| 0
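The advisory's advice above (use SafeConstructor for untrusted input) has a direct analogue in Python's PyYAML, shown here only as an analogue since the vulnerable library itself is Java:
```python
# Hedged analogue: PyYAML's safe loader restricts documents to plain data
# types, mirroring SnakeYaml's SafeConstructor advice for untrusted input.
import yaml  # PyYAML

untrusted = "a: 1\nb: [2, 3]"
print(yaml.safe_load(untrusted))  # {'a': 1, 'b': [2, 3]} — no object construction
# yaml.unsafe_load(untrusted) would be the risky counterpart that can
# instantiate arbitrary Python objects from tagged nodes.
```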
|
11,515
| 14,399,102,648
|
IssuesEvent
|
2020-12-03 10:28:10
|
decidim/decidim
|
https://api.github.com/repos/decidim/decidim
|
closed
|
Add Metadata in Process Groups
|
contract: process-groups
|
Ref: PG01-3
**Is your feature request related to a problem? Please describe.**
As a visitor I want to know extra information about the Process Group:
* how many processes there are within that group of processes: count of all the processes
* project's website: URL
* who promotes the process
* organization area
* scopes
* who it's oriented to (who participates)
* what is decided
* how it's decided
Note that most of these fields are already on Participatory Processes. They are all textareas with i18n.
**Describe the solution you'd like**
Add to the main page of the process group (/processes_groups/X) the metadata for the process group.
To be consistent we should have this in the PG sidebar as in other spaces, but we really want to try something new here.
**Describe alternatives you've considered**
1. To show it in a block (so it doesn't take up a sidebar for itself, especially on this page, which doesn't yet have the typical actions: Follow, Share, Embed, etc.)
1. To show it hidden (behind a "Read more") as this is something that probably only power users need.
**Additional context**

**Could this issue impact users' private data?**
No, it's all public data.
**Acceptance criteria**
- [x] As a visitor I can see how many published processes there are on a PG
- [x] As a visitor I can see who promotes the process
- [x] As a visitor I can see who it's oriented to
- [x] As a visitor I can see the project's website
|
1.0
|
Add Metadata in Process Groups - Ref: PG01-3
**Is your feature request related to a problem? Please describe.**
As a visitor I want to know extra information about the Process Group:
* how many processes there are within that group of processes: count of all the processes
* project's website: URL
* who promotes the process
* organization area
* scopes
* who it's oriented to (who participates)
* what is decided
* how it's decided
Note that most of these fields are already on Participatory Processes. They are all textareas with i18n.
**Describe the solution you'd like**
Add to the main page of the process group (/processes_groups/X) the metadata for the process group.
To be consistent we should have this in the PG sidebar as in other spaces, but we really want to try something new here.
**Describe alternatives you've considered**
1. To show it in a block (so it doesn't take up a sidebar for itself, especially on this page, which doesn't yet have the typical actions: Follow, Share, Embed, etc.)
1. To show it hidden (behind a "Read more") as this is something that probably only power users need.
**Additional context**

**Could this issue impact users' private data?**
No, it's all public data.
**Acceptance criteria**
- [x] As a visitor I can see how many published processes there are on a PG
- [x] As a visitor I can see who promotes the process
- [x] As a visitor I can see who it's oriented to
- [x] As a visitor I can see the project's website
|
process
|
add metadata in process groups ref is your feature request related to a problem please describe as a visitor i want to know extra information about the process group how many processes there are within that group of processes count of all the processes project s website url who promotes the process organization area scopes who it s oriented to who participates what is decided how it s decided note that most of these fields are already on participatory processes they are all textarea with describe the solution you d like add to the main page of the process group processes groups x the metadata for the process group to be consistent we should have this in the pg sidebar as other spaces but we really want to try something new here describe alternatives you ve considered to show it a block so it doesn t take a sidebar for itself especially on this page that we don t have yet the typical actions follow share embed etc to show it hidden behind a read more as this is something that probably only power users need additional context does this issue could impact on users private data no it s all public data acceptance criteria as a visitor i can see how many published processes there are on a pg as a visitor i can see who promotes the process as a visitor i can see who it s oriented to as a visitor i can see the project s website
| 1
|
2,215
| 5,051,859,582
|
IssuesEvent
|
2016-12-20 23:21:58
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
opened
|
[subtitles] [FR] Syrie : un risque de guerre mondiale
|
Language: French Process: [0] Awaiting subtitles
|
Video title
Syrie : un risque de guerre mondiale
URL
https://www.youtube.com/watch?v=NuXazR4EUnI
Youtube subtitle language
Français
Duration
4:14
URL subtitles
https://www.youtube.com/timedtext_editor?v=NuXazR4EUnI&tab=captions&lang=fr&ui=hd&action_mde_edit_form=1&ref=player&bl=vmp
|
1.0
|
[subtitles] [FR] Syrie : un risque de guerre mondiale - Video title
Syrie : un risque de guerre mondiale
URL
https://www.youtube.com/watch?v=NuXazR4EUnI
Youtube subtitle language
Français
Duration
4:14
URL subtitles
https://www.youtube.com/timedtext_editor?v=NuXazR4EUnI&tab=captions&lang=fr&ui=hd&action_mde_edit_form=1&ref=player&bl=vmp
|
process
|
syrie un risque de guerre mondiale video title syrie un risque de guerre mondiale url youtube subtitle language français duration url subtitles
| 1
|
6,961
| 10,115,767,407
|
IssuesEvent
|
2019-07-30 22:55:22
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
.NET breaks smartctl program
|
area-System.Diagnostics.Process bug tenet-compatibility
|
I can't use the `smartctl` program from a .NET Core application. On Ubuntu 16.04 Server, this happens when I run the command `smartctl -A '/dev/sda'` as root:
```
smartctl 6.5 2016-01-24 r4214 [x86_64-linux-4.4.0-148-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 100 100 016 Pre-fail Always - 0
2 Throughput_Performance 0x0005 140 140 054 Pre-fail Offline - 68
3 Spin_Up_Time 0x0007 100 100 024 Pre-fail Always - 0
4 Start_Stop_Count 0x0012 100 100 000 Old_age Always - 4
5 Reallocated_Sector_Ct 0x0033 100 100 005 Pre-fail Always - 0
7 Seek_Error_Rate 0x000b 100 100 067 Pre-fail Always - 0
8 Seek_Time_Performance 0x0005 124 124 020 Pre-fail Offline - 33
9 Power_On_Hours 0x0012 095 095 000 Old_age Always - 39604
10 Spin_Retry_Count 0x0013 100 100 060 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 4
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 15
193 Load_Cycle_Count 0x0012 100 100 000 Old_age Always - 15
194 Temperature_Celsius 0x0002 181 181 000 Old_age Always - 33 (Min/Max 25/39)
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0
197 Current_Pending_Sector 0x0022 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0008 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x000a 200 200 000 Old_age Always - 0
```
When I start the same command through .NET Core, this is the output:
```
smartctl 6.5 2016-01-24 r4214 [x86_64-linux-4.4.0-148-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
'/dev/sda': Unable to detect device type
Please specify device type with the -d option.
Use smartctl -h to get a usage summary
```
This is the code:
```c#
long? temp = null;
var psi = new ProcessStartInfo("smartctl", "-A '/dev/" + device + "'")
{
RedirectStandardOutput = true,
UseShellExecute = false
};
var p = Process.Start(psi);
while (!p.StandardOutput.EndOfStream)
{
string line = p.StandardOutput.ReadLine();
Console.WriteLine(line);
if (line.StartsWith("194 "))
{
Console.WriteLine("[Match 194]");
string[] parts = Regex.Split(line, @"\s+");
if (parts.Length >= 10)
{
temp = long.Parse(parts[9]);
Console.WriteLine("[Temp]");
}
p.StandardOutput.ReadToEnd(); // Kindly eat up the remaining output
}
}
```
.NET Core 2.2 on Linux x64. The same command works fine when calling from a PHP interpreter and when piping through `cat`. So I guess .NET Core destroys the environment in a way that breaks `smartctl`.
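A plausible cause, not confirmed in the thread: with UseShellExecute = false no shell ever runs, so the single quotes in `-A '/dev/sda'` are not stripped and smartctl receives the literal argument '/dev/sda' (note the quotes echoed back in its error message). A minimal sketch of the workaround, assuming .NET Core 2.1+ where ProcessStartInfo.ArgumentList is available:
```csharp
using System;
using System.Diagnostics;

class SmartctlDemo
{
    static void Main()
    {
        string device = "sda"; // stands in for the 'device' variable above
        var psi = new ProcessStartInfo("smartctl")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        // No shell is involved, so pass each argument separately and unquoted.
        psi.ArgumentList.Add("-A");
        psi.ArgumentList.Add("/dev/" + device);
        var p = Process.Start(psi);
        Console.WriteLine(p.StandardOutput.ReadToEnd());
        p.WaitForExit();
    }
}
```
The PHP and `cat` comparisons likely go through a shell, which strips the quotes; that would explain why they appear to work.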
|
1.0
|
.NET breaks smartctl program - I can't use the `smartctl` program from a .NET Core application. On Ubuntu 16.04 Server, this happens when I run the command `smartctl -A '/dev/sda'` as root:
```
smartctl 6.5 2016-01-24 r4214 [x86_64-linux-4.4.0-148-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 100 100 016 Pre-fail Always - 0
2 Throughput_Performance 0x0005 140 140 054 Pre-fail Offline - 68
3 Spin_Up_Time 0x0007 100 100 024 Pre-fail Always - 0
4 Start_Stop_Count 0x0012 100 100 000 Old_age Always - 4
5 Reallocated_Sector_Ct 0x0033 100 100 005 Pre-fail Always - 0
7 Seek_Error_Rate 0x000b 100 100 067 Pre-fail Always - 0
8 Seek_Time_Performance 0x0005 124 124 020 Pre-fail Offline - 33
9 Power_On_Hours 0x0012 095 095 000 Old_age Always - 39604
10 Spin_Retry_Count 0x0013 100 100 060 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 4
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 15
193 Load_Cycle_Count 0x0012 100 100 000 Old_age Always - 15
194 Temperature_Celsius 0x0002 181 181 000 Old_age Always - 33 (Min/Max 25/39)
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0
197 Current_Pending_Sector 0x0022 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0008 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x000a 200 200 000 Old_age Always - 0
```
When I start the same command through .NET Core, this is the output:
```
smartctl 6.5 2016-01-24 r4214 [x86_64-linux-4.4.0-148-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
'/dev/sda': Unable to detect device type
Please specify device type with the -d option.
Use smartctl -h to get a usage summary
```
This is the code:
```c#
long? temp = null;
var psi = new ProcessStartInfo("smartctl", "-A '/dev/" + device + "'")
{
RedirectStandardOutput = true,
UseShellExecute = false
};
var p = Process.Start(psi);
while (!p.StandardOutput.EndOfStream)
{
string line = p.StandardOutput.ReadLine();
Console.WriteLine(line);
if (line.StartsWith("194 "))
{
Console.WriteLine("[Match 194]");
string[] parts = Regex.Split(line, @"\s+");
if (parts.Length >= 10)
{
temp = long.Parse(parts[9]);
Console.WriteLine("[Temp]");
}
p.StandardOutput.ReadToEnd(); // Kindly eat up the remaining output
}
}
```
.NET Core 2.2 on Linux x64. The same command works fine when calling from a PHP interpreter and when piping through `cat`. So I guess .NET Core destroys the environment in a way that breaks `smartctl`.
|
process
|
net breaks smartctl program i can t use the smartctl program from a net core application on ubuntu server this happens when i run the command smartctl a dev sda as root smartctl local build copyright c bruce allen christian franke start of read smart data section smart attributes data structure revision number vendor specific smart attributes with thresholds id attribute name flag value worst thresh type updated when failed raw value raw read error rate pre fail always throughput performance pre fail offline spin up time pre fail always start stop count old age always reallocated sector ct pre fail always seek error rate pre fail always seek time performance pre fail offline power on hours old age always spin retry count pre fail always power cycle count old age always power off retract count old age always load cycle count old age always temperature celsius old age always min max reallocated event count old age always current pending sector old age always offline uncorrectable old age offline udma crc error count old age always when i start the same command through net core this is the output smartctl local build copyright c bruce allen christian franke dev sda unable to detect device type please specify device type with the d option use smartctl h to get a usage summary this is the code c long temp null var psi new processstartinfo smartctl a dev device redirectstandardoutput true useshellexecute false var p process start psi while p standardoutput endofstream string line p standardoutput readline console writeline line if line startswith console writeline string parts regex split line s if parts length temp long parse parts console writeline p standardoutput readtoend kindly eat up the remaining output net core on linux the same command works fine when calling from a php interpreter and when piping through cat so i guess net core destroys the environment in a way that breaks smartctl
| 1
|
149,768
| 23,526,577,758
|
IssuesEvent
|
2022-08-19 11:27:21
|
unicef/inventory-hugo-theme
|
https://api.github.com/repos/unicef/inventory-hugo-theme
|
closed
|
As a reader, I want to go to the home page on clicking on the unicef logo.
|
T: bug C: design thinking
|
### Summary
User reviews in testing:
- Clicking UNICEF logo takes you to UNICEF website; clicking on either should go to the same page (i.e. O.S. Inventory)
- UNICEF.org redirect from top logo was confusing!
### Priority
primary
### Category
front-end
### Type
functional
|
1.0
|
As a reader, I want to go to the home page on clicking on the unicef logo. - ### Summary
User reviews in testing:
- Clicking UNICEF logo takes you to UNICEF website; clicking on either should go to the same page (i.e. O.S. Inventory)
- UNICEF.org redirect from top logo was confusing!
### Priority
primary
### Category
front-end
### Type
functional
|
non_process
|
as a reader i want to go the home page on clicking on the unicef logo summary user reviews in testing clicking unicef logo takes you to unicef website clicking on either should go to the same page i e o s inventory unicef org redirect from top logo was confusing priority primary category front end type functional
| 0
|
21,912
| 10,698,645,131
|
IssuesEvent
|
2019-10-23 19:08:53
|
rugk/threema-msgapi-sdk-php
|
https://api.github.com/repos/rugk/threema-msgapi-sdk-php
|
opened
|
Important: Update default pinned public key
|
bug security
|
Because Threema is going to change their key pair and also their certificate.
The new one is already used on https://threema.ch/ and signed by [the Entrust Root CA](https://www.entrust.com/root-certificates/entrust_g2_ca.cer).
|
True
|
Important: Update default pinned public key - Because Threema is going to change their key pair and also their certificate.
The new one is already used on https://threema.ch/ and signed by [the Entrust Root CA](https://www.entrust.com/root-certificates/entrust_g2_ca.cer).
|
non_process
|
important update default pinned public key because threema is going to change their key pair and also their certificate the new one is already used on and signed by
| 0
|
5,035
| 7,852,944,681
|
IssuesEvent
|
2018-06-20 15:53:51
|
cptechinc/soft-dpluso
|
https://api.github.com/repos/cptechinc/soft-dpluso
|
opened
|
Rename Processwire configs
|
Processwire
|
In Processwire go into templates, choose a template;
then under advanced there's an area that says "rename template", where you can change the name of the template. Rename the following templates:
customer-config -> config
actions-config -> config->useractions
dplus-config -> config->dplus
interfax-config -> config->interfax
form-fields-config -> config->form-fields
sales-orders-config -> config->sales-orders
quotes-config -> config->quotes
ii-config -> config->ii
cart-config -> config->cart-config
add config->dashboard template
add config->dashboard to allowable children templates of config
add instance of config dashboard as Dashboard under /config/
Add field show_salespanel (checkbox) "Show Top 25 Customers Panel?"
Add field show_bookingspanel (checkbox) "Show bookings?"
Do this for Bellboy Liquor, Bellboy Bar Supply
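For anyone preferring to script the renames instead of clicking through the admin, a rough sketch using ProcessWire's API; the target names and the writability of the `name` property here are untested assumptions, and the `->` notation above describes the intended hierarchy rather than literal template names:
```php
<?php
// Hypothetical bootstrap; adjust the path to the site's index.php.
include './index.php';

// Illustrative mapping only; actual target names must follow
// ProcessWire's naming rules (no '->' characters).
$renames = [
    'customer-config' => 'config',
    'actions-config'  => 'config-useractions',
];

foreach ($renames as $old => $new) {
    $t = wire('templates')->get($old);
    if ($t) {
        $t->name = $new; // mirrors the Advanced tab's "rename template"
        $t->save();
    }
}
```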
|
1.0
|
Rename Processwire configs - In Processwire go into templates, choose a template;
then under advanced there's an area that says "rename template", where you can change the name of the template. Rename the following templates:
customer-config -> config
actions-config -> config->useractions
dplus-config -> config->dplus
interfax-config -> config->interfax
form-fields-config -> config->form-fields
sales-orders-config -> config->sales-orders
quotes-config -> config->quotes
ii-config -> config->ii
cart-config -> config->cart-config
add config->dashboard template
add config->dashboard to allowable children templates of config
add instance of config dashboard as Dashboard under /config/
Add field show_salespanel (checkbox) "Show Top 25 Customers Panel?"
Add field show_bookingspanel (checkbox) "Show bookings?"
Do this for Bellboy Liquor, Bellboy Bar Supply
|
process
|
rename processwire configs in processwire go into templates choose a template then under advanced there s an area that says rename template then you can change the name of the template rename the following templates customer config config actions config config useractions dplus config config dplus interfax config config interfax form fields config config form fields sales orders config config sales orders quotes config config quotes ii config config ii cart config config cart config add config dashboard template add config dashboard to allowable children templates of config add instance of config dashboard as dashboard under config add field show salespanel checkbox show top customers panel add field show bookingspanel checkbox show bookings do this for bellboy liquor bellboy bar supply
| 1
|
160,173
| 12,505,686,289
|
IssuesEvent
|
2020-06-02 11:11:51
|
aliasrobotics/RVD
|
https://api.github.com/repos/aliasrobotics/RVD
|
opened
|
Use of unsafe yaml load., /opt/ros_melodic_ws/src/actionlib/tools/library.py:132
|
bandit bug static analysis testing triage
|
```yaml
{
"id": 1,
"title": "Use of unsafe yaml load., /opt/ros_melodic_ws/src/actionlib/tools/library.py:132",
"type": "bug",
"description": "HIGH confidence of MEDIUM severity bug. Use of unsafe yaml load. Allows instantiation of arbitrary objects. Consider yaml.safe_load(). at /opt/ros_melodic_ws/src/actionlib/tools/library.py:132 See links for more info on the bug.",
"cwe": "None",
"cve": "None",
"keywords": [
"bandit",
"bug",
"static analysis",
"testing",
"triage",
"bug"
],
"system": "",
"vendor": null,
"severity": {
"rvss-score": 0,
"rvss-vector": "",
"severity-description": "",
"cvss-score": 0,
"cvss-vector": ""
},
"links": "",
"flaw": {
"phase": "testing",
"specificity": "subject-specific",
"architectural-location": "application-specific",
"application": "N/A",
"subsystem": "N/A",
"package": "N/A",
"languages": "None",
"date-detected": "2020-06-02 (11:11)",
"detected-by": "Alias Robotics",
"detected-by-method": "testing static",
"date-reported": "2020-06-02 (11:11)",
"reported-by": "Alias Robotics",
"reported-by-relationship": "automatic",
"issue": "",
"reproducibility": "always",
"trace": "/opt/ros_melodic_ws/src/actionlib/tools/library.py:132",
"reproduction": "See artifacts below (if available)",
"reproduction-image": ""
},
"exploitation": {
"description": "",
"exploitation-image": "",
"exploitation-vector": ""
},
"mitigation": {
"description": "",
"pull-request": "",
"date-mitigation": ""
}
}
```
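A minimal sketch of the fix bandit recommends; the payload below is purely illustrative of the kind of tag that unsafe `yaml.load` could act on in older PyYAML versions, while `yaml.safe_load` rejects it:
```python
import yaml

# Illustrative malicious document: a Python-specific tag that safe_load
# refuses to construct, since it only builds standard YAML types.
untrusted = "!!python/object/apply:os.system ['echo pwned']"

try:
    yaml.safe_load(untrusted)
except yaml.constructor.ConstructorError as e:
    print("rejected:", e)
```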
|
1.0
|
Use of unsafe yaml load., /opt/ros_melodic_ws/src/actionlib/tools/library.py:132 - ```yaml
{
"id": 1,
"title": "Use of unsafe yaml load., /opt/ros_melodic_ws/src/actionlib/tools/library.py:132",
"type": "bug",
"description": "HIGH confidence of MEDIUM severity bug. Use of unsafe yaml load. Allows instantiation of arbitrary objects. Consider yaml.safe_load(). at /opt/ros_melodic_ws/src/actionlib/tools/library.py:132 See links for more info on the bug.",
"cwe": "None",
"cve": "None",
"keywords": [
"bandit",
"bug",
"static analysis",
"testing",
"triage",
"bug"
],
"system": "",
"vendor": null,
"severity": {
"rvss-score": 0,
"rvss-vector": "",
"severity-description": "",
"cvss-score": 0,
"cvss-vector": ""
},
"links": "",
"flaw": {
"phase": "testing",
"specificity": "subject-specific",
"architectural-location": "application-specific",
"application": "N/A",
"subsystem": "N/A",
"package": "N/A",
"languages": "None",
"date-detected": "2020-06-02 (11:11)",
"detected-by": "Alias Robotics",
"detected-by-method": "testing static",
"date-reported": "2020-06-02 (11:11)",
"reported-by": "Alias Robotics",
"reported-by-relationship": "automatic",
"issue": "",
"reproducibility": "always",
"trace": "/opt/ros_melodic_ws/src/actionlib/tools/library.py:132",
"reproduction": "See artifacts below (if available)",
"reproduction-image": ""
},
"exploitation": {
"description": "",
"exploitation-image": "",
"exploitation-vector": ""
},
"mitigation": {
"description": "",
"pull-request": "",
"date-mitigation": ""
}
}
```
|
non_process
|
use of unsafe yaml load opt ros melodic ws src actionlib tools library py yaml id title use of unsafe yaml load opt ros melodic ws src actionlib tools library py type bug description high confidence of medium severity bug use of unsafe yaml load allows instantiation of arbitrary objects consider yaml safe load at opt ros melodic ws src actionlib tools library py see links for more info on the bug cwe none cve none keywords bandit bug static analysis testing triage bug system vendor null severity rvss score rvss vector severity description cvss score cvss vector links flaw phase testing specificity subject specific architectural location application specific application n a subsystem n a package n a languages none date detected detected by alias robotics detected by method testing static date reported reported by alias robotics reported by relationship automatic issue reproducibility always trace opt ros melodic ws src actionlib tools library py reproduction see artifacts below if available reproduction image exploitation description exploitation image exploitation vector mitigation description pull request date mitigation
| 0
|
812,645
| 30,345,913,175
|
IssuesEvent
|
2023-07-11 15:24:50
|
ubiquity/ubiquibot
|
https://api.github.com/repos/ubiquity/ubiquibot
|
closed
|
Re-Enable GitHub Action Runtime Environment
|
Time: <1 Week Priority: 2 (High) Price: 600 USD Permitted
|
We should enable support for running the bot natively off of GitHub Actions again, but this time using `npx tsx index.ts` in order to eliminate the complexities around compiling and then updating the `dist/index.js` file.
The bot should use GitHub Actions as the runtime environment for forks automatically, for testing purposes.
For production we should continue using a serverless backend (we are currently using Netlify); this is because, from what I understand, the only way to easily add the bot from the GitHub Marketplace and immediately start using it is if it uses an external backend. Otherwise, keeping all of the infrastructure on GitHub is preferred.
|
1.0
|
Re-Enable GitHub Action Runtime Environment - We should enable support for running the bot natively off of GitHub Actions again, but this time using `npx tsx index.ts` in order to eliminate the complexities around compiling and then updating the `dist/index.js` file.
The bot should use GitHub Actions as the runtime environment for forks automatically, for testing purposes.
For production we should continue using a serverless backend (we are currently using Netlify); this is because, from what I understand, the only way to easily add the bot from the GitHub Marketplace and immediately start using it is if it uses an external backend. Otherwise, keeping all of the infrastructure on GitHub is preferred.
|
non_process
|
re enable github action runtime environment we should enable support from running the bot natively off of github actions again but this time using npx tsx index ts in order to eliminate the complexities around compilation and then updating the dist index js file the bot should use github actions as the runtime environment for forks automatically for testing purposes for production we should continue using a server less backend we are currently using netlify this is because from what i understand the only way to easily add the bot from the github marketplace and immediately start using is if it is using an external backend otherwise keeping all of the infrastructure on github is preferred
| 0
|
9,800
| 12,813,257,883
|
IssuesEvent
|
2020-07-04 11:52:55
|
spine-generic/spine-generic
|
https://api.github.com/repos/spine-generic/spine-generic
|
closed
|
Look for manual labels under derivatives/ folder
|
processing
|
To follow BIDS philosophy.
Currently processing is run within the raw data folder.
|
1.0
|
Look for manual labels under derivatives/ folder - To follow BIDS philosophy.
Currently processing is run within the raw data folder.
|
process
|
look for manual labels under derivatives folder to follow bids philosophy currently processing is run within the raw data folder
| 1
|
388,724
| 11,491,598,337
|
IssuesEvent
|
2020-02-11 19:18:31
|
prysmaticlabs/prysm
|
https://api.github.com/repos/prysmaticlabs/prysm
|
closed
|
AttestationPool grpc method results are not paginated
|
API Enhancement Priority: Medium
|
The AttestationPool grpc method results are not paginated, and as the attestation pool is huge now, it's causing some issues to fetch it. Even the http api seems to return an empty response: https://api.prylabs.net/#/BeaconChain/AttestationPool
Also useful would be a call just to return the "in pool" counter, but if you paginate the call, then you can just get the counter from the first response...
thanks
|
1.0
|
AttestationPool grpc method results are not paginated - The AttestationPool grpc method results are not paginated, and as the attestation pool is huge now, it's causing some issues to fetch it. Even the http api seems to return an empty response: https://api.prylabs.net/#/BeaconChain/AttestationPool
Also useful would be a call just to return the "in pool" counter, but if you paginate the call, then you can just get the counter from the first response...
thanks
|
non_process
|
attestationpool grpc method results are not paginated the attestationpool grpc method results are not paginated and as the attestation pool is huge now it s causing some issues to fetch it even the http api seems to return an empty reponse also useful would be a call just to return the in pool counter but if you paginate the call then you can just get the counter from the first response thanks
| 0
|
17,592
| 6,478,372,706
|
IssuesEvent
|
2017-08-18 07:41:29
|
JabRef/jabref
|
https://api.github.com/repos/JabRef/jabref
|
opened
|
Self-made deb and rpm packages
|
build-system help-wanted linux
|
This tracks the state of jabref-issued `deb` and `rpm` packages.
We are experimenting using https://github.com/nebula-plugins/gradle-ospackage-plugin on the branch https://github.com/JabRef/jabref/tree/deb-and-rpm.
Current state: `jar` is packed into the `deb`. No startup scripts, no desktop integration.
This issue becomes obsolete in case JabRef is fully integrated with all features in Debian - see https://github.com/koppor/jabref/issues/135.
|
1.0
|
Self-made deb and rpm packages - This tracks the state of jabref-issued `deb` and `rpm` packages.
We are experimenting using https://github.com/nebula-plugins/gradle-ospackage-plugin on the branch https://github.com/JabRef/jabref/tree/deb-and-rpm.
Current state: `jar` is packed into the `deb`. No startup scripts, no desktop integration.
This issue becomes obsolete in case JabRef is fully integrated with all features in Debian - see https://github.com/koppor/jabref/issues/135.
|
non_process
|
self made deb and rpm packages this tracks the state of jabref issued deb and rpm packages we are experimenting using on the branch current state jar is packed into the deb no startup scripts no desktop integration this issue becomes obsolete in case jabref is fully integrated with all features in debian see
| 0
|
231,725
| 17,753,671,297
|
IssuesEvent
|
2021-08-28 10:02:11
|
lourkeur/miniguest
|
https://api.github.com/repos/lourkeur/miniguest
|
closed
|
short description
|
documentation help wanted
|
What's the best way to describe miniguest in less than ten words? Here are the taglines I've already used:
1. Low-footprint NixOS images
2. guest NixOS images with minimal footprint
3. lightweight and declarative guest operating systems profiles
Personally, I would go for a mix between 2 and 3. What do you think?
|
1.0
|
short description - What's the best way to describe miniguest in less than ten words? Here are the taglines I've already used:
1. Low-footprint NixOS images
2. guest NixOS images with minimal footprint
3. lightweight and declarative guest operating systems profiles
Personally, I would go for a mix between 2 and 3. What do you think?
|
non_process
|
short description what s the best way to describe miniguest in less than ten words here are the taglines i ve already used low footprint nixos images guest nixos images with minimal footprint lightweight and declarative guest operating systems profiles personally i would go for a mix between and what do you think
| 0
|
18,526
| 24,552,108,639
|
IssuesEvent
|
2022-10-12 13:22:08
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Android][iOS] Inconsistent study order between Android and iOS for study list
|
Bug P2 iOS Android Process: Fixed Process: Tested QA Process: Tested dev
|
Steps:-
1. Login into the mobile apps (Gateway)
2. Navigate to Study list
3. Compare the study order in the list between Android and iOS
A/R:- Inconsistency between Android and iOS platforms for study order
E/R:- Consistency should be maintained between the platforms for study order
|
3.0
|
[Android][iOS] Inconsistent study order between Android and iOS for study list - Steps:-
1. Login into the mobile apps (Gateway)
2. Navigate to Study list
3. Compare the study order in the list between Android and iOS
A/R:- Inconsistency between Android and iOS platforms for study order
E/R:- Consistency should be maintained between the platforms for study order
|
process
|
inconsistent study order between android and ios for study list steps login into the mobile apps gateway navigate to study list compare the study order in the list between android and ios a r inconsistency between android and ios platforms for study order e r consistency should be maintained between the platforms for study order
| 1
|
140,198
| 12,889,150,549
|
IssuesEvent
|
2020-07-13 14:07:01
|
maxcleme/beadcolors
|
https://api.github.com/repos/maxcleme/beadcolors
|
closed
|
readme has rgb_a when it should probably be rgb_r
|
documentation
|

Typically the "A" value when talking about rgba is the Alpha channel for transparency. I believe these should all say rbg_r, meaning the red channel
|
1.0
|
readme has rgb_a when it should probably be rgb_r - 
Typically the "A" value when talking about rgba is the Alpha channel for transparency. I believe these should all say rbg_r, meaning the red channel
|
non_process
|
readme has rgb a when it should probably be rgb r typically the a value when talking about rgba is the alpha channel for transparency i believe these should all say rbg r meaning the red channel
| 0
|
43,479
| 5,637,987,334
|
IssuesEvent
|
2017-04-06 10:38:02
|
owncloud/core
|
https://api.github.com/repos/owncloud/core
|
closed
|
[Accessibility] Missing label in Device Name textbox
|
bug comp:authentication design sev4-low
|
### Steps to reproduce
1. Go to Personal Page menu, and scroll down to Devices section
### Expected behaviour
Device Name textbox should have a label
### Actual behaviour
Device Name textbox is not labeled
### Server configuration
**Operating system**:
Ubuntu 14.04
**Web server:**
Apache
**Database:**
MySQL
**PHP version:**
5.5.9
**ownCloud version:"9.1.0.3","versionstring":"9.1.0 pre alpha","edition":"Enterprise"
**Updated from an older ownCloud or fresh install:**
Fresh
**Are you using external storage, if yes which one:** local/smb/sftp/...
No
**Are you using encryption:**
No
**Logs**
```
```
### Client configuration
**browser**
Firefox

@owncloud/designers @ChristophWurst
|
1.0
|
[Accessibility] Missing label in Device Name textbox - ### Steps to reproduce
1. Go to Personal Page menu, and scroll down to Devices section
### Expected behaviour
Device Name textbox should have a label
### Actual behaviour
Device Name textbox is not labeled
### Server configuration
**Operating system**:
Ubuntu 14.04
**Web server:**
Apache
**Database:**
MySQL
**PHP version:**
5.5.9
**ownCloud version:"9.1.0.3","versionstring":"9.1.0 pre alpha","edition":"Enterprise"
**Updated from an older ownCloud or fresh install:**
Fresh
**Are you using external storage, if yes which one:** local/smb/sftp/...
No
**Are you using encryption:**
No
**Logs**
```
```
### Client configuration
**browser**
Firefox

@owncloud/designers @ChristophWurst
|
non_process
|
missing label in device name textbox steps to reproduce go to personal page menu and scroll down to devices section expected behaviour device name textbox should have a label actual behaviour device name textbox is not labered server configuration operating system ubuntu web server apache database mysql php version owncloud version versionstring pre alpha edition enterprise updated from an older owncloud or fresh install fresh are you using external storage if yes which one local smb sftp no are you using encryption no logs client configuration browser firefox owncloud designers christophwurst
| 0
|
103,895
| 16,612,457,136
|
IssuesEvent
|
2021-06-02 13:10:55
|
joshnewton31080/JavaVulnerableLab
|
https://api.github.com/repos/joshnewton31080/JavaVulnerableLab
|
opened
|
CVE-2019-14900 (Medium) detected in hibernate-core-4.0.1.Final.jar
|
security vulnerability
|
## CVE-2019-14900 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hibernate-core-4.0.1.Final.jar</b></p></summary>
<p>A module of the Hibernate Core project</p>
<p>Library home page: <a href="http://hibernate.org">http://hibernate.org</a></p>
<p>Path to dependency file: JavaVulnerableLab/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/org/hibernate/hibernate-core/4.0.1.Final/hibernate-core-4.0.1.Final.jar,JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/hibernate-core-4.0.1.Final.jar</p>
<p>
Dependency Hierarchy:
- :x: **hibernate-core-4.0.1.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/joshnewton31080/JavaVulnerableLab/commit/8a5defe68446887a5bc449463ebd25cd3134edc1">8a5defe68446887a5bc449463ebd25cd3134edc1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in Hibernate ORM in versions before 5.3.18, 5.4.18 and 5.5.0.Beta1. A SQL injection in the implementation of the JPA Criteria API can permit unsanitized literals when a literal is used in the SELECT or GROUP BY parts of the query. This flaw could allow an attacker to access unauthorized information or possibly conduct further attacks.
<p>Publish Date: 2020-07-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14900>CVE-2019-14900</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14900">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14900</a></p>
<p>Release Date: 2020-07-06</p>
<p>Fix Resolution: org.hibernate:hibernate-core:5.4.18.Final</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.hibernate","packageName":"hibernate-core","packageVersion":"4.0.1.Final","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.hibernate:hibernate-core:4.0.1.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.hibernate:hibernate-core:5.4.18.Final"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-14900","vulnerabilityDetails":"A flaw was found in Hibernate ORM in versions before 5.3.18, 5.4.18 and 5.5.0.Beta1. A SQL injection in the implementation of the JPA Criteria API can permit unsanitized literals when a literal is used in the SELECT or GROUP BY parts of the query. This flaw could allow an attacker to access unauthorized information or possibly conduct further attacks.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14900","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
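Until the upgrade lands, a general mitigation sketch using the standard JPA Criteria API: bind user input as a parameter instead of embedding it as a literal (the `User` entity and `name` field are hypothetical):
```java
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.ParameterExpression;
import javax.persistence.criteria.Root;

class UserQueries {
    // Binding via a parameter keeps the value out of the generated SQL text,
    // unlike cb.literal(...), which this CVE shows can pass through unsanitized.
    static List<User> findByName(EntityManager em, String untrustedName) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<User> q = cb.createQuery(User.class);
        Root<User> root = q.from(User.class);
        ParameterExpression<String> p = cb.parameter(String.class);
        q.select(root).where(cb.equal(root.get("name"), p));
        return em.createQuery(q).setParameter(p, untrustedName).getResultList();
    }
}
```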
|
True
|
CVE-2019-14900 (Medium) detected in hibernate-core-4.0.1.Final.jar - ## CVE-2019-14900 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hibernate-core-4.0.1.Final.jar</b></p></summary>
<p>A module of the Hibernate Core project</p>
<p>Library home page: <a href="http://hibernate.org">http://hibernate.org</a></p>
<p>Path to dependency file: JavaVulnerableLab/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/org/hibernate/hibernate-core/4.0.1.Final/hibernate-core-4.0.1.Final.jar,JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/hibernate-core-4.0.1.Final.jar</p>
<p>
Dependency Hierarchy:
- :x: **hibernate-core-4.0.1.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/joshnewton31080/JavaVulnerableLab/commit/8a5defe68446887a5bc449463ebd25cd3134edc1">8a5defe68446887a5bc449463ebd25cd3134edc1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in Hibernate ORM in versions before 5.3.18, 5.4.18 and 5.5.0.Beta1. A SQL injection in the implementation of the JPA Criteria API can permit unsanitized literals when a literal is used in the SELECT or GROUP BY parts of the query. This flaw could allow an attacker to access unauthorized information or possibly conduct further attacks.
<p>Publish Date: 2020-07-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14900>CVE-2019-14900</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14900">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14900</a></p>
<p>Release Date: 2020-07-06</p>
<p>Fix Resolution: org.hibernate:hibernate-core:5.4.18.Final</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.hibernate","packageName":"hibernate-core","packageVersion":"4.0.1.Final","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.hibernate:hibernate-core:4.0.1.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.hibernate:hibernate-core:5.4.18.Final"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-14900","vulnerabilityDetails":"A flaw was found in Hibernate ORM in versions before 5.3.18, 5.4.18 and 5.5.0.Beta1. A SQL injection in the implementation of the JPA Criteria API can permit unsanitized literals when a literal is used in the SELECT or GROUP BY parts of the query. This flaw could allow an attacker to access unauthorized information or possibly conduct further attacks.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14900","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in hibernate core final jar cve medium severity vulnerability vulnerable library hibernate core final jar a module of the hibernate core project library home page a href path to dependency file javavulnerablelab pom xml path to vulnerable library canner repository org hibernate hibernate core final hibernate core final jar javavulnerablelab target javavulnerablelab web inf lib hibernate core final jar dependency hierarchy x hibernate core final jar vulnerable library found in head commit a href found in base branch master vulnerability details a flaw was found in hibernate orm in versions before and a sql injection in the implementation of the jpa criteria api can permit unsanitized literals when a literal is used in the select or group by parts of the query this flaw could allow an attacker to access unauthorized information or possibly conduct further attacks publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org hibernate hibernate core final rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org hibernate hibernate core final isminimumfixversionavailable true minimumfixversion org hibernate hibernate core final basebranches vulnerabilityidentifier cve vulnerabilitydetails a flaw was found in hibernate orm in versions before and a sql injection in the implementation of the jpa criteria api can permit unsanitized literals when a literal is used in the select or group by parts of the query this flaw could allow an attacker to access unauthorized information or possibly conduct further attacks vulnerabilityurl
| 0
|
6,229
| 9,172,090,351
|
IssuesEvent
|
2019-03-04 05:27:34
|
flutterchina/dio
|
https://api.github.com/repos/flutterchina/dio
|
closed
|
The method parameter in BaseOptions has no effect
|
processing
|
Version v2.0.14
```dart
BaseOptions options = new BaseOptions(
  baseUrl: "https://github.com",
  method: "GET",
);
Dio dio = new Dio(options);
Response<String> r =
    await dio.request<String>("/flutterchina/dio/issues/196");
```
An Observatory debugger and profiler on MI 6 is available at: http://127.0.0.1:50775/
For a more detailed help message, press "h". To detach, press "d"; to quit, press "q".
I/zygote64(10723): Do partial code cache collection, code=29KB, data=20KB
I/zygote64(10723): After code cache collection, code=29KB, data=20KB
I/zygote64(10723): Increasing code cache capacity to 128KB
E/flutter (10723): [ERROR:flutter/lib/ui/ui_dart_state.cc(148)] Unhandled Exception: NoSuchMethodError: The getter 'method' was called on null.
E/flutter (10723): Receiver: null
E/flutter (10723): Tried calling: method
E/flutter (10723): #0 Object.noSuchMethod (dart:core/runtime/libobject_patch.dart:50:5)
E/flutter (10723): #1 Dio._mergeOptions (package:dio/src/dio.dart:895:19)
E/flutter (10723): #2 Dio._request (package:dio/src/dio.dart:681:9)
E/flutter (10723): <asynchronous suspension>
E/flutter (10723): #3 Dio.request (package:dio/src/dio.dart:613:12)
E/flutter (10723): <asynchronous suspension>
E/flutter (10723): #4 HttpUtils.name (package:demo/utils/http_utils.dart:13:19)
E/flutter (10723): <asynchronous suspension>
E/flutter (10723): #5 _HomeMainPageState._getData (package:ffxapp/ui/pages/home/home_main_page.dart:25:37)
E/flutter (10723): <asynchronous suspension>
E/flutter (10723): #6 _InkResponseState._handleTap (package:flutter/src/material/ink_well.dart:513:14)
E/flutter (10723): #7 _InkResponseState.build.<anonymous closure> (package:flutter/src/material/ink_well.dart:568:30)
E/flutter (10723): #8 GestureRecognizer.invokeCallback (package:flutter/src/gestures/recognizer.dart:120:24)
E/flutter (10723): #9 TapGestureRecognizer._checkUp (package:flutter/src/gestures/tap.dart:242:9)
E/flutter (10723): #10 TapGestureRecognizer.acceptGesture (package:flutter/src/gestures/tap.dart:204:7)
E/flutter (10723): #11 GestureArenaManager.sweep (package:flutter/src/gestures/arena.dart:156:27)
E/flutter (10723): #12 _WidgetsFlutterBinding&BindingBase&GestureBinding.handleEvent (package:flutter/src/gestures/binding.dart:218:20)
E/flutter (10723): #13 _WidgetsFlutterBinding&BindingBase&GestureBinding.dispatchEvent (package:flutter/src/gestures/binding.dart:192:22)
E/flutter (10723): #14 _WidgetsFlutterBinding&BindingBase&GestureBinding._handlePointerEvent (package:flutter/src/gestures/binding.dart:149:7)
E/flutter (10723): #15 _WidgetsFlutterBinding&BindingBase&GestureBinding._flushPointerEventQueue (package:flutter/src/gestures/binding.dart:101:7)
E/flutter (10723): #16 _WidgetsFlutterBinding&BindingBase&GestureBinding._handlePointerDataPacket (package:flutter/src/gestures/binding.dart:85:7)
E/flutter (10723): #17 _rootRunUnary (dart:async/zone.dart:1136:13)
E/flutter (10723): #18 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
E/flutter (10723): #19 _CustomZone.runUnaryGuarded (dart:async/zone.dart:931:7)
E/flutter (10723): #20 _invoke1 (dart:ui/hooks.dart:223:10)
E/flutter (10723): #21 _dispatchPointerDataPacket (dart:ui/hooks.dart:144:5)
E/flutter (10723):
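A workaround sketch until the merge bug is fixed, assuming the dio 2.x API: supply the method per request through `Options`, which avoids the null `method` lookup in `_mergeOptions`:
```dart
import 'package:dio/dio.dart';

void main() async {
  final dio = Dio(BaseOptions(baseUrl: "https://github.com"));
  // Passing Options here sidesteps merging the method from BaseOptions.
  final r = await dio.request<String>(
    "/flutterchina/dio/issues/196",
    options: Options(method: "GET"),
  );
  print(r.statusCode);
}
```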
|
1.0
|
The method parameter in BaseOptions has no effect - Version v2.0.14
```dart
BaseOptions options = new BaseOptions(
  baseUrl: "https://github.com",
  method: "GET",
);
Dio dio = new Dio(options);
Response<String> r =
    await dio.request<String>("/flutterchina/dio/issues/196");
```
An Observatory debugger and profiler on MI 6 is available at: http://127.0.0.1:50775/
For a more detailed help message, press "h". To detach, press "d"; to quit, press "q".
I/zygote64(10723): Do partial code cache collection, code=29KB, data=20KB
I/zygote64(10723): After code cache collection, code=29KB, data=20KB
I/zygote64(10723): Increasing code cache capacity to 128KB
E/flutter (10723): [ERROR:flutter/lib/ui/ui_dart_state.cc(148)] Unhandled Exception: NoSuchMethodError: The getter 'method' was called on null.
E/flutter (10723): Receiver: null
E/flutter (10723): Tried calling: method
E/flutter (10723): #0 Object.noSuchMethod (dart:core/runtime/libobject_patch.dart:50:5)
E/flutter (10723): #1 Dio._mergeOptions (package:dio/src/dio.dart:895:19)
E/flutter (10723): #2 Dio._request (package:dio/src/dio.dart:681:9)
E/flutter (10723): <asynchronous suspension>
E/flutter (10723): #3 Dio.request (package:dio/src/dio.dart:613:12)
E/flutter (10723): <asynchronous suspension>
E/flutter (10723): #4 HttpUtils.name (package:demo/utils/http_utils.dart:13:19)
E/flutter (10723): <asynchronous suspension>
E/flutter (10723): #5 _HomeMainPageState._getData (package:ffxapp/ui/pages/home/home_main_page.dart:25:37)
E/flutter (10723): <asynchronous suspension>
E/flutter (10723): #6 _InkResponseState._handleTap (package:flutter/src/material/ink_well.dart:513:14)
E/flutter (10723): #7 _InkResponseState.build.<anonymous closure> (package:flutter/src/material/ink_well.dart:568:30)
E/flutter (10723): #8 GestureRecognizer.invokeCallback (package:flutter/src/gestures/recognizer.dart:120:24)
E/flutter (10723): #9 TapGestureRecognizer._checkUp (package:flutter/src/gestures/tap.dart:242:9)
E/flutter (10723): #10 TapGestureRecognizer.acceptGesture (package:flutter/src/gestures/tap.dart:204:7)
E/flutter (10723): #11 GestureArenaManager.sweep (package:flutter/src/gestures/arena.dart:156:27)
E/flutter (10723): #12 _WidgetsFlutterBinding&BindingBase&GestureBinding.handleEvent (package:flutter/src/gestures/binding.dart:218:20)
E/flutter (10723): #13 _WidgetsFlutterBinding&BindingBase&GestureBinding.dispatchEvent (package:flutter/src/gestures/binding.dart:192:22)
E/flutter (10723): #14 _WidgetsFlutterBinding&BindingBase&GestureBinding._handlePointerEvent (package:flutter/src/gestures/binding.dart:149:7)
E/flutter (10723): #15 _WidgetsFlutterBinding&BindingBase&GestureBinding._flushPointerEventQueue (package:flutter/src/gestures/binding.dart:101:7)
E/flutter (10723): #16 _WidgetsFlutterBinding&BindingBase&GestureBinding._handlePointerDataPacket (package:flutter/src/gestures/binding.dart:85:7)
E/flutter (10723): #17 _rootRunUnary (dart:async/zone.dart:1136:13)
E/flutter (10723): #18 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
E/flutter (10723): #19 _CustomZone.runUnaryGuarded (dart:async/zone.dart:931:7)
E/flutter (10723): #20 _invoke1 (dart:ui/hooks.dart:223:10)
E/flutter (10723): #21 _dispatchPointerDataPacket (dart:ui/hooks.dart:144:5)
E/flutter (10723):
|
process
|
baseoptions 中 method参数无效 版本 baseoptions options new baseoptions baseurl method get dio dio new dio options response r await dio request flutterchina dio issues an observatory debugger and profiler on mi is available at for a more detailed help message press h to detach press d to quit press q i do partial code cache collection code data i after code cache collection code data i increasing code cache capacity to e flutter unhandled exception nosuchmethoderror the getter method was called on null e flutter receiver null e flutter tried calling method e flutter object nosuchmethod dart core runtime libobject patch dart e flutter dio mergeoptions package dio src dio dart e flutter dio request package dio src dio dart e flutter e flutter dio request package dio src dio dart e flutter e flutter httputils name package demo utils http utils dart e flutter e flutter homemainpagestate getdata package ffxapp ui pages home home main page dart e flutter e flutter inkresponsestate handletap package flutter src material ink well dart e flutter inkresponsestate build package flutter src material ink well dart e flutter gesturerecognizer invokecallback package flutter src gestures recognizer dart e flutter tapgesturerecognizer checkup package flutter src gestures tap dart e flutter tapgesturerecognizer acceptgesture package flutter src gestures tap dart e flutter gesturearenamanager sweep package flutter src gestures arena dart e flutter widgetsflutterbinding bindingbase gesturebinding handleevent package flutter src gestures binding dart e flutter widgetsflutterbinding bindingbase gesturebinding dispatchevent package flutter src gestures binding dart e flutter widgetsflutterbinding bindingbase gesturebinding handlepointerevent package flutter src gestures binding dart e flutter widgetsflutterbinding bindingbase gesturebinding flushpointereventqueue package flutter src gestures binding dart e flutter widgetsflutterbinding bindingbase gesturebinding handlepointerdatapacket package flutter src gestures binding dart e flutter rootrununary dart async zone dart e flutter customzone rununary dart async zone dart e flutter customzone rununaryguarded dart async zone dart e flutter dart ui hooks dart e flutter dispatchpointerdatapacket dart ui hooks dart e flutter
| 1
|
19,376
| 25,506,023,458
|
IssuesEvent
|
2022-11-28 09:30:28
|
pycaret/pycaret
|
https://api.github.com/repos/pycaret/pycaret
|
closed
|
[BUG]: Order of pipeline in Time Series
|
bug time_series preprocessing
|
### pycaret version checks
- [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues).
- [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret.
- [X] I have confirmed this bug exists on the master branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@master).
### Issue Description
What should be the the order of feature engineering, feature transformation and feature scaling in time series forecasting?
In time series, I have (1) imputation, (2) transformation, (3) scaling, (4) feature engineering (4 is new - I just added yesterday). But that seems sort of incorrect because feature engineered variables will not be scaled. Is there a rationale to the order?
https://github.com/pycaret/pycaret/blob/127c771252bc0472ff73f42965c6f5ce81b1209d/pycaret/time_series/forecasting/oop.py#L1024-#L1052
### Reproducible Example
```python
See code snippet above
```
### Expected Behavior
Should it not be (1) imputation, (2) feature engineering, (3) transformation, (4) scaling?
### Actual Results
```python-traceback
See above
```
### Installed Versions
<details>
Version on Github master as of 27th Nov 2022
</details>
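A minimal illustration of the ordering concern, sketched with a scikit-learn-style pipeline rather than pycaret's internals (the squared-column step is a stand-in for real feature engineering): when engineering runs before transform/scale, the engineered columns are transformed and scaled like every other column.
```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, PowerTransformer, StandardScaler

# Hypothetical feature engineering: append each column's square.
engineer = FunctionTransformer(lambda X: np.hstack([X, X ** 2]))

pipe = Pipeline([
    ("impute", SimpleImputer()),       # (1) fill gaps first
    ("engineer", engineer),            # (2) derive new columns
    ("transform", PowerTransformer()), # (3) then transform everything
    ("scale", StandardScaler()),       # (4) and scale everything
])

X = np.array([[1.0], [2.0], [np.nan], [4.0]])
print(pipe.fit_transform(X))
```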
|
1.0
|
[BUG]: Order of pipeline in Time Series - ### pycaret version checks
- [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues).
- [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret.
- [X] I have confirmed this bug exists on the master branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@master).
### Issue Description
What should be the order of feature engineering, feature transformation and feature scaling in time series forecasting?
In time series, I have (1) imputation, (2) transformation, (3) scaling, (4) feature engineering (4 is new - I just added yesterday). But that seems sort of incorrect because feature engineered variables will not be scaled. Is there a rationale to the order?
https://github.com/pycaret/pycaret/blob/127c771252bc0472ff73f42965c6f5ce81b1209d/pycaret/time_series/forecasting/oop.py#L1024-#L1052
### Reproducible Example
```python
See code snippet above
```
### Expected Behavior
Should it not be (1) imputation, (2) feature engineering, (3) transformation, (4) scaling?
### Actual Results
```python-traceback
See above
```
### Installed Versions
<details>
Version on Github master as of 27th Nov 2022
</details>
|
process
|
order of pipeline in time series pycaret version checks i have checked that this issue has not already been reported i have confirmed this bug exists on the of pycaret i have confirmed this bug exists on the master branch of pycaret pip install u git issue description what should be the the order of feature engineering feature transformation and feature scaling in time series forecasting in time series i have imputation transformation scaling feature engineering is new i just added yesterday but that seems sort of incorrect because feature engineered variables will not be scaled is there a rationale to the order reproducible example python see code snippet above expected behavior should it not be imputation feature engineering transformation scaling actual results python traceback see above installed versions version on github master as of nov
| 1
|
12,327
| 2,691,987,002
|
IssuesEvent
|
2015-04-01 02:22:08
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
Hash::maxDimensions() is never 1
|
Defect utility
|
I'm currently using 2.6.3 and noticed that Hash::dimensions() and Hash::maxDimensions() deliver different results for simple arrays. If the array has multiple dimensions, everything is fine. But in a case where the contents of an array can differ from empty to several dimensions, the behavior of Hash::maxDimensions() is somehow unexpected:
```
// example 1: empty array
$array = array();
pr(Hash::dimensions($array)); // -> 0
pr(Hash::maxDimensions($array)); // -> Warning (2): max(): Array must contain at least one element [CORE/Cake/Utility/Hash.php, line 769]
// example 2: only 1 dimension
$array = array('a', 'b', 'c');
pr(Hash::dimensions($array)); // -> 1
pr(Hash::maxDimensions($array)); // -> 2
// example 3: multiple dimensions
$array = array('a' => array('x'), 'b', 'c');
pr(Hash::dimensions($array)); // -> 2
pr(Hash::maxDimensions($array)); // -> 2
```
So in examples 1 and 2 you can see, what I'm talking about. I modified the core method from
```
public static function maxDimensions(array $data) {
$depth = array();
if (is_array($data) && reset($data) !== false) {
foreach ($data as $value) {
$depth[] = self::dimensions((array)$value) + 1;
}
}
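// NB: max() warns when $depth is empty, and for a flat array each
// scalar becomes (array)$value, so dimensions + 1 evaluates to 2.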
return max($depth);
}
```
to
```
public static function maxDimensions(array $data) {
$depth = array();
if (is_array($data) && reset($data) !== false) {
foreach ($data as $value) {
$depth[] = self::dimensions((array)$value) + ((is_array($value)) ? 1 : 0); // modified
}
}
return (!empty($depth)) ? max($depth) : 0; // modified
}
```
to achieve the same results from Hash::dimensions() and Hash::maxDimensions(), but I guess it could be fixed more cleanly.
|
1.0
|
Hash::maxDimensions() is never 1 - I'm currently using 2.6.3 and noticed that Hash::dimensions() and Hash::maxDimensions() deliver different results for simple arrays. If the array has multiple dimensions, everything is fine. But in a case where the contents of an array can differ from empty to several dimensions, the behavior of Hash::maxDimensions() is somewhat unexpected:
```
// example 1: empty array
$array = array();
pr(Hash::dimensions($array)); // -> 0
pr(Hash::maxDimensions($array)); // -> Warning (2): max(): Array must contain at least one element [CORE/Cake/Utility/Hash.php, line 769]
// example 2: only 1 dimension
$array = array('a', 'b', 'c');
pr(Hash::dimensions($array)); // -> 1
pr(Hash::maxDimensions($array)); // -> 2
// example 3: multiple dimensions
$array = array('a' => array('x'), 'b', 'c');
pr(Hash::dimensions($array)); // -> 2
pr(Hash::maxDimensions($array)); // -> 2
```
So in examples 1 and 2 you can see what I'm talking about. I modified the core method from
```
public static function maxDimensions(array $data) {
$depth = array();
if (is_array($data) && reset($data) !== false) {
foreach ($data as $value) {
$depth[] = self::dimensions((array)$value) + 1;
}
}
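// NB: max() warns when $depth is empty, and for a flat array each
// scalar becomes (array)$value, so dimensions + 1 evaluates to 2.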
return max($depth);
}
```
to
```
public static function maxDimensions(array $data) {
$depth = array();
if (is_array($data) && reset($data) !== false) {
foreach ($data as $value) {
$depth[] = self::dimensions((array)$value) + ((is_array($value)) ? 1 : 0); // modified
}
}
return (!empty($depth)) ? max($depth) : 0; // modified
}
```
to achieve the same results from Hash::dimensions() and Hash::maxDimensions(), but I guess it could be fixed more cleanly.
|
non_process
|
hash maxdimensions is never i m currently using and noticed that hash dimensions and hash maxdimensions deliver different results for simple arrays if the array has multiple dimensions everything is fine but in a case where the contents of an array can differ from empty to several dimensions the behavior of hash maxdimensions is somehow unexpected example empty array array array pr hash dimensions array pr hash maxdimensions array warning max array must contain at least one element example only dimension array array a b c pr hash dimensions array pr hash maxdimensions array example multiple dimensions array array a array x b c pr hash dimensions array pr hash maxdimensions array so in examples and you can see what i m talking about i modified the core method from public static function maxdimensions array data depth array if is array data reset data false foreach data as value depth self dimensions array value return max depth to public static function maxdimensions array data depth array if is array data reset data false foreach data as value depth self dimensions array value is array value modified return empty depth max depth modified to achieve the same results from hash dimensions and hash maxdimensions but i guess it could be fixed somehow cleaner
| 0
|
10,918
| 13,691,626,277
|
IssuesEvent
|
2020-09-30 15:47:08
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
Create API that allows retrieving rule failures
|
p1 story team:data processing
|
### Description
Create API that allows retrieving rule failures
### Acceptance Criteria
- Panther backend offers an API that allows retrieving the error that occurred while running rules over events.
|
1.0
|
Create API that allows retrieving rule failures - ### Description
Create API that allows retrieving rule failures
### Acceptance Criteria
- Panther backend offers an API that allows retrieving the error that occurred while running rules over events.
|
process
|
create api that allows retrieving rule failures description create api that allows retrieving rule failures acceptance criteria panther backend offers an api that allows retrieving the error that occurred while running rules over events
| 1
|
14,593
| 17,703,547,187
|
IssuesEvent
|
2021-08-25 03:15:16
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
New term - genericName
|
Term - add Class - Taxon normative Process - complete
|
## New Term Recommendation
Submitter: Markus Döring
Justification: In order to accurately represent the genus part of a parsed scientific name a new term is needed as dwc:genus is (for good reasons) defined to be the accepted genus, see discussion in https://code.google.com/p/darwincore/issues/detail?id=151
Proponents: GBIF, Catalogue of Life
Definition: The genus part of the scientificName without authorship.
Comment: For synonyms the accepted genus and the genus part of the name may be different. The term genericName should be used together with specificEpithet to form a binomial and with infraspecificEpithet to form a trinomial. The term genericName should only be used for combinations. Uninomials of generic rank do not have a genericName.
Examples: `Felis` (for scientificName "Felis concolor", with accompanying values of "Puma concolor" in acceptedNameUsage and "Puma" in genus).
Refines: None
Replaces: None
ABCD 2.06: https://abcd.tdwg.org/terms/#genusOrMonomial (ABCD 3.0)
Original comment:
Was https://code.google.com/p/darwincore/issues/detail?id=227
==New Term Recommendation==
Submitter: Markus Döring
Justification: In order to accurately represent the genus part of a parsed scientific name a new term is needed as dwc:genus is (for good reasons) defined to be the accepted genus, see discussion in https://code.google.com/p/darwincore/issues/detail?id=151
Definition: The genus part of the scientificName without authorship
Comment: For synonyms the accepted genus and the genus part of the name are different. For example for "Felis concolor" dwc:genus is Puma while dwc:genericName is Felis.
Refines:
Has Domain:
Has Range:
Replaces:
ABCD 2.06:
Feb 14 2014 Comment #1 wixner
This proposed new term is already in use by GBIF and the Catalog of Life (i4Life Darwin Core Archive Profile)
Mar 27, 2014 comment #3 chuck.miller@mobot.org
Why would this term not be called genericEpithet, like all the other name parsed terms - specificEpithet, infraspecificEpithet, cultivarEpithet? In this context, it is just another epithet in the name. Why not be consistent? It is the "genus part of the name" but calling it "genericName" allows other interpretations? Epithet is what we have been using to refer to "part of a name".
Mar 27, 2014 comment #4 wixner
Could do, Chuck. My understanding of epithet though is a word that is "attached" to some existing thing. A refinement if you like. And the genus is the main part which the epithet refines, therefore I did not think genericEpithet is applicable. But this might simply be me not being a native English speaker. Wikipedia seems to support that view though: http://en.wikipedia.org/wiki/Epithet
In the TDWG ontology it is called "genusPart": http://rs.tdwg.org/ontology/voc/TaxonName.rdf#genusPart
In TCS simply Genus
Jul 25, 2014 comment #5 morris.bob
Speaking as a non-biologist, I'd really like to see both biological and informatics arguments about the point raised in #1, #3 and #4.
On one hand, #1 shows there are important use cases. On the other hand, the consistency advocated in #3 seems appealing, but I have no opinion on the linguistics discussion in #3 and #4, especially as use of DwC in general may find one or the other arguably better.
In general, I believe that a use of unratified terminology in a particular case---here the i4Life profile and perhaps others(?)---should be viewed with suspicion if it does not generalize to other cases that the community needs to support. Alas, I have no way to judge if that is so here.
|
1.0
|
New term - genericName - ## New Term Recommendation
Submitter: Markus Döring
Justification: In order to accurately represent the genus part of a parsed scientific name a new term is needed as dwc:genus is (for good reasons) defined to be the accepted genus, see discussion in https://code.google.com/p/darwincore/issues/detail?id=151
Proponents: GBIF, Catalogue of Life
Definition: The genus part of the scientificName without authorship.
Comment: For synonyms the accepted genus and the genus part of the name may be different. The term genericName should be used together with specificEpithet to form a binomial and with infraspecificEpithet to form a trinomial. The term genericName should only be used for combinations. Uninomials of generic rank do not have a genericName.
Examples: `Felis` (for scientificName "Felis concolor", with accompanying values of "Puma concolor" in acceptedNameUsage and "Puma" in genus).
Refines: None
Replaces: None
ABCD 2.06: https://abcd.tdwg.org/terms/#genusOrMonomial (ABCD 3.0)
Original comment:
Was https://code.google.com/p/darwincore/issues/detail?id=227
==New Term Recommendation==
Submitter: Markus Döring
Justification: In order to accurately represent the genus part of a parsed scientific name a new term is needed as dwc:genus is (for good reasons) defined to be the accepted genus, see discussion in https://code.google.com/p/darwincore/issues/detail?id=151
Definition: The genus part of the scientificName without authorship
Comment: For synonyms the accepted genus and the genus part of the name are different. For example for "Felis concolor" dwc:genus is Puma while dwc:genericName is Felis.
Refines:
Has Domain:
Has Range:
Replaces:
ABCD 2.06:
Feb 14 2014 Comment #1 wixner
This proposed new term is already in use by GBIF and the Catalog of Life (i4Life Darwin Core Archive Profile)
Mar 27, 2014 comment #3 chuck.miller@mobot.org
Why would this term not be called genericEpithet, like all the other name parsed terms - specificEpithet, infraspecificEpithet, cultivarEpithet? In this context, it is just another epithet in the name. Why not be consistent? It is the "genus part of the name" but calling it "genericName" allows other interpretations? Epithet is what we have been using to refer to "part of a name".
Mar 27, 2014 comment #4 wixner
Could do, Chuck. My understanding of epithet though is a word that is "attached" to some existing thing. A refinement if you like. And the genus is the main part which the epithet refines, therefore I did not think genericEpithet is applicable. But this might simply be me not being a native English speaker. Wikipedia seems to support that view though: http://en.wikipedia.org/wiki/Epithet
In the TDWG ontology it is called "genusPart": http://rs.tdwg.org/ontology/voc/TaxonName.rdf#genusPart
In TCS simply Genus
Jul 25, 2014 comment #5 morris.bob
Speaking as a non-biologist, I'd really like to see both biological and informatics arguments about the point raised in #1, #3 and #4.
On one hand, #1 shows there are important use cases. On the other hand, the consistency advocated in #3 seems appealing, but I have no opinion on the linguistics discussion in #3 and #4, especially as use of DwC in general may find one or the other arguably better.
In general, I believe that a use of unratified terminology in a particular case---here the i4Life profile and perhaps others(?)---should be viewed with suspicion if it does not generalize to other cases that the community needs to support. Alas, I have no way to judge if that is so here.
|
process
|
new term genericname new term recommendation submitter markus döring justification in order to accurately represent the genus part of a parsed scientific name a new term is needed as dwc genus is for good reasons defined to be the accepted genus see discussion in proponents gbif catalogue of life definition the genus part of the scientificname without authorship comment for synonyms the accepted genus and the genus part of the name may be different the term genericname should be used together with specificepithet to form a binomial and with infraspecificepithet to form a trinomial the term genericname should only be used for combinations uninomials of generic rank do not have a genericname examples felis for scientificname felis concolor with accompanying values of puma concolor in acceptednameusage and puma in genus refines none replaces none abcd abcd original comment was new term recommendation submitter markus döring justification in order to accurately represent the genus part of a parsed scientific name a new term is needed as dwc genus is for good reasons defined to be the accepted genus see discussion in definition the genus part of the scientificname without authorship comment for synonyms the accepted genus and the genus part of the name are different for example for felis concolor dwc genus is puma while dwc genericname is felis refines has domain has range replaces abcd feb comment wixner this proposed new term is already in use by gbif and the catalog of life darwin core archive profile mar comment chuck miller mobot org why would this term not be called genericepithet like all the other name parsed terms specificepithet infraspecificepithet cultivarepithet in this context it is just another epithet in the name why not be consistent it is the genus part of the name but calling it genericname allows other interpetations epithet is what we have been using to refer to part of a name mar comment wixner could do chuck my understanding of epithet though is a word that is attached to some existing thing a refinement if you like and the genus is the main part which the epithet refines therefore i did not think genericepithet is applicable but this might simply be me not being a native english speaker wikipedia seems to support that view though in the tdwg ontology it is called genuspart in tcs simply genus jul comment morris bob speaking as a non biologist i d really like to see both biological and informatics arguments about the point raised in and on one hand shows there are important use cases on the other hand the consistency advocated in seems appealing but i have no opinion on the linguistics discussion in and especially as use of dwc in general may find one or the other of arguably better in general i believe that a use of unratified terminology in a particular case here the profile and perhaps others should be viewed with suspicion if it does not generalize to other cases that the community needs to support alas i have no way to judge if that is so here
| 1
|
17,199
| 22,774,906,195
|
IssuesEvent
|
2022-07-08 13:37:08
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
prisma Update not working properly on MongoDB types
|
bug/1-unconfirmed kind/bug process/candidate topic: broken query team/client topic: mongodb topic: composite-types topic: embedded documents
|
### Bug description
The update method does not work properly on Schema "types". You can fill in the data, but you cannot update it!
### How to reproduce
1. Create a MongoDB schema containing a composite ("type") object
2. Fill it with dummy data
3. Try to update the data
4. See the error
### Expected behavior
The boolean should update from false to true.
### Prisma information
```SCHEMA
model User {
id String @id @default(auto()) @map("_id") @db.ObjectId
uid String @unique
createdAt DateTime @default(now())
email String? @unique
emailVerified Boolean @default(false)
password String
totp Totp?
@@unique([email, profile.username, profile.hash])
}
type Totp {
token String
enabled Boolean
}
```
```SHELL
await prisma.user.update({
where: {
uid: 'ee9a45e7ada233527b'
},
data: {
totp: {
+ token: String,
+ enabled: Boolean
}
}
})
Argument token for data.totp.token is missing.
Argument enabled for data.totp.enabled is missing.
Note: Lines with + are required
What I receive in return
await prisma.user.update({
where: {
uid: 'ee9a45e7ada233527b'
},
data: {
totp: {
enabled: 'true'
~~~~~~~
}
}
})
**Unknown arg `enabled`** in data.totp.enabled for type TotpNullableUpdateEnvelopeInput. Did you mean `unset`? Available args:
type TotpNullableUpdateEnvelopeInput {
set?: TotpCreateInput | Null
upsert?: TotpUpsertInput
unset?: Boolean
}
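// For reference, a sketch of what this envelope seems to expect
// (assuming TotpUpsertInput exposes `set`/`update` as in Prisma's
// composite-type docs; the field values below are placeholders):
await prisma.user.update({
  where: { uid: 'ee9a45e7ada233527b' },
  data: {
    totp: {
      upsert: {
        set: { token: 'placeholder', enabled: true }, // applied when totp is null
        update: { enabled: true },                    // applied when totp exists
      },
    },
  },
})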
```
### Environment & setup
- OS: Windows 11
- Database: MongoDB
- Node.js version: v16.15.1
### Prisma Version
```
prisma : 4.0.0
@prisma/client : 4.0.0
Current platform : windows
Query Engine (Node-API) : libquery-engine da41d2bb3406da22087b849f0e911199ba4fbf11 (at node_modules\@prisma\engines\query_engine-windows.dll.node)
Migration Engine : migration-engine-cli da41d2bb3406da22087b849f0e911199ba4fbf11 (at node_modules\@prisma\engines\migration-engine-windows.exe)
Introspection Engine : introspection-core da41d2bb3406da22087b849f0e911199ba4fbf11 (at node_modules\@prisma\engines\introspection-engine-windows.exe)
Format Binary : prisma-fmt da41d2bb3406da22087b849f0e911199ba4fbf11 (at node_modules\@prisma\engines\prisma-fmt-windows.exe)
Default Engines Hash : da41d2bb3406da22087b849f0e911199ba4fbf11
Studio : 0.465.0
```
|
1.0
|
prisma Update not working properly on MongoDB types - ### Bug description
The update method does not work properly on Schema "types". You can fill in the data, but you cannot update it!
### How to reproduce
1. Create a MongoDB schema containing a composite ("type") object
2. Fill it with dummy data
3. Try to update the data
4. See the error
### Expected behavior
The boolean should update from false to true.
### Prisma information
```SCHEMA
model User {
id String @id @default(auto()) @map("_id") @db.ObjectId
uid String @unique
createdAt DateTime @default(now())
email String? @unique
emailVerified Boolean @default(false)
password String
totp Totp?
@@unique([email, profile.username, profile.hash])
}
type Totp {
token String
enabled Boolean
}
```
```SHELL
await prisma.user.update({
where: {
uid: 'ee9a45e7ada233527b'
},
data: {
totp: {
+ token: String,
+ enabled: Boolean
}
}
})
Argument token for data.totp.token is missing.
Argument enabled for data.totp.enabled is missing.
Note: Lines with + are required
What I receive in return
await prisma.user.update({
where: {
uid: 'ee9a45e7ada233527b'
},
data: {
totp: {
enabled: 'true'
~~~~~~~
}
}
})
**Unknown arg `enabled`** in data.totp.enabled for type TotpNullableUpdateEnvelopeInput. Did you mean `unset`? Available args:
type TotpNullableUpdateEnvelopeInput {
set?: TotpCreateInput | Null
upsert?: TotpUpsertInput
unset?: Boolean
}
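// For reference, a sketch of what this envelope seems to expect
// (assuming TotpUpsertInput exposes `set`/`update` as in Prisma's
// composite-type docs; the field values below are placeholders):
await prisma.user.update({
  where: { uid: 'ee9a45e7ada233527b' },
  data: {
    totp: {
      upsert: {
        set: { token: 'placeholder', enabled: true }, // applied when totp is null
        update: { enabled: true },                    // applied when totp exists
      },
    },
  },
})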
```
### Environment & setup
- OS: Windows 11
- Database: MongoDB
- Node.js version: v16.15.1
### Prisma Version
```
prisma : 4.0.0
@prisma/client : 4.0.0
Current platform : windows
Query Engine (Node-API) : libquery-engine da41d2bb3406da22087b849f0e911199ba4fbf11 (at node_modules\@prisma\engines\query_engine-windows.dll.node)
Migration Engine : migration-engine-cli da41d2bb3406da22087b849f0e911199ba4fbf11 (at node_modules\@prisma\engines\migration-engine-windows.exe)
Introspection Engine : introspection-core da41d2bb3406da22087b849f0e911199ba4fbf11 (at node_modules\@prisma\engines\introspection-engine-windows.exe)
Format Binary : prisma-fmt da41d2bb3406da22087b849f0e911199ba4fbf11 (at node_modules\@prisma\engines\prisma-fmt-windows.exe)
Default Engines Hash : da41d2bb3406da22087b849f0e911199ba4fbf11
Studio : 0.465.0
```
|
process
|
prisma update not working properly on mongodb types bug description the update method does not work properly on schema types you can fill in the data but you cannot update it how to reproduce create a mongo db schema containing type object fill it with dummy data try to update the data see error expected behavior the boolean to update from false to true prisma information schema model user id string id default auto map id db objectid uid string unique createdat datetime default now email string unique emailverified boolean default false password string totp totp unique type totp token string enabled boolean shell await prisma user update where uid data totp token string enabled boolean argument token for data totp token is missing argument enabled for data totp enabled is missing note lines with are required what i receive in return await prisma user update where uid data totp enabled true unknown arg enabled in data totp enabled for type totpnullableupdateenvelopeinput did you mean unset available args type totpnullableupdateenvelopeinput set totpcreateinput null upsert totpupsertinput unset boolean environment setup os windows database mongodb node js version prisma version prisma prisma client current platform windows query engine node api libquery engine at node modules prisma engines query engine windows dll node migration engine migration engine cli at node modules prisma engines migration engine windows exe introspection engine introspection core at node modules prisma engines introspection engine windows exe format binary prisma fmt at node modules prisma engines prisma fmt windows exe default engines hash studio
| 1
|
17,136
| 22,674,482,728
|
IssuesEvent
|
2022-07-04 01:56:55
|
Carlosmtp/DomuzSGI
|
https://api.github.com/repos/Carlosmtp/DomuzSGI
|
opened
|
Add columns to the process indicators
|
Enhancement High Process Management Reports Management
|
- [ ] Create a goal column in the process indicators
- [ ] Send the perodic_reports in the function that queries the processes
- [ ] Create a function to update the table that is going to be created
- [ ]
|
1.0
|
Add columns to the process indicators - - [ ] Create a goal column in the process indicators
- [ ] Send the perodic_reports in the function that queries the processes
- [ ] Create a function to update the table that is going to be created
- [ ]
|
process
|
añadir columnas en los indicadores de procesos crear columna goal en indicadores de procesos enviar los perodic reports en la funcion para consultar los procesos crear funcion para actualizar la tabla que se va a crear
| 1
|
6,406
| 9,487,476,730
|
IssuesEvent
|
2019-04-22 16:58:12
|
18F/federal-grant-reporting
|
https://api.github.com/repos/18F/federal-grant-reporting
|
opened
|
Create an onboarding checklist
|
backlog process
|
## User story
As a person working on this project, I want to have an onboarding checklist for new teammates so that I can ensure they have access to everything they need.
## Acceptance criteria
- [ ] An onboarding checklist exists.
- [ ] It's readily findable from the README and/or other onboarding docs.
- [ ] The onboarding checklist includes adding teammates as contributors to this repo.
## Outstanding questions
@bpdesigns, did you and @mheadd or @tram already start such a checklist?
|
1.0
|
Create an onboarding checklist - ## User story
As a person working on this project, I want to have an onboarding checklist for new teammates so that I can ensure they have access to everything they need.
## Acceptance criteria
- [ ] An onboarding checklist exists.
- [ ] It's readily findable from the README and/or other onboarding docs.
- [ ] The onboarding checklist includes adding teammates as contributors to this repo.
## Outstanding questions
@bpdesigns, did you and @mheadd or @tram already start such a checklist?
|
process
|
create an onboarding checklist user story as a person working on this project i want to have an onboarding checklist for new teammates so that i can ensure they have access to everything they need acceptance criteria an onboarding checklist exists it s readily findable from the readme and or other onboarding docs the onboarding checklist includes adding teammates as contributors to this repo outstanding questions bpdesigns did you and mheadd or tram already start such a checklist
| 1
|
5,591
| 8,444,508,425
|
IssuesEvent
|
2018-10-18 18:38:48
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
Remove deprecated Python flags
|
P3 team-Rules-Python type: process
|
Specifically, --python2_path and --python3_path in BazelPythonConfiguration. I think this can bypass the incompatible change mechanism since it's a 2/3 issue and py3 support is experimental.
|
1.0
|
Remove deprecated Python flags - Specifically, --python2_path and --python3_path in BazelPythonConfiguration. I think this can bypass the incompatible change mechanism since it's a 2/3 issue and py3 support is experimental.
|
process
|
remove deprecated python flags specifically path and path in bazelpythonconfiguration i think this can bypass the incompatible change mechanism since it s a issue and support is experimental
| 1
|
8,481
| 11,643,543,609
|
IssuesEvent
|
2020-02-29 14:12:26
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Process.MainWindowHandle is not correctly refreshed
|
area-System.Diagnostics.Process untriaged
|
I have come across an [issue](https://github.com/FlaUI/FlaUI/issues/298) with a third-party UI test library which I have finally tracked down to their internal use of `Process.MainWindowHandle`. Consider the following example:
```csharp
bool TestProcess()
{
var process = Process.Start("notepad");
try
{
for (var attempt = 0; attempt < 20; ++attempt)
{
process?.Refresh();
if (process?.MainWindowHandle != IntPtr.Zero)
{
return true;
}
Thread.Sleep(100);
}
return false;
}
finally
{
process?.Kill();
}
}
```
This function will return `true` in .NET Framework after a few refreshes, but `false` in .NET Core.
The reason: once the getter of `MainWindowHandle` has been called, it is never evaluated again. See [Process.Win32.cs](https://github.com/dotnet/runtime/blob/d872a664488f773b98376365da93c2f3425e15bd/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Win32.cs#L237):
```csharp
public IntPtr MainWindowHandle
{
get
{
if (!_haveMainWindow)
{
EnsureState(State.IsLocal | State.HaveId);
_mainWindowHandle = ProcessManager.GetMainWindowHandle(_processId);
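// Note: this flag is set even when GetMainWindowHandle returned
// IntPtr.Zero, so the zero handle is cached and never re-queried.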
_haveMainWindow = true;
}
return _mainWindowHandle;
}
}
```
There is no other `_haveMainWindow` assignment anywhere else that I can see. Therefore, if the handle was not ready yet (i.e. `_mainWindowHandle` gets assigned `IntPtr.Zero`), the class would assume it was ready anyway, set `_haveMainWindow` to true, and never re-evaluate this.
Due to this, I would propose two changes:
1. Change the line with the true assignment to: `_haveMainWindow = _mainWindowHandle != IntPtr.Zero;`
2. In [Process.Refresh()](https://github.com/dotnet/runtime/blob/d872a664488f773b98376365da93c2f3425e15bd/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.cs#L1117), add `_haveMainWindow = false;` (it seems to me this information should also be discarded together with the rest)
If you guys are happy with these changes, I would like to take care of making the pull request. I can also include a test demonstrating the issue as it stands, which would be very similar to the example I put above. I have already done these changes locally and can confirm it solves my issue without breaking any other test. Just be aware that it is unavoidable to launch a GUI to test this (I think). Microsoft-hosted agents in Azure Pipelines should not have any issues dealing with this, but perhaps you guys have other concerns.
|
1.0
|
Process.MainWindowHandle is not correctly refreshed - I have come across an [issue](https://github.com/FlaUI/FlaUI/issues/298) with a third-party UI test library which I have finally tracked down to their internal use of `Process.MainWindowHandle`. Consider the following example:
```csharp
bool TestProcess()
{
var process = Process.Start("notepad");
try
{
for (var attempt = 0; attempt < 20; ++attempt)
{
process?.Refresh();
if (process?.MainWindowHandle != IntPtr.Zero)
{
return true;
}
Thread.Sleep(100);
}
return false;
}
finally
{
process?.Kill();
}
}
```
This function will return `true` in .NET Framework after a few refreshes, but `false` in .NET Core.
The reason: once the getter of `MainWindowHandle` has been called, it is never evaluated again. See [Process.Win32.cs](https://github.com/dotnet/runtime/blob/d872a664488f773b98376365da93c2f3425e15bd/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Win32.cs#L237):
```csharp
public IntPtr MainWindowHandle
{
get
{
if (!_haveMainWindow)
{
EnsureState(State.IsLocal | State.HaveId);
_mainWindowHandle = ProcessManager.GetMainWindowHandle(_processId);
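// Note: this flag is set even when GetMainWindowHandle returned
// IntPtr.Zero, so the zero handle is cached and never re-queried.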
_haveMainWindow = true;
}
return _mainWindowHandle;
}
}
```
There is no other `_haveMainWindow` assignment anywhere else that I can see. Therefore, if the handle was not ready yet (i.e. `_mainWindowHandle` gets assigned `IntPtr.Zero`), the class would assume it was ready anyway, set `_haveMainWindow` to true, and never re-evaluate this.
Due to this, I would propose two changes:
1. Change the line with the true assignment to: `_haveMainWindow = _mainWindowHandle != IntPtr.Zero;`
2. In [Process.Refresh()](https://github.com/dotnet/runtime/blob/d872a664488f773b98376365da93c2f3425e15bd/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.cs#L1117), add `_haveMainWindow = false;` (it seems to me this information should also be discarded together with the rest)
If you guys are happy with these changes, I would like to take care of making the pull request. I can also include a test demonstrating the issue as it stands, which would be very similar to the example I put above. I have already done these changes locally and can confirm it solves my issue without breaking any other test. Just be aware that it is unavoidable to launch a GUI to test this (I think). Microsoft-hosted agents in Azure Pipelines should not have any issues dealing with this, but perhaps you guys have other concerns.
|
process
|
process mainwindowhandle is not correctly refreshed i have come across an with a third party ui test library which i have finally tracked down to their internal use of process mainwindowhandle consider the following example csharp bool testprocess var process process start notepad try for var attempt attempt attempt process refresh if process mainwindowhandle intptr zero return true thread sleep return false finally process kill this function will return true in net framework after a few refreshes but false in net core the reason is once the getter of mainwindowhandle is called once it will never be evaluated again see csharp public intptr mainwindowhandle get if havemainwindow ensurestate state islocal state haveid mainwindowhandle processmanager getmainwindowhandle processid havemainwindow true return mainwindowhandle there is no other havemainwindow assignment anywhere else that i can see therefore if the handle was not ready yet i e mainwindowhandle gets assigned intptr zero the class would assume it was ready anyway set havemainwindow to true and never re evaluate this due to this i would propose two changes change the line with the true assignment to havemainwindow mainwindowhandle intptr zero in add havemainwindow false it seems to me this information should also be discarded together with the rest if you guys are happy with these changes i would like to take care of making the pull request i can also include a test demonstrating the issue as it stands which would be very similar to the example i put above i have already done these changes locally and can confirm it solves my issue without breaking any other test just be aware that it is unavoidable to launch a gui to test this i think microsoft hosted agents in azure pipelines should not have any issues dealing with this but perhaps you guys have other concerns
| 1
|
17,547
| 23,358,013,501
|
IssuesEvent
|
2022-08-10 09:09:45
|
ArneBinder/pie-utils
|
https://api.github.com/repos/ArneBinder/pie-utils
|
opened
|
collect and show distribution of text lengths (num tokens)
|
document processor
|
Add a document processor that tokenizes the text (e.g. with a Hugging Face tokenizer), collects the lengths of the documents in terms of number of tokens, and displays that information in a way that is easy to digest.
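A minimal sketch of what such a processor could look like, assuming a Hugging Face `transformers` tokenizer and plain-text inputs (this is an illustration, not existing pie-utils API):
```python
# Sketch only: tokenize texts, bucket their token counts, and print a
# simple ASCII histogram. Tokenizer choice and bucket size are assumptions.
from collections import Counter

from transformers import AutoTokenizer


def show_token_length_distribution(texts, model_name="bert-base-uncased", bucket=50):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    lengths = [len(tokenizer(text)["input_ids"]) for text in texts]
    histogram = Counter((length // bucket) * bucket for length in lengths)
    for lower in sorted(histogram):
        count = histogram[lower]
        print(f"{lower:>5}-{lower + bucket - 1:<5} {'#' * count} ({count})")
    print(f"n={len(lengths)} min={min(lengths)} max={max(lengths)} "
          f"mean={sum(lengths) / len(lengths):.1f}")
```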
|
1.0
|
collect and show distribution of text lengths (num tokens) - Add a document processor that tokenizes the text (e.g. with a Hugging Face tokenizer), collects the lengths of the documents in terms of number of tokens, and displays that information in a way that is easy to digest.
|
process
|
collect and show distribution of text lengths num tokens add a document processor that tokenizes the text e g with a huggingface tokenizer collects the lengths of the documents in means of number of tokens and displays that information in a way that it is easy to digest
| 1
|
637
| 3,092,148,171
|
IssuesEvent
|
2015-08-26 16:21:57
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
opened
|
Uzhhorod - "Preparation and certification of copies of decisions, extracts from them, preparation of photocopies of materials"
|
in process of creating
|
Description
https://drive.google.com/open?id=0B5RcylWxCQTwRnBvbDlTekMtanM
Sample application: https://docs.google.com/document/d/15jhM1dVLig9OFZm2MPnj2Mzcct_a-08Lb8QhZGo0tZw/edit#
|
1.0
|
Ужгород - "Підготовка та завіряння копій рішень, витягів з них, підготовка ксерокопій матеріалів" - Опис
https://drive.google.com/open?id=0B5RcylWxCQTwRnBvbDlTekMtanM
Sample application: https://docs.google.com/document/d/15jhM1dVLig9OFZm2MPnj2Mzcct_a-08Lb8QhZGo0tZw/edit#
|
process
|
ужгород підготовка та завіряння копій рішень витягів з них підготовка ксерокопій матеріалів опис зразок заяви
| 1
|
106,081
| 13,244,213,601
|
IssuesEvent
|
2020-08-19 12:43:46
|
Open-Systems-Pharmacology/OSPSuite.Core
|
https://api.github.com/repos/Open-Systems-Pharmacology/OSPSuite.Core
|
closed
|
Observed data format: 025_TOrganCompSpecies_CEGeo
|
Importer-Redesign
|
Time [min] | Organ | Compartment | Species | Concentration [mg/ml] | Error
-- | -- | -- | -- | -- | --
1 | Brain | Plasma | Human | 0,1 |
2 | Brain | Plasma | Human | 12 | 3
3 | Brain | Plasma | Human | 2 | 1
10 | Brain | Plasma | Human | 1 | 0,5
20 | Brain | Plasma | Human | 0,01 |
1 | Liver | Plasma | Human | 0,2 | 0,1
2 | Liver | Plasma | Human | 8 | 2
3 | Liver | Plasma | Human | 2 | 1
10 | Liver | Plasma | Human | 0,5 | 0,4
20 | Liver | Plasma | Human | 0,05 |
[025_TOrganCompSpecies_CEGeo.xlsx](https://github.com/Open-Systems-Pharmacology/OSPSuite.Core/files/3712144/025_TOrganCompSpecies_CEGeo.xlsx)
|
1.0
|
Observed data format: 025_TOrganCompSpecies_CEGeo -
Time [min] | Organ | Compartment | Species | Concentration [mg/ml] | Error
-- | -- | -- | -- | -- | --
1 | Brain | Plasma | Human | 0,1 |
2 | Brain | Plasma | Human | 12 | 3
3 | Brain | Plasma | Human | 2 | 1
10 | Brain | Plasma | Human | 1 | 0,5
20 | Brain | Plasma | Human | 0,01 |
1 | Liver | Plasma | Human | 0,2 | 0,1
2 | Liver | Plasma | Human | 8 | 2
3 | Liver | Plasma | Human | 2 | 1
10 | Liver | Plasma | Human | 0,5 | 0,4
20 | Liver | Plasma | Human | 0,05 |
[025_TOrganCompSpecies_CEGeo.xlsx](https://github.com/Open-Systems-Pharmacology/OSPSuite.Core/files/3712144/025_TOrganCompSpecies_CEGeo.xlsx)
|
non_process
|
observed data format torgancompspecies cegeo time organ compartment species concentration error brain plasma human brain plasma human brain plasma human brain plasma human brain plasma human liver plasma human liver plasma human liver plasma human liver plasma human liver plasma human
| 0
|
18,963
| 24,925,250,899
|
IssuesEvent
|
2022-10-31 06:37:14
|
ctapobep/blog
|
https://api.github.com/repos/ctapobep/blog
|
opened
|
Scrum is not what you think
|
dev processes
|
This page addresses some common misconceptions of people who think they know what Scrum is. In reality, a lot of what people think of as Scrum is based on the 1st version of the Scrum Guide, which was drastically changed in 2011. And it kept changing in the following years.
*Disclaimer:* these days there’s no reason to use Scrum as there are more effective and agile methodologies/techniques available: Theory of Constraints, Lean’s Just-in-time, Continuous Delivery. But if you do use Scrum, at least you should understand what it is.
# It’s Scrum, not SCRUM
SCRUM isn’t an abbreviation - it’s a real word. Remember those football or rugby players, standing in a circle and viciously deciding on the plan in the next part of the game? Well, that’s Scrum.
And the name tells us the most important underlying idea of Scrum methodology/framework - it’s all about teamwork. All the activities within Scrum are supposed to be conducted with the whole team present, everyone should actively participate in the discussions.
# Scrum is not agile
The success of Scrum tightly relates to the success of Agile. But this is due to the excellent marketing campaign of Scrum.
Agile Manifesto was written in 2001 during a small gathering of software engineers, and the founders of Scrum (Ken Schwaber and Jeff Sutherland) were [part of that group](https://agilemanifesto.org/authors.html). But Scrum itself isn’t agile at all - it’s a very strict process that doesn’t allow for a lot of variability. As mentioned on [Scrum's home page](https://scrumguides.org/):
> While implementing only parts of Scrum is possible, the result is not Scrum. Scrum exists only in its entirety and functions well as a container for other techniques, methodologies, and practices.
So while Scrum allows running each of its activities the way you want, the _list_ of these activities is set in stone and can’t be altered. Otherwise it’s not Scrum.
And even though Scrum is more agile than a lot of corporate processes of the 1990s, compared to modern methodologies (like Just-in-Time, Theory of Constraints, and Continuous Delivery) it dictates too much and is really more anti-agile than agile.
# You don’t have to estimate tasks in Scrum
While the original Scrum Guide 2010 mandated that we estimate our Sprints, starting from the next version in 2011 [that stopped](https://scrumguides.org/revisions.html):
> The Development Team creates a forecast of work it believes will be done
So it’s not actually required to know how much time each task takes; what’s important is which tasks you think you’ll complete within the Sprint. You can use estimating for this, of course, but a simple hunch is fine too.
# You don’t have to measure Velocity
Nowhere in the [Scrum Guide](https://scrumguides.org/scrum-guide.html) will you find any mention of Velocity. Burn-down charts were also [removed in v2](https://scrumguides.org/revisions.html) as a requirement:
> Scrum does not mandate a burn-down chart to monitor progress.
Velocity is supposed to be used to help with estimating next sprints. But as I mentioned before - Scrum doesn’t mandate how you forecast Sprints. So neither Velocity nor Burn-down Charts are part of Scrum. But of course Scrum doesn’t preclude you from using these tools.
# Planning poker isn’t part of Scrum
A lot of people connect Planning Poker with Scrum. But again - it’s not part of it. Planning Poker is just one of a myriad of gamification techniques.
# Sprints always fail
This is sarcasm, but it’s [rooted in fundamental problems](http://qala.io/blog/you-must-fail-scrum-sprints.html) of Scrum.
|
1.0
|
Scrum is not what you think - This page addresses some common misconceptions of people who think they know what Scrum is. In reality, a lot of what people think of as Scrum is based on the 1st version of the Scrum Guide, which was drastically changed in 2011. And it kept changing in the following years.
*Disclaimer:* these days there’s no reason to use Scrum as there are more effective and agile methodologies/techniques available: Theory of Constraints, Lean’s Just-in-time, Continuous Delivery. But if you do use Scrum, at least you should understand what it is.
# It’s Scrum, not SCRUM
SCRUM isn’t an abbreviation - it’s a real word. Remember those football or rugby players, standing in a circle and viciously deciding on the plan in the next part of the game? Well, that’s Scrum.
And the name tells us the most important underlying idea of Scrum methodology/framework - it’s all about teamwork. All the activities within Scrum are supposed to be conducted with the whole team present, everyone should actively participate in the discussions.
# Scrum is not agile
The success of Scrum tightly relates to the success of Agile. But this is due to the excellent marketing campaign of Scrum.
Agile Manifesto was written in 2001 during a small gathering of software engineers, and the founders of Scrum (Ken Schwaber and Jeff Sutherland) were [part of that group](https://agilemanifesto.org/authors.html). But Scrum itself isn’t agile at all - it’s a very strict process that doesn’t allow for a lot of variability. As mentioned on [Scrum's home page](https://scrumguides.org/):
> While implementing only parts of Scrum is possible, the result is not Scrum. Scrum exists only in its entirety and functions well as a container for other techniques, methodologies, and practices.
So while Scrum allows running each of its activities the way you want, the _list_ of these activities is set in stone and can’t be altered. Otherwise it’s not Scrum.
And even though Scrum is more agile than a lot of corporate processes of the 1990s, compared to modern methodologies (like Just-in-Time, Theory of Constraints, and Continuous Delivery) it dictates too much and is really more anti-agile than agile.
# You don’t have to estimate tasks in Scrum
While the original Scrum Guide 2010 mandated that we estimate our Sprints, starting from the next version in 2011 [that stopped](https://scrumguides.org/revisions.html):
> The Development Team creates a forecast of work it believes will be done
So it’s not actually required to know how much time each task takes; what’s important is which tasks you think you’ll complete within the Sprint. You can use estimating for this, of course, but a simple hunch is fine too.
# You don’t have to measure Velocity
Nowhere in the [Scrum Guide](https://scrumguides.org/scrum-guide.html) you’ll find any mentioning of Velocity. Burn-down charts were also [removed in v2](https://scrumguides.org/revisions.html) as a requirement:
> Scrum does not mandate a burn-down chart to monitor progress.
Velocity is supposed to be used to help with estimating next sprints. But as I mentioned before - Scrum doesn’t mandate how you forecast Sprints. So neither Velocity nor Burn-down Charts are part of Scrum. But of course Scrum doesn’t preclude you from using these tools.
# Planning poker isn’t part of Scrum
A lot of people connect Planning Poker with Scrum. But again - it’s not part of it. Planning Poker is just one of a myriad of gamification techniques.
# Sprints always fail
This is sarcasm, but it’s [rooted in fundamental problems](http://qala.io/blog/you-must-fail-scrum-sprints.html) of Scrum.
|
process
|
scrum is not what you think this page describes answers to some common mistakes of people who think they know what scrum is in reality a lot of what people think of scrum is based on the version of scrum guide which was drastically changed in and it kept changing in the following years disclaimer these days there’s no reason to use scrum as there are more effective and agile methodologies techniques available theory of constraints lean’s just in time continuous delivery but if you do use scrum at least you should understand what it is it’s scrum not scrum scrum isn’t an abbreviation it’s a real word remember those football or rugby players standing in a circle and viciously deciding on the plan in the next part of the game well that’s scrum and the name tells us the most important underlying idea of scrum methodology framework it’s all about teamwork all the activities within scrum are supposed to be conducted with the whole team present everyone should actively participate in the discussions scrum is not agile the success of scrum tightly relates to the success of agile but this is due to the excellent marketing campaign of scrum agile manifesto was written in during a small gathering of software engineers and the founders of scrum ken schwaber and jeff sutherland were but scrum itself isn’t agile at all it’s a very strict process that doesn’t allow for a lot of variability as mentioned on while implementing only parts of scrum is possible the result is not scrum scrum exists only in its entirety and functions well as a container for other techniques methodologies and practices so while scrum allows running each of its activities the way you want the list of these activities is set in stone and can’t be altered otherwise it’s not scrum and even though scrum is more agile than a lot of corporate processes of the s compared to modern methodologies like just in time theory of constraints and continuous delivery it dictates too much and it’s really more of an anti agile you don’t have to estimate tasks in scrum while the original scrum guide mandated that we estimate our sprints starting from the next version in the development team creates a forecast of work it believes will be done so it’s not actually required to know how much time each task takes what’s important is what tasks you think you’ll complete within the sprint you can use estimating for this of course but a simple hunch is fine too you don’t have to measure velocity nowhere in the you’ll find any mentioning of velocity burn down charts were also as a requirement scrum does not mandate a burn down chart to monitor progress velocity is supposed to be used to help with estimating next sprints but as i mentioned before scrum doesn’t mandate how you forecast sprints so neither velocity nor burn down charts are part of scrum but of course scrum doesn’t preclude you from using these tools planning poker isn’t part of scrum a lot of people connect planning poker with scrum but again it’s not part of it planning poker is just one of a myriad of gamification techniques sprints always fail this is sarcasm but it’s of scrum
| 1
|
21,211
| 28,263,679,087
|
IssuesEvent
|
2023-04-07 03:26:25
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Release X.Y.Z - $MONTH $YEAR
|
P1 type: process release team-OSS
|
# Status of Bazel X.Y.Z
<!-- The first item is only needed for major releases (X.0.0) -->
- Target baseline: [date]
- Expected release date: [date]
- [List of release blockers](link-to-milestone)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into X.Y.Z, simply send a PR against the `release-X.Y.Z` branch.
**Task list:**
<!-- The first three items are only needed for major releases (X.0.0) -->
- [ ] Pick release baseline: [link to base commit]
- [ ] Create release candidate: X.Y.Zrc1
- [ ] Check downstream projects
- [ ] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. -->
- [ ] Send the release announcement PR for review: [link to bazel-blog PR] <!-- Only for major releases. -->
- [ ] Push the release and notify package maintainers: [link to comment notifying package maintainers]
- [ ] Update the documentation
- [ ] Push the blog post: [link to blog post] <!-- Only for major releases. -->
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
1.0
|
Release X.Y.Z - $MONTH $YEAR - # Status of Bazel X.Y.Z
<!-- The first item is only needed for major releases (X.0.0) -->
- Target baseline: [date]
- Expected release date: [date]
- [List of release blockers](link-to-milestone)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into X.Y.Z, simply send a PR against the `release-X.Y.Z` branch.
**Task list:**
<!-- The first three items are only needed for major releases (X.0.0) -->
- [ ] Pick release baseline: [link to base commit]
- [ ] Create release candidate: X.Y.Zrc1
- [ ] Check downstream projects
- [ ] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. -->
- [ ] Send the release announcement PR for review: [link to bazel-blog PR] <!-- Only for major releases. -->
- [ ] Push the release and notify package maintainers: [link to comment notifying package maintainers]
- [ ] Update the documentation
- [ ] Push the blog post: [link to blog post] <!-- Only for major releases. -->
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
process
|
release x y z month year status of bazel x y z target baseline expected release date link to milestone to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into x y z simply send a pr against the release x y z branch task list pick release baseline create release candidate x y check downstream projects create send the release announcement pr for review push the release and notify package maintainers update the documentation push the blog post update the
| 1
|
7,056
| 6,732,757,900
|
IssuesEvent
|
2017-10-18 12:45:06
|
kubernetes-incubator/kubespray
|
https://api.github.com/repos/kubernetes-incubator/kubespray
|
closed
|
Elevate individual tasks to super user only as needed
|
security
|
I was going to give kargo a try for rolling a kubernetes cluster but it got stuck at:
```
TASK [adduser : User | Create User Group] **************************************
fatal: [k8s-etcd-3]: FAILED! => {"changed": false, "failed": true, "msg": "groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n", "name": "kube-cert"}
fatal: [k8s-etcd-2]: FAILED! => {"changed": false, "failed": true, "msg": "groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n", "name": "kube-cert"}
fatal: [k8s-etcd-1]: FAILED! => {"changed": false, "failed": true, "msg": "groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n", "name": "kube-cert"}
```
I checked the ansible files and saw that the task had no `become: true` annotation, which seemed odd given that it requires super-user permissions. I asked about this and was told that people using kargo normally pass become on the command line. But that would mean running all tasks as a super user. This seems wrong; instead, only tasks requiring super-user privileges should become a super user.
|
True
|
Elevate individual tasks to super user only as needed - I was going to give kargo a try for rolling a kubernetes cluster but it got stuck at:
```
TASK [adduser : User | Create User Group] **************************************
fatal: [k8s-etcd-3]: FAILED! => {"changed": false, "failed": true, "msg": "groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n", "name": "kube-cert"}
fatal: [k8s-etcd-2]: FAILED! => {"changed": false, "failed": true, "msg": "groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n", "name": "kube-cert"}
fatal: [k8s-etcd-1]: FAILED! => {"changed": false, "failed": true, "msg": "groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n", "name": "kube-cert"}
```
I checked the ansible files and saw that the task had no `become: true` annotation, which seemed odd given that it requires super-user permissions. I asked about this and was told that people using kargo normally pass become on the command line. But that would mean running all tasks as a super user. This seems wrong; instead, only tasks requiring super-user privileges should become a super user.
|
non_process
|
elevate individual tasks to super user only as needed i was going to give kargo a try for rolling a kubernetes cluster but it got stuck at task fatal failed changed false failed true msg groupadd permission denied ngroupadd cannot lock etc group try again later n name kube cert fatal failed changed false failed true msg groupadd permission denied ngroupadd cannot lock etc group try again later n name kube cert fatal failed changed false failed true msg groupadd permission denied ngroupadd cannot lock etc group try again later n name kube cert i checked in the ansible files and it seemed that the task had no become true annotation which seemed odd given it requires super user permissions i asked about this and was told that people using kargo normally add the become on the command line but that would mean running all tasks as a super user this seems wrong and instead only tasks requiring super user privileges should become a super user
| 0
|
189,808
| 14,523,559,690
|
IssuesEvent
|
2020-12-14 10:16:13
|
kalexmills/github-vet-tests-dec2020
|
https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
|
closed
|
sawankumar/Fclone: vfs/read_write_test.go; 3 LoC
|
fresh test tiny
|
Found a possible issue in [sawankumar/Fclone](https://www.github.com/sawankumar/Fclone) at [vfs/read_write_test.go](https://github.com/sawankumar/Fclone/blob/1e8345214c4c02f99eedb94693ae0ef96e920bad/vfs/read_write_test.go#L670-L672)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to test at line 671 may start a goroutine
[Click here to see the code in its original context.](https://github.com/sawankumar/Fclone/blob/1e8345214c4c02f99eedb94693ae0ef96e920bad/vfs/read_write_test.go#L670-L672)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, test := range openTests {
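	// NB: &test takes the address of the single loop variable, which is
	// reused across iterations; if the callee retains the pointer (e.g. in
	// a goroutine), a per-iteration copy (test := test) is the usual fix.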
testRWFileHandleOpenTest(t, vfs, &test)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 1e8345214c4c02f99eedb94693ae0ef96e920bad
|
1.0
|
sawankumar/Fclone: vfs/read_write_test.go; 3 LoC -
Found a possible issue in [sawankumar/Fclone](https://www.github.com/sawankumar/Fclone) at [vfs/read_write_test.go](https://github.com/sawankumar/Fclone/blob/1e8345214c4c02f99eedb94693ae0ef96e920bad/vfs/read_write_test.go#L670-L672)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to test at line 671 may start a goroutine
[Click here to see the code in its original context.](https://github.com/sawankumar/Fclone/blob/1e8345214c4c02f99eedb94693ae0ef96e920bad/vfs/read_write_test.go#L670-L672)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, test := range openTests {
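	// NB: &test takes the address of the single loop variable, which is
	// reused across iterations; if the callee retains the pointer (e.g. in
	// a goroutine), a per-iteration copy (test := test) is the usual fix.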
testRWFileHandleOpenTest(t, vfs, &test)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 1e8345214c4c02f99eedb94693ae0ef96e920bad
|
non_process
|
sawankumar fclone vfs read write test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to test at line may start a goroutine click here to show the line s of go which triggered the analyzer go for test range opentests testrwfilehandleopentest t vfs test leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
| 0
|
128,047
| 27,183,935,356
|
IssuesEvent
|
2023-02-19 00:46:03
|
zardoy/typescript-vscode-plugins
|
https://api.github.com/repos/zardoy/typescript-vscode-plugins
|
opened
|
Refactoring: Split Declaration and Initialization
|
code-action
|
```ts
const a = true
// converted into
let a: boolean
a = true
```
So it's easier than wrapping it into a block.
Activation range: `const` keyword
id: `splitDeclarationAndInitialization`
and the kind, I think, should be `refactor.rewrite.split-declaration-and-initialization`
This is essentially the same as p42's split declaration and initialization plus infer type from usage, but IMO a big time-saver.
@tjx666 Maybe you will be interested in implementing this? I'm 100% sure about how it can be done, so I can help here for sure! First of all, it should use the changesTracker (the similar refactoring [changeStringReplaceToRegex](https://github.com/zardoy/typescript-vscode-plugins/blob/6c349d9d3464974989214d44f2d41e2c670584b4/typescript/src/codeActions/custom/changeStringReplaceToRegex.ts#L19) uses it)...
|
1.0
|
Refactoring: Split Declaration and Initialization - ```ts
const a = true
// converted into
let a: boolean
a = true
```
So it's easier than wrapping it into a block.
Activation range: `const` keyword
id: `splitDeclarationAndInitialization`
and the kind, I think, should be `refactor.rewrite.split-declaration-and-initialization`
This is essentially the same as p42's split declaration and initialization plus infer type from usage, but IMO a big time-saver.
@tjx666 Maybe you will be interested in implementing this? I'm 100% sure about how it can be done, so I can help here for sure! First of all, it should use the changesTracker (the similar refactoring [changeStringReplaceToRegex](https://github.com/zardoy/typescript-vscode-plugins/blob/6c349d9d3464974989214d44f2d41e2c670584b4/typescript/src/codeActions/custom/changeStringReplaceToRegex.ts#L19) uses it)...
|
non_process
|
refactoring split declaration and initialization ts const a true converted into let a boolean a true so its easier than to wrap into block activation range const keyword id splitdeclarationandinitialization and kind i think should be refactor rewrite split declaration and initialization this is essentially the same as split declaration and initialization infer type from usage but imo a big time saver maybe you will be interested in implementing this i m sure how it can be done so i can help here for sure first of all it should use changestracker similar refactoring uses it
| 0
|
8,888
| 11,984,916,913
|
IssuesEvent
|
2020-04-07 16:36:16
|
threefoldfoundation/tft-stellar
|
https://api.github.com/repos/threefoldfoundation/tft-stellar
|
closed
|
Add a locktime possibility to the faucet
|
priority_major process_wontfix
|
This will allow client implementers to more easily develop/test their wallets
|
1.0
|
Add a locktime possibility to the faucet - This will allow client implementers to more easily develop/test their wallets
|
process
|
add a locktime possibility to the faucet this will allow client implementers to more easily develop test their wallets
| 1
|
214,633
| 7,275,096,639
|
IssuesEvent
|
2018-02-21 12:23:28
|
mozilla/addons-frontend
|
https://api.github.com/repos/mozilla/addons-frontend
|
closed
|
Remove beta version support
|
priority: p3 triaged
|
This is part of https://github.com/mozilla/addons-server/issues/7163
The frontend currently exposes beta versions. Since we're going to remove this feature, we need to remove all references to beta versions, gated by a waffle flag.
|
1.0
|
Remove beta version support - This is part of https://github.com/mozilla/addons-server/issues/7163
The frontend currently exposes beta versions. Since we're going to remove this feature, we need to remove all references to beta versions, gated by a waffle flag.
|
non_process
|
remove beta version support this is part of the frontend currently exposes beta versions since we re going to remove this feature we need to remove all references to beta versions gated by a waffle flag
| 0
|
64,858
| 16,054,397,295
|
IssuesEvent
|
2021-04-23 01:07:14
|
spack/spack
|
https://api.github.com/repos/spack/spack
|
closed
|
Installation issue: hpctoolkit
|
build-error
|
<!-- Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue "Installation issue: <name-of-the-package>".
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! -->
### Steps to reproduce the issue
<!-- Fill in the exact spec you are trying to build and the relevant part of the error message -->
```console
$ spack install hpctoolkit
==> Installing hpctoolkit-2021.03.01-cu24uuwungpo3s52tki4m3qqf6kf4ag7
==> No binary for hpctoolkit-2021.03.01-cu24uuwungpo3s52tki4m3qqf6kf4ag7 found: installing from source
==> Using cached archive: /home/eschnetter/src/CarpetX/spack/var/spack/cache/_source-cache/git//HPCToolkit/hpctoolkit.git/68a051044c952f0f4dac459d9941875c700039e7.tar.gz
==> Warning: Fetching from mirror without a checksum!
This package is normally checked out from a version control system, but it has been archived on a spack mirror. This means we cannot know a checksum for the tarball in advance. Be sure that your connection to this mirror is secure!
==> Using cached archive: /home/eschnetter/src/CarpetX/spack/var/spack/cache/_source-cache/archive/f8/f8507c3ce9672c70c2db9f9deb5766c8120ea06e20866d0f553a17866e810b91
Reversed (or previously applied) patch detected! Assume -R? [n]
Apply anyway? [n]
1 out of 1 hunk ignored -- saving rejects to file src/tool/hpcrun/gpu/gpu-metrics.h.rej
==> Patch https://github.com/blue42u/hpctoolkit/commit/b3f6f9e4846d9256cf0d841465ff89d78c6bf422.patch failed.
==> Error: ProcessError: Command exited with status 1:
'/usr/bin/patch' '-s' '-p' '1' '-i' '/tmp/eschnetter/spack-stage/spack-stage-n4_0zhk7/b3f6f9e4846d9256cf0d841465ff89d78c6bf422.patch' '-d' '.'
```
### Information on your system
<!-- Please include the output of `spack debug report` -->
```
spack debug report
* **Spack:** 0.16.1-2176-85e70600ed
* **Python:** 3.6.9
* **Platform:** linux-ubuntu18.04-skylake_avx512
* **Concretizer:** original
```
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
### Additional information
<!-- Please upload the following files. They should be present in the stage directory of the failing build. Also upload any config.log or similar file if one exists. -->
* [spack-build-out.txt]()
* [spack-build-env.txt]()
I could not find these files – maybe they were not yet generated?
```
$ ls /tmp/eschnetter/spack-stage/*
/tmp/eschnetter/spack-stage/spack-stage-hpctoolkit-2021.03.01-cu24uuwungpo3s52tki4m3qqf6kf4ag7:
68a051044c952f0f4dac459d9941875c700039e7.tar.gz spack-src
```
<!-- Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and @mention them here if they exist. -->
@mwkrentel
### General information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [x] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [x] I have uploaded the build log and environment files
- [x] I have searched the issues of this repo and believe this is not a duplicate
|
1.0
|
Installation issue: hpctoolkit - <!-- Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue "Installation issue: <name-of-the-package>".
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! -->
### Steps to reproduce the issue
<!-- Fill in the exact spec you are trying to build and the relevant part of the error message -->
```console
$ spack install hpctoolkit
==> Installing hpctoolkit-2021.03.01-cu24uuwungpo3s52tki4m3qqf6kf4ag7
==> No binary for hpctoolkit-2021.03.01-cu24uuwungpo3s52tki4m3qqf6kf4ag7 found: installing from source
==> Using cached archive: /home/eschnetter/src/CarpetX/spack/var/spack/cache/_source-cache/git//HPCToolkit/hpctoolkit.git/68a051044c952f0f4dac459d9941875c700039e7.tar.gz
==> Warning: Fetching from mirror without a checksum!
This package is normally checked out from a version control system, but it has been archived on a spack mirror. This means we cannot know a checksum for the tarball in advance. Be sure that your connection to this mirror is secure!
==> Using cached archive: /home/eschnetter/src/CarpetX/spack/var/spack/cache/_source-cache/archive/f8/f8507c3ce9672c70c2db9f9deb5766c8120ea06e20866d0f553a17866e810b91
Reversed (or previously applied) patch detected! Assume -R? [n]
Apply anyway? [n]
1 out of 1 hunk ignored -- saving rejects to file src/tool/hpcrun/gpu/gpu-metrics.h.rej
==> Patch https://github.com/blue42u/hpctoolkit/commit/b3f6f9e4846d9256cf0d841465ff89d78c6bf422.patch failed.
==> Error: ProcessError: Command exited with status 1:
'/usr/bin/patch' '-s' '-p' '1' '-i' '/tmp/eschnetter/spack-stage/spack-stage-n4_0zhk7/b3f6f9e4846d9256cf0d841465ff89d78c6bf422.patch' '-d' '.'
```
### Information on your system
<!-- Please include the output of `spack debug report` -->
```
spack debug report
* **Spack:** 0.16.1-2176-85e70600ed
* **Python:** 3.6.9
* **Platform:** linux-ubuntu18.04-skylake_avx512
* **Concretizer:** original
```
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
### Additional information
<!-- Please upload the following files. They should be present in the stage directory of the failing build. Also upload any config.log or similar file if one exists. -->
* [spack-build-out.txt]()
* [spack-build-env.txt]()
I could not find these files – maybe they were not yet generated?
```
$ ls /tmp/eschnetter/spack-stage/*
/tmp/eschnetter/spack-stage/spack-stage-hpctoolkit-2021.03.01-cu24uuwungpo3s52tki4m3qqf6kf4ag7:
68a051044c952f0f4dac459d9941875c700039e7.tar.gz spack-src
```
<!-- Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and @mention them here if they exist. -->
@mwkrentel
### General information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [x] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [x] I have uploaded the build log and environment files
- [x] I have searched the issues of this repo and believe this is not a duplicate
|
non_process
|
installation issue hpctoolkit thanks for taking the time to report this build failure to proceed with the report please title the issue installation issue provide the information required below we encourage you to try as much as possible to reduce your problem to the minimal example that still reproduces the issue that would help us a lot in fixing it quickly and effectively steps to reproduce the issue console spack install hpctoolkit installing hpctoolkit no binary for hpctoolkit found installing from source using cached archive home eschnetter src carpetx spack var spack cache source cache git hpctoolkit hpctoolkit git tar gz warning fetching from mirror without a checksum this package is normally checked out from a version control system but it has been archived on a spack mirror this means we cannot know a checksum for the tarball in advance be sure that your connection to this mirror is secure using cached archive home eschnetter src carpetx spack var spack cache source cache archive reversed or previously applied patch detected assume r apply anyway out of hunk ignored saving rejects to file src tool hpcrun gpu gpu metrics h rej patch failed error processerror command exited with status usr bin patch s p i tmp eschnetter spack stage spack stage patch d information on your system spack debug report spack python platform linux skylake concretizer original additional information i could not find these files – maybe they were not yet generated ls tmp eschnetter spack stage tmp eschnetter spack stage spack stage hpctoolkit tar gz spack src and mention them here if they exist mwkrentel general information i have run spack debug report and reported the version of spack python platform i have run spack maintainers and mentioned any maintainers i have uploaded the build log and environment files i have searched the issues of this repo and believe this is not a duplicate
| 0
|
8,083
| 11,255,552,980
|
IssuesEvent
|
2020-01-12 10:12:57
|
CATcher-org/CATcher
|
https://api.github.com/repos/CATcher-org/CATcher
|
closed
|
Make AppImage executable before uploading
|
aspect-Process
|
Run the command `chmod +x CATcher.AppImage` before uploading the AppImage file on GitHub.
|
1.0
|
Make AppImage executable before uploading - Run the command `chmod +x CATcher.AppImage` before uploading the AppImage file on GitHub.
|
process
|
make appimage executable before uploading run this command chmod x catcher appimage before uploading appimage file on github
| 1
|
14,019
| 8,787,495,666
|
IssuesEvent
|
2018-12-20 18:52:23
|
NCAR/VAPOR
|
https://api.github.com/repos/NCAR/VAPOR
|
closed
|
Barbs: length and thickness scaling
|
Fixed High Performance Postponed Usability
|
The Length and Thickness scale parameters on the Barb Layout tab are not intuitive. They are labeled "scale", and one would expect a linear scaling (e.g. a value of 2 would double the length of the glyph), but this does not appear to be what is happening. When the DUKU data set is used with U10 and V10 set as the vector field, the ranges of valid values for the length and thickness are 0..58334.7 and 0..11.6669, respectively.
|
True
|
Barbs: length and thickness scaling - The Length and Thickness scale parameters on the Barb Layout tab are not intuitive. They are labeled "scale", and one would expect a linear scaling (e.g. a value of 2 would double the length of the glyph), but this does not appear to be what is happening. When the DUKU data set is used with U10 and V10 set as the vector field, the ranges of valid values for the length and thickness are 0..58334.7 and 0..11.6669, respectively.
|
non_process
|
barbs length and thickness scaling the length and thickness scale parameters on the barb layout tab are not intuitive they are labeled scale and one would expect a linear scaling e g a value of would double the length of the glyph but this does not appear to be what is happening when the duku data set is used with and set as the vector field the range of valid values for the length and thickness are and respectively
| 0
|
82,196
| 10,273,260,585
|
IssuesEvent
|
2019-08-23 18:46:20
|
square/workflow
|
https://api.github.com/repos/square/workflow
|
opened
|
testRender docs incorrectly state children must be instances of MockChildWorkflow
|
bug documentation
|
`RenderTester` does not actually require that specific subclass, it only requires that the child has no state (`Unit`), and that its `render` method doesn't do anything with the `RenderContext` (since the context that gets passed into the child is fake).
|
1.0
|
testRender docs incorrectly state children must be instances of MockChildWorkflow - `RenderTester` does not actually require that specific subclass, it only requires that the child has no state (`Unit`), and that its `render` method doesn't do anything with the `RenderContext` (since the context that gets passed into the child is fake).
|
non_process
|
testrender docs incorrectly state children must be instances of mockchildworkflow rendertester does not actually require that specific subclass it only requires that the child has no state unit and that its render method doesn t do anything with the rendercontext since the context that gets passed into the child is fake
| 0
|
20,044
| 26,529,555,904
|
IssuesEvent
|
2023-01-19 11:23:06
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Introspection of CockroachDB views
|
process/candidate topic: introspection topic: re-introspection tech/engines/introspection engine team/schema topic: cockroachdb topic: view kind/subtask
|
This might differ from PostgreSQL: https://github.com/prisma/prisma/issues/17413
The same things apply to relation re-introspection and the unique constraints.
Part of: https://github.com/prisma/prisma/issues/17412
|
1.0
|
Introspection of CockroachDB views - This might differ from PostgreSQL: https://github.com/prisma/prisma/issues/17413
The same things apply to relation re-introspection and the unique constraints.
Part of: https://github.com/prisma/prisma/issues/17412
|
process
|
introspection of cockroachdb views this might differ from postgresql same things apply about relation re introspection and the unique constraints part of
| 1
|
4,685
| 5,229,515,748
|
IssuesEvent
|
2017-01-29 04:59:01
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
opened
|
Please add https://github.com/brendan-duncan/archive to DEPS
|
area-infrastructure
|
I'm sorry.
It seems to be an overly complicated, multi-step process.
Probably it would be easier and faster if you guys did this.
CC @ricowind
|
1.0
|
Please add https://github.com/brendan-duncan/archive to DEPS - I'm sorry.
It seems to be an overly complicated, multi-step process.
Probably it would be easier and faster if you guys did this.
CC @ricowind
|
non_process
|
please add to deps i m sorry it seems too complicated and multi step process probably it would be easier and faster it you guys will do this cc ricowind
| 0
|
9,651
| 12,624,814,969
|
IssuesEvent
|
2020-06-14 08:29:54
|
shogun-toolbox/shogun
|
https://api.github.com/repos/shogun-toolbox/shogun
|
closed
|
port GP code to be compatible with new API
|
ML: Gaussian Process Tag: Cleanup Tag: Meta Examples
|
The goal of this issue is to be able to use the Gaussian Process framework of Shogun from the new API (factories, base classes). I.e. the issue is a sub-issue of #4463
In a nutshell, we want to replace all explicit constructor calls around GP objects `GaussianProcessRegression, ProbitLikelihood, ZeroMeanFunction, ExactInferenceMethod`, etc, with factory calls. And then fix everything downstream from there.
A good starting point is to try to port the current meta examples for GPs to the new API. This will bring up a number of issues
* missing base classes for GPs -> add them (C++)
* some specialized methods called in GPs -> refactor (C++) so they can be exposed via `Machine`, or potentially add a GP baseclass that has the method
* there will be cases where it is not 100% clear what the best solution is -> try to come up with a draft and send a PR, or ask in irc/mailing-list.
Once the meta examples are ported, continue with the GP notebook
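As a language-agnostic sketch of the constructor-to-factory move described above (generic Go purely for illustration, not Shogun's actual C++ API; every name below is hypothetical):
```go
package main

import "fmt"

// Machine is a minimal stand-in for a common base class.
type Machine interface {
	Train() string
}

type gaussianProcessRegression struct{}

func (gaussianProcessRegression) Train() string { return "trained GP regression" }

// machine is a factory: callers name the concrete type instead of
// invoking its constructor directly, so new types can be added
// without touching call sites.
func machine(name string) (Machine, error) {
	switch name {
	case "GaussianProcessRegression":
		return gaussianProcessRegression{}, nil
	default:
		return nil, fmt.Errorf("unknown machine %q", name)
	}
}

func main() {
	m, err := machine("GaussianProcessRegression")
	if err != nil {
		panic(err)
	}
	fmt.Println(m.Train())
}
```
The point of the factory style is that call sites depend only on the base interface and a name, which is what lets examples be written against the new API without referencing concrete classes.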
|
1.0
|
port GP code to be compatible with new API - The goal of this issue is to be able to use the Gaussian Process framework of Shogun from the new API (factories, base classes). I.e. the issue is a sub-issue of #4463
In a nutshell, we want to replace all explicit constructor calls around GP objects `GaussianProcessRegression, ProbitLikelihood, ZeroMeanFunction, ExactInferenceMethod`, etc, with factory calls. And then fix everything downstream from there.
A good starting point is to try to port the current meta examples for GPs to the new API. This will bring up a number of issues
* missing base classes for GPs -> add them (C++)
* some specialized methods called in GPs -> refactor (C++) so they can be exposed via `Machine`, or potentially add a GP baseclass that has the method
* there will be cases where it is not 100% clear what the best solution is -> try to come up with a draft and send a PR, or ask in irc/mailing-list.
Once the meta examples are ported, continue with the GP notebook
|
process
|
port gp code to be compatible with new api the goal of this issue is to be able to use the gaussian process framework of shogun from the new api factories base classes i e the issue is a sub issue of in a nutshell we want to replace all explicit constructor calls around gp objects gaussianprocessregression probitlikelihood zeromeanfunction exactinferencemethod etc with factory calls and then fix everything downstream from there a good starting point is to try to port the current meta examples for gps to the new api this will bring up a number of issues missing base classes for gps add them c some specialized methods called in gps refactor c so they can be exposed via machine or potentially add a gp baseclass that has the method there will be cases where it is not clear what the best solution try to come up with a draft and send a pr ask in irc mailing list once the meta examples are ported continue with the gp notebook
| 1
|
4,254
| 7,189,047,110
|
IssuesEvent
|
2018-02-02 12:33:23
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Possible use for chain-generated bloom filters
|
libs-etherlib status-inprocess type-enhancement
|
I have an idea: because I create my own blooms, I can remove the block-level and receipt-level blooms, which would greatly decrease the size of the data. However, on second thought, I might actually be able to use the chain-generated blooms in the function isTransactionofInterest to reject particular transactions more easily than asking (in --deep mode) to get the traces.
|
1.0
|
Possible use for chain-generated bloom filters - I have an idea: because I create my own blooms, I can remove the block-level and receipt-level blooms, which would greatly decrease the size of the data. However, on second thought, I might actually be able to use the chain-generated blooms in the function isTransactionofInterest to reject particular transactions more easily than asking (in --deep mode) to get the traces.
|
process
|
possible use for chain generated bloom filters i have an idea that because i create my own blooms that i can remove the block level and receipt level blooms which would greatly decrease the size of the data however on second thought i might actually be able the chain generated blooms in the function istransactionofinterest to reject particular transactions more easily than asking in deep mode to get the traces
| 1
|
262
| 2,552,628,697
|
IssuesEvent
|
2015-02-02 18:22:52
|
AdamsLair/duality
|
https://api.github.com/repos/AdamsLair/duality
|
closed
|
Provide Declarative Editor Window Interface
|
Editor Usability
|
## Problem
- In the current EditorPlugin implementation scheme, the plugin explicitly tells the editor's main form to create some menu entries.
- This requires all EditorPlugins to rely on AdamsLair.WinForms, as soon as they provide a new window.
- Changing method interfaces in AdamsLair.WinForms can break plugins that way.
## Solution
- Become more Declarative.
- Introduce an attribute to declare a menu item for a class deriving from DockContent.
- Let the editor's main window take care of this itself. No plugin-side interaction required.
|
True
|
Provide Declarative Editor Window Interface - ## Problem
- In the current EditorPlugin implementation scheme, the plugin explicitly tells the editor's main form to create some menu entries.
- This requires all EditorPlugins to rely on AdamsLair.WinForms, as soon as they provide a new window.
- Changing method interfaces in AdamsLair.WinForms can break plugins that way.
## Solution
- Become more Declarative.
- Introduce an attribute to declare a menu item for a class deriving from DockContent.
- Let the editor's main window take care of this itself. No plugin-side interaction required.
|
non_process
|
provide declarative editor window interface problem in the current editorplugin implementation scheme the plugin explicitly tells the editors main form to create some menu entries this requires all editorplugins to rely on adamslair winforms as soon as they provide a new window changing method interfaces in adamslair winforms can break plugins that way solution become more declarative introduce an attribute to declare a menu item for a class deriving from dockcontent let the editors main window take care of this itself no plugin side interaction required
| 0