Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
73,934 | 8,953,797,155 | IssuesEvent | 2019-01-25 20:36:09 | phetsims/energy-forms-and-changes | https://api.github.com/repos/phetsims/energy-forms-and-changes | closed | Behavior of fan on low energy rate transfer | design:general sim:legacy-bug type:question | I was just playing with the fan and noticed that when the energy rate of transfer is very low it sort of stops, then starts, then stops, as the actual energy chunks arrive. The wheel in contrast moves at a constant, but slower rate. I think it would be more physically accurate to make the fan more like the wheel -- a constant steady state. (Even if the chunks themselves come along slowly).

| 1.0 | Behavior of fan on low energy rate transfer - I was just playing with the fan and noticed that when the energy rate of transfer is very low it sort of stops, then starts, then stops, as the actual energy chunks arrive. The wheel in contrast moves at a constant, but slower rate. I think it would be more physically accurate to make the fan more like the wheel -- a constant steady state. (Even if the chunks themselves come along slowly).

| non_process | behavior of fan on low energy rate transfer i was just playing with the fan and noticed that when the energy rate of transfer is very low it sort of stops then starts then stops as the actual energy chunks arrive the wheel in contrast moves at a constant but slower rate i think it would be more physically accurate to make the fan more like the wheel a constant steady state even if the chunks themselves come along slowly | 0 |
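The steady-state behavior requested in this record amounts to low-pass filtering the bursty energy-delivery signal instead of reacting to each chunk. A minimal sketch of that idea (illustrative only; this is not PhET's actual implementation, and the function name, time constant, and frame rate are assumptions):

```python
def smooth_rates(chunk_rates, dt=1.0 / 60.0, tau=0.5):
    """Exponential moving average of a bursty per-frame energy rate.

    chunk_rates: per-frame energy delivery (mostly zero, with spikes
    when a chunk arrives). Returns a smoothed rate suitable for driving
    a steadily turning fan, like the wheel described in the issue.
    """
    alpha = dt / (tau + dt)  # filter coefficient derived from the time constant
    smoothed = []
    level = 0.0
    for r in chunk_rates:
        level += alpha * (r - level)  # move a fraction of the way toward the input
        smoothed.append(level)
    return smoothed
```

With a spiky input the filtered rate stays positive between chunk arrivals, so the fan would keep turning rather than stop-start.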
130,424 | 5,115,770,653 | IssuesEvent | 2017-01-06 22:58:52 | phetsims/chipper | https://api.github.com/repos/phetsims/chipper | closed | Add an eslint rule to check for uses of native constructors | priority:4-low | `image` and `text` are native to HTML5, and both have constructors that match Image and Text nodes of scenery. Because of this, it is easy to forget to include Image and Text in modules, and the resulting error is difficult to recognize if one is not familiar with it.
@andrewadare suggested that we include a linting rule that can check for missing `Text` and `Image` modules. In 3/24/16 dev meeting, we discussed and decided that this was something that could be worked on by @andrewadare and @jessegreenberg.
This tool would be helpful to those that are learning the PhET codebase. Once one is familiar with this error, it is standard to look for missing `Image` and `Text` modules, so adding a low priority tag.
| 1.0 | Add an eslint rule to check for uses of native constructors - `image` and `text` are native to HTML5, and both have constructors that match Image and Text nodes of scenery. Because of this, it is easy to forget to include Image and Text in modules, and the resulting error is difficult to recognize if one is not familiar with it.
@andrewadare suggested that we include a linting rule that can check for missing `Text` and `Image` modules. In 3/24/16 dev meeting, we discussed and decided that this was something that could be worked on by @andrewadare and @jessegreenberg.
This tool would be helpful to those that are learning the PhET codebase. Once one is familiar with this error, it is standard to look for missing `Image` and `Text` modules, so adding a low priority tag.
| non_process | add an eslint rule to check for uses of native constructors image and text are native to and both have constructors that match image and text nodes of scenery because of this it is easy to forget to include image and text in modules and the resulting error is difficult to recognize if one is not familiar with it andrewadare suggested that we include a linting rule that can check for missing text and image modules in dev meeting we discussed and decided that this was something that could be worked on by andrewadare and jessegreenberg this tool would be helpful to those that are learning the phet codebase once one is familiar with this error it is standard to look for missing image and text modules so adding a low priority tag | 0 |
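The rule proposed in this record can be prototyped outside eslint as a plain source scan. A rough sketch (a hypothetical helper, not the chipper lint rule that was actually written; a real eslint rule would walk the AST rather than use regexes) that flags names used as constructors without a matching require:

```python
import re

def find_missing_requires(source, names=("Text", "Image")):
    """Report names used as constructors (e.g. `new Text(...)`) that are
    never required in this AMD-style module. A crude textual
    approximation of the proposed eslint check."""
    missing = []
    for name in names:
        used = re.search(rf"\bnew\s+{name}\s*\(", source)
        required = re.search(rf"\brequire\s*\(\s*['\"][^'\"]*/{name}['\"]", source)
        if used and not required:
            missing.append(name)
    return missing
```

The native `Image` and `Text` constructors make the unrequired case run without an immediate error, which is exactly why a static check like this is useful.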
1,215 | 2,534,698,414 | IssuesEvent | 2015-01-25 07:27:53 | driftyco/ionic | https://api.github.com/repos/driftyco/ionic | closed | Clear input X | css design feature | Hi Ioniconians,
As always love the work you guys do.
I had a fantastic and simple feature request. Can you extend <input> to include a "clear-icon" attribute to add a little x in the input field that when tapped clears the input?
Very simple, I even might be able to build it myself. But I thought if you guys are already on making the best framework possible, I would throw it out there.
Thanks!
| 1.0 | Clear input X - Hi Ioniconians,
As always love the work you guys do.
I had a fantastic and simple feature request. Can you extend <input> to include a "clear-icon" attribute to add a little x in the input field that when tapped clears the input?
Very simple, I even might be able to build it myself. But I thought if you guys are already on making the best framework possible, I would throw it out there.
Thanks!
| non_process | clear input x hi ioniconians as always love the work you guys do i had a fantastic and simple feature request can you extend to include a clear icon attribute to add a little x in the input field that when tapped clears the input very simple i even might be able to build it myself but i thought if you guys are already on making the best framework possible i would throw it out there thanks | 0 |
21,770 | 30,287,428,262 | IssuesEvent | 2023-07-08 21:34:06 | winter-telescope/mirar | https://api.github.com/repos/winter-telescope/mirar | closed | Nan/zero in swarp | enhancement nearfuture processors | To quote @virajkaram:
"Swarp sets masked pixels to zero when it resamples, but the other processors only mask nans. This affects the subtractions"
Right now we mask zeros when loading raw images. Maybe we should try making this self-contained/do such things in the swarp processor. | 1.0 | Nan/zero in swarp - To quote @virajkaram:
"Swarp sets masked pixels to zero when it resamples, but the other processors only mask nans. This affects the subtractions"
Right now we mask zeros when loading raw images. Maybe we should try making this self-contained/do such things in the swarp processor. | process | nan zero in swarp to quote virajkaram swarp sets masked pixels to zero when it resamples but the other processors only mask nans this affects the subtractions right now we mask zeros when loading raw images maybe we should try making this self contained do such things in the swarp processor | 1 |
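The masking step this record describes, converting Swarp's zero-filled masked pixels back to NaN after resampling, can be sketched in a few lines of numpy (illustrative only, not mirar's actual processor code; the function name and epsilon are assumptions):

```python
import numpy as np

def remask_zeros(resampled, eps=1e-9):
    """Convert exact zeros produced by Swarp for masked pixels back to
    NaN, so downstream processors that only mask NaNs (e.g. image
    subtraction) treat them consistently."""
    data = np.asarray(resampled, dtype=float).copy()
    data[np.abs(data) < eps] = np.nan  # treat near-zero pixels as masked
    return data
```

A caveat the issue hints at: genuinely zero-valued sky pixels would be masked too, which is one reason to make this self-contained inside the swarp processor, where the true mask is still known.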
511 | 2,981,826,050 | IssuesEvent | 2015-07-17 06:02:38 | e-government-ua/i | https://api.github.com/repos/e-government-ua/i | closed | On the central backend (wf-central), rework the getDocumentFile service so that access can be obtained not only by the document's owner, but also by anyone it has been shared with | hi priority In process of testing test version | Besides the check against nID_Subject
Implement an alternative check via the logic of the service:
getDocumentAccessByHandler (take a code sample from there)
and the optional parameters passed in:
sCode_DocumentAccess
nID_DocumentOperator_SubjectOrgan
nID_DocumentType
sPass
- If the call did not throw an exception, process the method as usual.
- Perform the alternative check only if at least the first 3 parameters were passed. | 1.0 | On the central backend (wf-central), rework the getDocumentFile service so that access can be obtained not only by the document's owner, but also by anyone it has been shared with - Besides the check against nID_Subject
Implement an alternative check via the logic of the service:
getDocumentAccessByHandler (take a code sample from there)
and the optional parameters passed in:
sCode_DocumentAccess
nID_DocumentOperator_SubjectOrgan
nID_DocumentType
sPass
- If the call did not throw an exception, process the method as usual.
- Perform the alternative check only if at least the first 3 parameters were passed. | process | on the central backend wf central rework the getdocumentfile service so that access can be obtained not only by the document s owner but also by anyone it has been shared with besides the check against nid subject implement an alternative check via the logic of the service getdocumentaccessbyhandler take a code sample from there and the optional parameters passed in scode documentaccess nid documentoperator subjectorgan nid documenttype spass if the call did not throw an exception process the method as usual perform the alternative check only if at least the first parameters were passed | 1 |
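The access-control flow in this ticket can be sketched as follows. The parameter and service names come from the ticket itself; everything else (the function signature, the exception-based denial, the owner check against nID_Subject) is an assumed approximation of the wf-central behavior:

```python
def can_access_document(requester_id, owner_id, access_params, check_access_by_handler):
    """Grant access if the requester owns the document, or else if the
    alternative getDocumentAccessByHandler-style check succeeds.

    access_params: dict with the optional parameters from the ticket
    (sCode_DocumentAccess, nID_DocumentOperator_SubjectOrgan,
    nID_DocumentType, sPass). The alternative check only runs when at
    least the first three are present; check_access_by_handler is
    expected to raise on denial.
    """
    if requester_id == owner_id:
        return True  # original nID_Subject ownership check
    required = ("sCode_DocumentAccess", "nID_DocumentOperator_SubjectOrgan", "nID_DocumentType")
    if not all(access_params.get(k) is not None for k in required):
        return False  # alternative check requires the first 3 parameters
    try:
        check_access_by_handler(**access_params)  # raises if access is denied
    except Exception:
        return False
    return True
```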
11,618 | 14,481,529,463 | IssuesEvent | 2020-12-10 12:46:07 | e4exp/paper_manager_abstract | https://api.github.com/repos/e4exp/paper_manager_abstract | opened | Structure-Aware Procedural Text Generation from an Image Sequence | 2020 Natural Language Processing Recipe Generation Tree Structure _read_later | * 2020
* http://www.lsta.media.kyoto-u.ac.jp/mori/research/public/nishimura-IEEE20.pdf
* Commentary: https://medium.com/sinicx/ieee-access%E3%81%AB%E8%AB%96%E6%96%87%E3%81%8C%E6%8E%A1%E9%8C%B2%E3%81%95%E3%82%8C%E3%81%BE%E3%81%97%E3%81%9F-6fb4f6e35384
Combining materials to create new value is an important activity in our society.
Procedural texts, from everyday cooking to the manufacture of industrial products, describe how to do so, allowing readers to reproduce the steps of these activities.
As prior work on natural language understanding has pointed out, one important property of procedural text is context dependency, i.e. the merging operations on materials, which can be represented as a graph or tree structure.
This paper aims to investigate the effect of explicitly introducing such context dependency on the vision-and-language task of generating procedural text from an image sequence.
To this end, we
(1) propose a new dataset that extends the definition of merged trees to a vision-and-language version, and
(2) propose a new structure-aware procedural text generation model that efficiently learns context dependency.
Experimental results show that the proposed method improves the performance of conventional general-purpose methods.
| 1.0 | Structure-Aware Procedural Text Generation from an Image Sequence - * 2020
* http://www.lsta.media.kyoto-u.ac.jp/mori/research/public/nishimura-IEEE20.pdf
* Commentary: https://medium.com/sinicx/ieee-access%E3%81%AB%E8%AB%96%E6%96%87%E3%81%8C%E6%8E%A1%E9%8C%B2%E3%81%95%E3%82%8C%E3%81%BE%E3%81%97%E3%81%9F-6fb4f6e35384
Combining materials to create new value is an important activity in our society.
Procedural texts, from everyday cooking to the manufacture of industrial products, describe how to do so, allowing readers to reproduce the steps of these activities.
As prior work on natural language understanding has pointed out, one important property of procedural text is context dependency, i.e. the merging operations on materials, which can be represented as a graph or tree structure.
This paper aims to investigate the effect of explicitly introducing such context dependency on the vision-and-language task of generating procedural text from an image sequence.
To this end, we
(1) propose a new dataset that extends the definition of merged trees to a vision-and-language version, and
(2) propose a new structure-aware procedural text generation model that efficiently learns context dependency.
Experimental results show that the proposed method improves the performance of conventional general-purpose methods.
| process | structure aware procedural text generation from an image sequence commentary combining materials to create new value is an important activity in our society procedural texts from everyday cooking to the manufacture of industrial products describe how to do so allowing readers to reproduce the steps of these activities as prior work on natural language understanding has pointed out one important property of procedural text is context dependency i e the merging operations on materials which can be represented as a graph or tree structure this paper aims to investigate the effect of explicitly introducing such context dependency on the vision and language task of generating procedural text from an image sequence to this end we propose a new dataset that extends the definition of merged trees to a vision and language version and propose a new structure aware procedural text generation model that efficiently learns context dependency experimental results show that the proposed method improves the performance of conventional general purpose methods | 1 |
19,213 | 25,347,066,660 | IssuesEvent | 2022-11-19 10:21:00 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | GO:0007035 ! vacuolar acidification | Other term-related request low priority AspGD-CGD parent relationship query cellular processes | compare:
[Term]
id: GO:0007035
name: vacuolar acidification
namespace: biological_process
def: "Any process that reduces the pH of the vacuole, measured by the concentration of the hydrogen ion." [GOC:jid]
is_a: GO:0051452 ! intracellular pH reduction
intersection_of: GO:0045851 ! pH reduction
intersection_of: occurs_in GO:0005773 ! vacuole
and:
[Term]
id: GO:0007042
name: lysosomal lumen acidification
namespace: biological_process
def: "Any process that reduces the pH of the lysosomal lumen, measured by the concentration of the hydrogen ion." [GOC:jid]
synonym: "lysosome pH reduction" EXACT [GOC:bf, GOC:rph]
is_a: GO:0035751 ! regulation of lysosomal lumen pH
is_a: GO:0051452 ! intracellular pH reduction
one has a logical def, the other doesn't. One is in the regulation hierarchy, the other isn't. One explicitly mentions the lumen, the other doesn't (the acidification is presumably always in the lumen)
on a side note, should the child terms here not have negative and positive synonyms?
GO:0035751 ! regulation of lysosomal lumen pH
~~~ is_a GO:0007042 ! lysosomal lumen acidification
~~~ is_a GO:0035752 ! lysosomal lumen pH elevation
What should the general template for acidification terms be?
One option is to treat them all as regulation terms.
We have to be careful with occurs_in. It is strong, because
participates_in o occurs_in --> part_of (at some time)
This means that if G actively participates in vacuolar acidification, and this occurs in the vacuole (or even more specifically, the lumen) then G is at some time part of the vacuole (or lumen)
The safest thing might be to remove the acidification logical defs from the main GO, bounce them back to bp_xp_cc, and then put them back in later this year when we consider others of the same broad category. It may be safest to use the pattern of very specification relations here (and reference the lumen specifically)
one more thing
[Term]
id: GO:0035751
name: regulation of lysosomal lumen pH
namespace: biological_process
def: "Any process that modulates the pH of the lysosomal lumen, measured by the concentration of the hydrogen ion." [GOC:rph]
is_a: GO:0051453 ! regulation of intracellular pH
relationship: part_of GO:0007040 ! lysosome organization
How do we infer that this process is part of lysosome organization? The organization def template doesn't really mention anything about acidification. Crudely speaking, it seems more about shuffling around of parts (I guess the protons are parts...)
Reported by: cmungall
Original Ticket: [geneontology/ontology-requests/8986](https://sourceforge.net/p/geneontology/ontology-requests/8986)
| 1.0 | GO:0007035 ! vacuolar acidification - compare:
[Term]
id: GO:0007035
name: vacuolar acidification
namespace: biological_process
def: "Any process that reduces the pH of the vacuole, measured by the concentration of the hydrogen ion." [GOC:jid]
is_a: GO:0051452 ! intracellular pH reduction
intersection_of: GO:0045851 ! pH reduction
intersection_of: occurs_in GO:0005773 ! vacuole
and:
[Term]
id: GO:0007042
name: lysosomal lumen acidification
namespace: biological_process
def: "Any process that reduces the pH of the lysosomal lumen, measured by the concentration of the hydrogen ion." [GOC:jid]
synonym: "lysosome pH reduction" EXACT [GOC:bf, GOC:rph]
is_a: GO:0035751 ! regulation of lysosomal lumen pH
is_a: GO:0051452 ! intracellular pH reduction
one has a logical def, the other doesn't. One is in the regulation hierarchy, the other isn't. One explicitly mentions the lumen, the other doesn't (the acidification is presumably always in the lumen)
on a side note, should the child terms here not have negative and positive synonyms?
GO:0035751 ! regulation of lysosomal lumen pH
~~~ is_a GO:0007042 ! lysosomal lumen acidification
~~~ is_a GO:0035752 ! lysosomal lumen pH elevation
What should the general template for acidification terms be?
One option is to treat them all as regulation terms.
We have to be careful with occurs_in. It is strong, because
participates_in o occurs_in --> part_of (at some time)
This means that if G actively participates in vacuolar acidification, and this occurs in the vacuole (or even more specifically, the lumen) then G is at some time part of the vacuole (or lumen)
The safest thing might be to remove the acidification logical defs from the main GO, bounce them back to bp_xp_cc, and then put them back in later this year when we consider others of the same broad category. It may be safest to use the pattern of very specification relations here (and reference the lumen specifically)
one more thing
[Term]
id: GO:0035751
name: regulation of lysosomal lumen pH
namespace: biological_process
def: "Any process that modulates the pH of the lysosomal lumen, measured by the concentration of the hydrogen ion." [GOC:rph]
is_a: GO:0051453 ! regulation of intracellular pH
relationship: part_of GO:0007040 ! lysosome organization
How do we infer that this process is part of lysosome organization? The organization def template doesn't really mention anything about acidification. Crudely speaking, it seems more about shuffling around of parts (I guess the protons are parts...)
Reported by: cmungall
Original Ticket: [geneontology/ontology-requests/8986](https://sourceforge.net/p/geneontology/ontology-requests/8986)
| process | go vacuolar acidification compare id go name vacuolar acidification namespace biological process def any process that reduces the ph of the vacuole measured by the concentration of the hydrogen ion is a go intracellular ph reduction intersection of go ph reduction intersection of occurs in go vacuole and id go name lysosomal lumen acidification namespace biological process def any process that reduces the ph of the lysosomal lumen measured by the concentration of the hydrogen ion synonym lysosome ph reduction exact is a go regulation of lysosomal lumen ph is a go intracellular ph reduction one has a logical def the other doesn t one is in the regulation hierarchy the other isn t one explicitly mentions the lumen the other doesn t the acidification is presumably always in the lumen on a side note should the child terms here not have negative and positive synonyms go regulation of lysosomal lumen ph is a go lysosomal lumen acidification is a go lysosomal lumen ph elevation what should the general template for acidification terms be one option is to treat them all as regulation terms we have to be careful with occurs in it is strong because participates in o occurs in gt part of at some time this means that if g actively participates in vacuolar acidification and this occurs in the vacuole or even more specifically the lumen then g is at some time part of the vacuole or lumen the safest thing might be to remove the acidification logical defs from the main go bounce them back to bp xp cc and then put them back in later this year when we consider others of the same broad category it may be safest to use the pattern of very specification relations here and reference the lumen specifically one more thing id go name regulation of lysosomal lumen ph namespace biological process def any process that modulates the ph of the lysosomal lumen measured by the concentration of the hydrogen ion is a go regulation of intracellular ph relationship part of go lysosome 
organization how do we infer that this process is part of lysosome organization the organization def template doesn t really mention anything about acidification crudely speaking it seems more about shuffling around of parts i guess the protons are parts reported by cmungall original ticket | 1 |
525,453 | 15,253,739,017 | IssuesEvent | 2021-02-20 09:02:16 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | Need to remove unused analytics usage | APIM-ANALYTICS Priority/Normal Type/Improvement | - Need to remove SP based analytics usage
- this includes handlers and other attributes added to the synapse configs | 1.0 | Need to remove unused analytics usage - - Need to remove SP based analytics usage
- this includes handlers and other attributes added to the synapse configs | non_process | need to remove unused analytics usage need to remove sp based analytics usage this includes handlers and other attributes added to the synapse configs | 0 |
6,361 | 9,416,160,818 | IssuesEvent | 2019-04-10 14:08:50 | AmpersandTarski/Ampersand | https://api.github.com/repos/AmpersandTarski/Ampersand | closed | stack install fails on MacBook | OSX priority:normal software process | #### Version of ampersand that was used
This problem occurred on commit c0239e76195cbf7692b5bb7e15dbcdb7aafd60cf, on the development branch of the github Ampersand repository.
#### What I expected
Since the purpose of stack is to build in platform independent ways, I expected "stack install" to build Ampersand for me. Since the configuration files `package.yaml` and `stack.yaml` contain no commands and only (static) configuration, it should work on my Mac...
#### What happened instead
The command "stack install" fails quite quickly:
<img width="682" alt="Screenshot 2019-04-10 at 13 30 52" src="https://user-images.githubusercontent.com/275768/55875911-8bc9f000-5b96-11e9-9995-c06879796b43.png">
#### Research
I tried something different. I tossed out `~/.stack` and `.stack-work` from my working directory (`~/git/Ampersand`, which is a clone of the git repo). This means that all of stack's history is gone and the thing is built entirely from scratch. This is what happened:
<img width="682" alt="Screenshot 2019-04-10 at 14 41 38" src="https://user-images.githubusercontent.com/275768/55879644-65f51900-5b9f-11e9-9d6f-63e8e3f0817b.png">
What I don't get is that the `package.yaml` file specifies `tested-with: GHC == 8.6.3`. To add to the confusion, the `stack.yaml` file comments: `Make sure we can already use GHC 7.10.3`.
This confusion points me towards the `stack.yaml` file in my working directory. Indeed, the `nightly-2018-11-24` comes from this file. So I changed it into the most recent one `lts-13.16`. That produces the following result:
<img width="682" alt="Screenshot 2019-04-10 at 15 14 36" src="https://user-images.githubusercontent.com/275768/55881673-61326400-5ba3-11e9-973f-8dce2dcafaa9.png">
This looks good: it brings me further. Time to retry the `stack install`:
<img width="682" alt="Screenshot 2019-04-10 at 15 19 56" src="https://user-images.githubusercontent.com/275768/55882261-82e01b00-5ba4-11e9-8789-1db213615fc7.png">
Now, this is going somewhere. With the help of stack I had to upgrade some extra-dependencies in `stack.yaml` and some dependencies in `package.yaml`:
<img width="1207" alt="Screenshot 2019-04-10 at 15 49 46" src="https://user-images.githubusercontent.com/275768/55884538-88d7fb00-5ba8-11e9-9182-1ab8624b39c2.png">
This is sufficient to get the compiler going. Since I cleaned out the entire stack tree, this takes a while. That gives me time to document what I have been doing. And lo and behold, here is my compiled Ampersand compiler:
<img width="682" alt="Screenshot 2019-04-10 at 16 06 49" src="https://user-images.githubusercontent.com/275768/55885733-c047a700-5baa-11e9-8f72-53938cf2b9cf.png">
| 1.0 | stack install fails on MacBook - #### Version of ampersand that was used
This problem occurred on commit c0239e76195cbf7692b5bb7e15dbcdb7aafd60cf, on the development branch of the github Ampersand repository.
#### What I expected
Since the purpose of stack is to build in platform independent ways, I expected "stack install" to build Ampersand for me. Since the configuration files `package.yaml` and `stack.yaml` contain no commands and only (static) configuration, it should work on my Mac...
#### What happened instead
The command "stack install" fails quite quickly:
<img width="682" alt="Screenshot 2019-04-10 at 13 30 52" src="https://user-images.githubusercontent.com/275768/55875911-8bc9f000-5b96-11e9-9995-c06879796b43.png">
#### Research
I tried something different. I tossed out `~/.stack` and `.stack-work` from my working directory (`~/git/Ampersand`, which is a clone of the git repo). This means that all of stack's history is gone and the thing is built entirely from scratch. This is what happened:
<img width="682" alt="Screenshot 2019-04-10 at 14 41 38" src="https://user-images.githubusercontent.com/275768/55879644-65f51900-5b9f-11e9-9d6f-63e8e3f0817b.png">
What I don't get is that the `package.yaml` file specifies `tested-with: GHC == 8.6.3`. To add to the confusion, the `stack.yaml` file comments: `Make sure we can already use GHC 7.10.3`.
This confusion points me towards the `stack.yaml` file in my working directory. Indeed, the `nightly-2018-11-24` comes from this file. So I changed it into the most recent one `lts-13.16`. That produces the following result:
<img width="682" alt="Screenshot 2019-04-10 at 15 14 36" src="https://user-images.githubusercontent.com/275768/55881673-61326400-5ba3-11e9-973f-8dce2dcafaa9.png">
This looks good: it brings me further. Time to retry the `stack install`:
<img width="682" alt="Screenshot 2019-04-10 at 15 19 56" src="https://user-images.githubusercontent.com/275768/55882261-82e01b00-5ba4-11e9-8789-1db213615fc7.png">
Now, this is going somewhere. With the help of stack I had to upgrade some extra-dependencies in `stack.yaml` and some dependencies in `package.yaml`:
<img width="1207" alt="Screenshot 2019-04-10 at 15 49 46" src="https://user-images.githubusercontent.com/275768/55884538-88d7fb00-5ba8-11e9-9182-1ab8624b39c2.png">
This is sufficient to get the compiler going. Since I cleaned out the entire stack tree, this takes a while. That gives me time to document what I have been doing. And lo and behold, here is my compiled Ampersand compiler:
<img width="682" alt="Screenshot 2019-04-10 at 16 06 49" src="https://user-images.githubusercontent.com/275768/55885733-c047a700-5baa-11e9-8f72-53938cf2b9cf.png">
| process | stack install fails on macbook version of ampersand that was used this problem occurred on commit on the development branch of the github ampersand repository what i expected since the purpose of stack is to build in platform independent ways i expected stack install to build ampersand for me since the configuration files package yaml and stack yaml contain no commands and only static configuration it should work on my mac what happened instead the command stack install fails quite quickly img width alt screenshot at src research i tried something different i tossed out stack and stack work from my working directory git ampersand which is a clone of the git repo this means that all of stack s history is gone and the thing is built entirely from scratch this is what happened img width alt screenshot at src what i don t get is that the package yaml file specifies tested with ghc to add to the confusion the stack yaml file comments make sure we can already use ghc this confusion points me towards the stack yaml file in my working directory indeed the nightly comes from this file so i changed it into the most recent one lts that produces the following result img width alt screenshot at src this looks good it brings me further time to retry the stack install img width alt screenshot at src now this is going somewhere with the help of stack i had to upgrade some extra dependencies in stack yaml and some dependencies in package yaml img width alt screenshot at src this is sufficient to get the compiler going since i cleaned out the entire stack tree this takes a while that gives me time to document what i have been doing and lo and behold here is my compiled ampersand compiler img width alt screenshot at src | 1 |
10,580 | 13,389,473,781 | IssuesEvent | 2020-09-02 18:56:34 | prisma/prisma | https://api.github.com/repos/prisma/prisma | closed | Re-Introspection does not output list of fields that `cuid()` was applied to | 2.5 bug/2-confirmed kind/bug process/candidate team/typescript topic: re-introspection |

_Originally posted by @alan345 in https://github.com/prisma/upgrade/issues/59#issuecomment-675741019_ | 1.0 | Re-Introspection does not output list of fields that `cuid()` was applied to -

_Originally posted by @alan345 in https://github.com/prisma/upgrade/issues/59#issuecomment-675741019_ | process | re introspection does not output list of fields that cuid was applied to originally posted by in | 1 |
18,223 | 24,284,461,192 | IssuesEvent | 2022-09-28 20:33:47 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | Mono Bionic arm64 CI Failure: System.Diagnostics.Tests.ProcessStartInfoTests, not a valid Base-64 string | arch-arm64 area-System.Diagnostics.Process area-CoreLib-mono test-failure | First saw this in a release/7.0 backport PR: https://github.com/dotnet/runtime/pull/76052
Please help determine if a fix needs to get backported to 7.0.
- Queue: Build Linux_bionic arm64 Release AllSubsets_Mono
- Tests: System.Diagnostics.Process.Tests
- Job results: https://dev.azure.com/dnceng-public/public/_build/results?buildId=29620&view=logs&j=30e4a6b3-73be-5b96-c535-0d9023575f9c&t=80b5ca1e-012b-5240-15a1-0677df5434af
- Log file: https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-76052-merge-e753eaafe08545388c/System.Diagnostics.Process.Tests/1/console.de78222a.log?helixlogtype=result
<details>
<summary>Callstack</summary>
```
Std out:
Discovering: System.Diagnostics.Process.Tests (method display = ClassAndMethod, method display options = None)
Discovered: System.Diagnostics.Process.Tests (found 257 of 317 test cases)
Starting: System.Diagnostics.Process.Tests (parallel test collections = on, max threads = 8)
System.FormatException : The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters.
Stack Trace:
/_/src/libraries/System.Private.CoreLib/src/System/Convert.cs(2906,0): at System.Convert.FromBase64_ComputeResultLength(Char* inputPtr, Int32 inputLength)
/_/src/libraries/System.Private.CoreLib/src/System/Convert.cs(2836,0): at System.Convert.FromBase64CharPtr(Char* inputPtr, Int32 inputLength)
/_/src/libraries/System.Private.CoreLib/src/System/Convert.cs(2643,0): at System.Convert.FromBase64String(String s)
/_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs(373,0): at System.Diagnostics.Tests.ProcessStartInfoTests.<>c.<ExecuteProcessAndReturnParsedOutput>b__9_0(String s)
# Mono Bionic arm64 CI Failure: System.Diagnostics.Tests.ProcessStartInfoTests, not a valid Base-64 string

First saw this in a release/7.0 backport PR: https://github.com/dotnet/runtime/pull/76052
Please help determine if a fix needs to get backported to 7.0.
- Queue: Build Linux_bionic arm64 Release AllSubsets_Mono
- Tests: System.Diagnostics.Process.Tests
- Job results: https://dev.azure.com/dnceng-public/public/_build/results?buildId=29620&view=logs&j=30e4a6b3-73be-5b96-c535-0d9023575f9c&t=80b5ca1e-012b-5240-15a1-0677df5434af
- Log file: https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-76052-merge-e753eaafe08545388c/System.Diagnostics.Process.Tests/1/console.de78222a.log?helixlogtype=result
<details>
<summary>Callstack</summary>
```
Std out:
Discovering: System.Diagnostics.Process.Tests (method display = ClassAndMethod, method display options = None)
Discovered: System.Diagnostics.Process.Tests (found 257 of 317 test cases)
Starting: System.Diagnostics.Process.Tests (parallel test collections = on, max threads = 8)
System.FormatException : The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters.
Stack Trace:
/_/src/libraries/System.Private.CoreLib/src/System/Convert.cs(2906,0): at System.Convert.FromBase64_ComputeResultLength(Char* inputPtr, Int32 inputLength)
/_/src/libraries/System.Private.CoreLib/src/System/Convert.cs(2836,0): at System.Convert.FromBase64CharPtr(Char* inputPtr, Int32 inputLength)
/_/src/libraries/System.Private.CoreLib/src/System/Convert.cs(2643,0): at System.Convert.FromBase64String(String s)
/_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs(373,0): at System.Diagnostics.Tests.ProcessStartInfoTests.<>c.<ExecuteProcessAndReturnParsedOutput>b__9_0(String s)
/_/src/libraries/System.Linq/src/System/Linq/Select.SpeedOpt.cs(83,0): at System.Linq.Enumerable.SelectArrayIterator`2[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e],[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].ToArray()
/_/src/libraries/System.Linq/src/System/Linq/ToCollection.cs(17,0): at System.Linq.Enumerable.ToArray[String](IEnumerable`1 source)
/_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs(373,0): at System.Diagnostics.Tests.ProcessStartInfoTests.ExecuteProcessAndReturnParsedOutput(Dictionary`2 envVars, Func`1 processWork)
/_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs(289,0): at System.Diagnostics.Tests.ProcessStartInfoTests.EnvironmentGetEnvironmentVariablesIsCaseSensitive()
/_/src/mono/System.Private.CoreLib/src/System/Reflection/MethodInvoker.Mono.cs(33,0): at System.Reflection.MethodInvoker.InterpretedInvoke(Object obj, Span`1 args, BindingFlags invokeAttr)
System.FormatException : The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters.
Stack Trace:
/_/src/libraries/System.Private.CoreLib/src/System/Convert.cs(2906,0): at System.Convert.FromBase64_ComputeResultLength(Char* inputPtr, Int32 inputLength)
/_/src/libraries/System.Private.CoreLib/src/System/Convert.cs(2836,0): at System.Convert.FromBase64CharPtr(Char* inputPtr, Int32 inputLength)
/_/src/libraries/System.Private.CoreLib/src/System/Convert.cs(2643,0): at System.Convert.FromBase64String(String s)
/_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs(373,0): at System.Diagnostics.Tests.ProcessStartInfoTests.<>c.<ExecuteProcessAndReturnParsedOutput>b__9_0(String s)
/_/src/libraries/System.Linq/src/System/Linq/Select.SpeedOpt.cs(83,0): at System.Linq.Enumerable.SelectArrayIterator`2[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e],[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].ToArray()
/_/src/libraries/System.Linq/src/System/Linq/ToCollection.cs(17,0): at System.Linq.Enumerable.ToArray[String](IEnumerable`1 source)
/_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs(373,0): at System.Diagnostics.Tests.ProcessStartInfoTests.ExecuteProcessAndReturnParsedOutput(Dictionary`2 envVars, Func`1 processWork)
/_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs(336,0): at System.Diagnostics.Tests.ProcessStartInfoTests.ProcessStartInfoEnvironmentVariablesDoesNotThrowForCaseSensitiveDuplicates()
/_/src/mono/System.Private.CoreLib/src/System/Reflection/MethodInvoker.Mono.cs(33,0): at System.Reflection.MethodInvoker.InterpretedInvoke(Object obj, Span`1 args, BindingFlags invokeAttr)
System.FormatException : The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters.
Stack Trace:
/_/src/libraries/System.Private.CoreLib/src/System/Convert.cs(2906,0): at System.Convert.FromBase64_ComputeResultLength(Char* inputPtr, Int32 inputLength)
/_/src/libraries/System.Private.CoreLib/src/System/Convert.cs(2836,0): at System.Convert.FromBase64CharPtr(Char* inputPtr, Int32 inputLength)
/_/src/libraries/System.Private.CoreLib/src/System/Convert.cs(2643,0): at System.Convert.FromBase64String(String s)
/_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs(373,0): at System.Diagnostics.Tests.ProcessStartInfoTests.<>c.<ExecuteProcessAndReturnParsedOutput>b__9_0(String s)
/_/src/libraries/System.Linq/src/System/Linq/Select.SpeedOpt.cs(83,0): at System.Linq.Enumerable.SelectArrayIterator`2[[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e],[System.String, System.Private.CoreLib, Version=7.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].ToArray()
/_/src/libraries/System.Linq/src/System/Linq/ToCollection.cs(17,0): at System.Linq.Enumerable.ToArray[String](IEnumerable`1 source)
/_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs(373,0): at System.Diagnostics.Tests.ProcessStartInfoTests.ExecuteProcessAndReturnParsedOutput(Dictionary`2 envVars, Func`1 processWork)
/_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs(312,0): at System.Diagnostics.Tests.ProcessStartInfoTests.ProcessStartInfoEnvironmentDoesNotThrowForCaseSensitiveDuplicates()
/_/src/mono/System.Private.CoreLib/src/System/Reflection/MethodInvoker.Mono.cs(33,0): at System.Reflection.MethodInvoker.InterpretedInvoke(Object obj, Span`1 args, BindingFlags invokeAttr)
Finished: System.Diagnostics.Process.Tests
=== TEST EXECUTION SUMMARY ===
System.Diagnostics.Process.Tests Total: 319, Errors: 0, Failed: 3, Skipped: 6, Time: 51.155s
Std err:
System.Diagnostics.Tests.ProcessStartInfoTests.EnvironmentGetEnvironmentVariablesIsCaseSensitive [FAIL]
System.Diagnostics.Tests.ProcessStartInfoTests.ProcessStartInfoEnvironmentVariablesDoesNotThrowForCaseSensitiveDuplicates [FAIL]
System.Diagnostics.Tests.ProcessStartInfoTests.ProcessStartInfoEnvironmentDoesNotThrowForCaseSensitiveDuplicates [FAIL]
```
</details>
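For context, the stack traces show `ExecuteProcessAndReturnParsedOutput` mapping each line of the child process's output through `Convert.FromBase64String`, so a single stray line in stdout that is not valid Base-64 is enough to throw. The sketch below (illustrative TypeScript, not the actual C# test; the `decodeStrict` helper and its regex are assumptions) shows that strict-decoding behavior:

```typescript
// Minimal sketch of strict Base-64 line parsing: any line containing a
// character outside the Base-64 alphabet, or with bad length/padding, is
// rejected -- analogous to the FormatException thrown by
// System.Convert.FromBase64String in the failing tests.
const BASE64 = /^[A-Za-z0-9+/]*={0,2}$/;

function decodeStrict(line: string): string | null {
  // Base-64 text comes in 4-character groups; '=' may only pad the end.
  if (line.length % 4 !== 0 || !BASE64.test(line)) return null;
  return Buffer.from(line, "base64").toString("utf8");
}
```

Under this reading, any non-Base-64 diagnostic output interleaved into the child's stdout on the Bionic queue would explain all three failures at once.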
# Prisma Client

prisma/prisma, opened 2020-05-18, labels: process/candidate

## Bug description
Originally found by @rmatei in https://github.com/prisma/studio/issues/416
## How to reproduce
Use this schema:
```prisma
model Area {
  id        String   @default(cuid()) @id
  name      String
  blocks    Block[]
  score     Float?
  createdAt DateTime @default(now())
  updatedAt DateTime @default(now()) @updatedAt
}

model Block {
  id        String   @default(cuid()) @id
  area      Area     @relation(fields: [areaId], references: [id])
  areaId    String
  createdAt DateTime @default(now())
  updatedAt DateTime @default(now()) @updatedAt
}
```
<details>
<summary>And use this script: </summary>
```ts
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

const main = async () => {
  const area = await prisma.area.create({
    data: {
      name: 'something',
      score: null,
    },
    select: {
      id: true,
      name: true,
      blocks: true,
      score: true,
      createdAt: true,
      updatedAt: true,
    },
  })

  await prisma.block.create({
    data: {
      area: {
        connect: {
          id: area.id,
        },
      },
    },
    select: {
      id: true,
      area: true,
      areaId: true,
      createdAt: true,
      updatedAt: true,
    },
  })
}

main()
  .catch(async (e) => console.log(e))
  .finally(async () => {
    await prisma.disconnect()
  })
```
</details>
<details>
<summary>You should see this error:</summary>
```
PrismaClientUnknownRequestError:
Invalid `prisma.block.create()` invocation in
/Users/siddhant/Code/Tests/experiments/index.ts:22:22
18 createdAt: true,
19 updatedAt: true,
20 },
21 })
→ 22 await prisma.block.create(PANIC: called `Result::unwrap()` on an `Err` value: ErrorMessage { msg: "Could not parse stored DateTime string: ckac9lbau0000eg0xrvs7y535 (input contains invalid characters)" }
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new_unresolved
2: failure::backtrace::internal::InternalBacktrace::new
3: failure::backtrace::Backtrace::new
4: sql_query_connector::row::row_value_to_prisma_value
5: <quaint::connector::result_set::result_row::ResultRow as sql_query_connector::row::ToSqlRow>::to_sql_row
6: <std::future::GenFuture<T> as core::future::future::Future>::poll
7: <std::future::GenFuture<T> as core::future::future::Future>::poll
8: std::future::poll_with_tls_context
9: <std::future::GenFuture<T> as core::future::future::Future>::poll
10: <std::future::GenFuture<T> as core::future::future::Future>::poll
11: <std::future::GenFuture<T> as core::future::future::Future>::poll
12: <std::future::GenFuture<T> as core::future::future::Future>::poll
13: <std::future::GenFuture<T> as core::future::future::Future>::poll
14: <std::future::GenFuture<T> as core::future::future::Future>::poll
15: <std::future::GenFuture<T> as core::future::future::Future>::poll
16: <std::future::GenFuture<T> as core::future::future::Future>::poll
17: <std::future::GenFuture<T> as core::future::future::Future>::poll
18: <std::future::GenFuture<T> as core::future::future::Future>::poll
19: std::future::poll_with_tls_context
20: <std::future::GenFuture<T> as core::future::future::Future>::poll
21: <std::future::GenFuture<T> as core::future::future::Future>::poll
22: std::panicking::try::do_call
23: __rust_maybe_catch_panic
24: <std::future::GenFuture<T> as core::future::future::Future>::poll
25: <std::future::GenFuture<T> as core::future::future::Future>::poll
26: <std::future::GenFuture<T> as core::future::future::Future>::poll
27: hyper::proto::h1::dispatch::Dispatcher<D,Bs,I,T>::poll_catch
28: <hyper::server::conn::upgrades::UpgradeableConnection<I,S,E> as core::future::future::Future>::poll
29: <hyper::server::conn::spawn_all::NewSvcTask<I,N,S,E,W> as core::future::future::Future>::poll
30: tokio::task::core::Core<T>::poll
31: std::panicking::try::do_call
32: __rust_maybe_catch_panic
33: tokio::task::harness::Harness<T,S>::poll
34: tokio::runtime::thread_pool::worker::GenerationGuard::run_task
35: tokio::runtime::thread_pool::worker::GenerationGuard::run
36: std::thread::local::LocalKey<T>::with
37: tokio::runtime::thread_pool::worker::Worker::run
38: tokio::task::core::Core<T>::poll
39: std::panicking::try::do_call
40: __rust_maybe_catch_panic
41: tokio::task::harness::Harness<T,S>::poll
42: tokio::runtime::blocking::pool::Inner::run
43: tokio::runtime::context::enter
44: std::sys_common::backtrace::__rust_begin_short_backtrace
45: std::panicking::try::do_call
46: __rust_maybe_catch_panic
47: core::ops::function::FnOnce::call_once{{vtable.shim}}
48: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
49: std::sys::unix::thread::Thread::new::thread_start
50: _ZL12preoptimized
at PrismaClientFetcher.message (/Users/siddhant/Code/Tests/experiments/node_modules/@prisma/client/src/runtime/getPrismaClient.ts:649:46)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
Error in Prisma Client:
PANIC: called `Result::unwrap()` on an `Err` value: ErrorMessage { msg: "Could not parse stored DateTime string: ckac9lbau0000eg0xrvs7y535 (input contains invalid characters)" }
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new_unresolved
2: failure::backtrace::internal::InternalBacktrace::new
3: failure::backtrace::Backtrace::new
4: sql_query_connector::row::row_value_to_prisma_value
5: <quaint::connector::result_set::result_row::ResultRow as sql_query_connector::row::ToSqlRow>::to_sql_row
6: <std::future::GenFuture<T> as core::future::future::Future>::poll
7: <std::future::GenFuture<T> as core::future::future::Future>::poll
8: std::future::poll_with_tls_context
9: <std::future::GenFuture<T> as core::future::future::Future>::poll
10: <std::future::GenFuture<T> as core::future::future::Future>::poll
11: <std::future::GenFuture<T> as core::future::future::Future>::poll
12: <std::future::GenFuture<T> as core::future::future::Future>::poll
13: <std::future::GenFuture<T> as core::future::future::Future>::poll
14: <std::future::GenFuture<T> as core::future::future::Future>::poll
15: <std::future::GenFuture<T> as core::future::future::Future>::poll
16: <std::future::GenFuture<T> as core::future::future::Future>::poll
17: <std::future::GenFuture<T> as core::future::future::Future>::poll
18: <std::future::GenFuture<T> as core::future::future::Future>::poll
19: std::future::poll_with_tls_context
20: <std::future::GenFuture<T> as core::future::future::Future>::poll
21: <std::future::GenFuture<T> as core::future::future::Future>::poll
22: std::panicking::try::do_call
23: __rust_maybe_catch_panic
24: <std::future::GenFuture<T> as core::future::future::Future>::poll
25: <std::future::GenFuture<T> as core::future::future::Future>::poll
26: <std::future::GenFuture<T> as core::future::future::Future>::poll
27: hyper::proto::h1::dispatch::Dispatcher<D,Bs,I,T>::poll_catch
28: <hyper::server::conn::upgrades::UpgradeableConnection<I,S,E> as core::future::future::Future>::poll
29: <hyper::server::conn::spawn_all::NewSvcTask<I,N,S,E,W> as core::future::future::Future>::poll
30: tokio::task::core::Core<T>::poll
31: std::panicking::try::do_call
32: __rust_maybe_catch_panic
33: tokio::task::harness::Harness<T,S>::poll
34: tokio::runtime::thread_pool::worker::GenerationGuard::run_task
35: tokio::runtime::thread_pool::worker::GenerationGuard::run
36: std::thread::local::LocalKey<T>::with
37: tokio::runtime::thread_pool::worker::Worker::run
38: tokio::task::core::Core<T>::poll
39: std::panicking::try::do_call
40: __rust_maybe_catch_panic
41: tokio::task::harness::Harness<T,S>::poll
42: tokio::runtime::blocking::pool::Inner::run
43: tokio::runtime::context::enter
44: std::sys_common::backtrace::__rust_begin_short_backtrace
45: std::panicking::try::do_call
46: __rust_maybe_catch_panic
47: core::ops::function::FnOnce::call_once{{vtable.shim}}
48: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
49: std::sys::unix::thread::Thread::new::thread_start
50: _ZL12preoptimized
in
query-engine/connectors/sql-query-connector/src/row.rs:119:26
This is a non-recoverable error which probably happens when the Prisma Query Engine has a panic.
Please create an issue in https://github.com/prisma/prisma-client-js describing the last Prisma Client query you called.
```
</details>
This seems similar to #2442, in that the problem appears to be with the `select` argument of the `create` query. If I don't pass `select` at all, the script does not crash.
This time though, the query engine (QE) seems to be mixing up DateTimes and IDs: the panic message shows it trying to parse the cuid `ckac9lbau0000eg0xrvs7y535` as a stored DateTime.
If you think this is the same issue, please feel free to close this and add it as a comment to #2442
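Since the panic shows a cuid arriving where `row_value_to_prisma_value` expects a DateTime, the failure looks like a positional column mix-up when converting the returned row. A purely illustrative TypeScript sketch of that suspected failure mode (the real conversion lives in Rust, in `sql-query-connector/src/row.rs`; the `readDateTime` helper and the row layout here are assumptions):

```typescript
// Row values are converted by position. If the column index is misaligned,
// the id (a cuid) lands in the slot where a DateTime is expected, and the
// conversion fails with a parse error much like the one in the panic above.
type Row = string[];

function readDateTime(row: Row, index: number): Date {
  const value = row[index];
  const parsed = new Date(value);
  if (Number.isNaN(parsed.getTime())) {
    throw new Error(`Could not parse stored DateTime string: ${value}`);
  }
  return parsed;
}
```

With a row laid out as `[id, createdAt]`, reading index 1 succeeds while reading index 0 reproduces a "Could not parse stored DateTime string: ckac…" style error, which matches what the engine reports.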
## Expected behavior
1. The script should not crash
2. The arguments I pass in to `select` (or whether I use it at all) should not dictate whether the script crashes.
## Environment & setup
- OS: macOS
- Database: Postgres
- Prisma version:
```
@prisma/cli : 2.0.0-beta.5
Current platform : darwin
Query Engine : query-engine 0c2ec197653b278b2978845ef958db88824cd82e
Migration Engine : migration-engine-cli 0c2ec197653b278b2978845ef958db88824cd82e
Introspection Engine : introspection-core 0c2ec197653b278b2978845ef958db88824cd82e
Format Binary : prisma-fmt 0c2ec197653b278b2978845ef958db88824cd82e
```
- Node.js version: v12.16.3
| 1.0 | Prisma Client - ## Bug description
Originally found by @rmatei in https://github.com/prisma/studio/issues/416
## How to reproduce
Use this schema:
```
model Area {
id String @default(cuid()) @id
name String
blocks Block[]
score Float?
createdAt DateTime @default(now())
updatedAt DateTime @default(now()) @updatedAt
}
model Block {
id String @default(cuid()) @id
area Area @relation(fields: [areaId], references: [id])
areaId String
createdAt DateTime @default(now())
updatedAt DateTime @default(now()) @updatedAt
}
```
<details>
<summary>And use this script: </summary>
```
import { PrismaClient } from '@prisma/client'
const prisma = new PrismaClient()
const main = async () => {
const area = await prisma.area.create({
data: {
name: 'something',
score: null,
},
select: {
id: true,
name: true,
blocks: true,
score: true,
createdAt: true,
updatedAt: true,
},
})
await prisma.block.create({
data: {
area: {
connect: {
id: area.id,
},
},
},
select: {
id: true,
area: true,
areaId: true,
createdAt: true,
updatedAt: true,
},
})
}
main()
.catch(async (e) => console.log(e))
.finally(async () => {
await prisma.disconnect()
})
```
</details>
<details>
<summary>You should see this error:</summary>
```
PrismaClientUnknownRequestError:
Invalid `prisma.block.create()` invocation in
/Users/siddhant/Code/Tests/experiments/index.ts:22:22
18 createdAt: true,
19 updatedAt: true,
20 },
21 })
→ 22 await prisma.block.create(PANIC: called `Result::unwrap()` on an `Err` value: ErrorMessage { msg: "Could not parse stored DateTime string: ckac9lbau0000eg0xrvs7y535 (input contains invalid characters)" }
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new_unresolved
2: failure::backtrace::internal::InternalBacktrace::new
3: failure::backtrace::Backtrace::new
4: sql_query_connector::row::row_value_to_prisma_value
5: <quaint::connector::result_set::result_row::ResultRow as sql_query_connector::row::ToSqlRow>::to_sql_row
6: <std::future::GenFuture<T> as core::future::future::Future>::poll
7: <std::future::GenFuture<T> as core::future::future::Future>::poll
8: std::future::poll_with_tls_context
9: <std::future::GenFuture<T> as core::future::future::Future>::poll
10: <std::future::GenFuture<T> as core::future::future::Future>::poll
11: <std::future::GenFuture<T> as core::future::future::Future>::poll
12: <std::future::GenFuture<T> as core::future::future::Future>::poll
13: <std::future::GenFuture<T> as core::future::future::Future>::poll
14: <std::future::GenFuture<T> as core::future::future::Future>::poll
15: <std::future::GenFuture<T> as core::future::future::Future>::poll
16: <std::future::GenFuture<T> as core::future::future::Future>::poll
17: <std::future::GenFuture<T> as core::future::future::Future>::poll
18: <std::future::GenFuture<T> as core::future::future::Future>::poll
19: std::future::poll_with_tls_context
20: <std::future::GenFuture<T> as core::future::future::Future>::poll
21: <std::future::GenFuture<T> as core::future::future::Future>::poll
22: std::panicking::try::do_call
23: __rust_maybe_catch_panic
24: <std::future::GenFuture<T> as core::future::future::Future>::poll
25: <std::future::GenFuture<T> as core::future::future::Future>::poll
26: <std::future::GenFuture<T> as core::future::future::Future>::poll
27: hyper::proto::h1::dispatch::Dispatcher<D,Bs,I,T>::poll_catch
28: <hyper::server::conn::upgrades::UpgradeableConnection<I,S,E> as core::future::future::Future>::poll
29: <hyper::server::conn::spawn_all::NewSvcTask<I,N,S,E,W> as core::future::future::Future>::poll
30: tokio::task::core::Core<T>::poll
31: std::panicking::try::do_call
32: __rust_maybe_catch_panic
33: tokio::task::harness::Harness<T,S>::poll
34: tokio::runtime::thread_pool::worker::GenerationGuard::run_task
35: tokio::runtime::thread_pool::worker::GenerationGuard::run
36: std::thread::local::LocalKey<T>::with
37: tokio::runtime::thread_pool::worker::Worker::run
38: tokio::task::core::Core<T>::poll
39: std::panicking::try::do_call
40: __rust_maybe_catch_panic
41: tokio::task::harness::Harness<T,S>::poll
42: tokio::runtime::blocking::pool::Inner::run
43: tokio::runtime::context::enter
44: std::sys_common::backtrace::__rust_begin_short_backtrace
45: std::panicking::try::do_call
46: __rust_maybe_catch_panic
47: core::ops::function::FnOnce::call_once{{vtable.shim}}
48: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
49: std::sys::unix::thread::Thread::new::thread_start
50: _ZL12preoptimized
at PrismaClientFetcher.message (/Users/siddhant/Code/Tests/experiments/node_modules/@prisma/client/src/runtime/getPrismaClient.ts:649:46)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
Error in Prisma Client:
PANIC: called `Result::unwrap()` on an `Err` value: ErrorMessage { msg: "Could not parse stored DateTime string: ckac9lbau0000eg0xrvs7y535 (input contains invalid characters)" }
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new_unresolved
2: failure::backtrace::internal::InternalBacktrace::new
3: failure::backtrace::Backtrace::new
4: sql_query_connector::row::row_value_to_prisma_value
5: <quaint::connector::result_set::result_row::ResultRow as sql_query_connector::row::ToSqlRow>::to_sql_row
6: <std::future::GenFuture<T> as core::future::future::Future>::poll
7: <std::future::GenFuture<T> as core::future::future::Future>::poll
8: std::future::poll_with_tls_context
9: <std::future::GenFuture<T> as core::future::future::Future>::poll
10: <std::future::GenFuture<T> as core::future::future::Future>::poll
11: <std::future::GenFuture<T> as core::future::future::Future>::poll
12: <std::future::GenFuture<T> as core::future::future::Future>::poll
13: <std::future::GenFuture<T> as core::future::future::Future>::poll
14: <std::future::GenFuture<T> as core::future::future::Future>::poll
15: <std::future::GenFuture<T> as core::future::future::Future>::poll
16: <std::future::GenFuture<T> as core::future::future::Future>::poll
17: <std::future::GenFuture<T> as core::future::future::Future>::poll
18: <std::future::GenFuture<T> as core::future::future::Future>::poll
19: std::future::poll_with_tls_context
20: <std::future::GenFuture<T> as core::future::future::Future>::poll
21: <std::future::GenFuture<T> as core::future::future::Future>::poll
22: std::panicking::try::do_call
23: __rust_maybe_catch_panic
24: <std::future::GenFuture<T> as core::future::future::Future>::poll
25: <std::future::GenFuture<T> as core::future::future::Future>::poll
26: <std::future::GenFuture<T> as core::future::future::Future>::poll
27: hyper::proto::h1::dispatch::Dispatcher<D,Bs,I,T>::poll_catch
28: <hyper::server::conn::upgrades::UpgradeableConnection<I,S,E> as core::future::future::Future>::poll
29: <hyper::server::conn::spawn_all::NewSvcTask<I,N,S,E,W> as core::future::future::Future>::poll
30: tokio::task::core::Core<T>::poll
31: std::panicking::try::do_call
32: __rust_maybe_catch_panic
33: tokio::task::harness::Harness<T,S>::poll
34: tokio::runtime::thread_pool::worker::GenerationGuard::run_task
35: tokio::runtime::thread_pool::worker::GenerationGuard::run
36: std::thread::local::LocalKey<T>::with
37: tokio::runtime::thread_pool::worker::Worker::run
38: tokio::task::core::Core<T>::poll
39: std::panicking::try::do_call
40: __rust_maybe_catch_panic
41: tokio::task::harness::Harness<T,S>::poll
42: tokio::runtime::blocking::pool::Inner::run
43: tokio::runtime::context::enter
44: std::sys_common::backtrace::__rust_begin_short_backtrace
45: std::panicking::try::do_call
46: __rust_maybe_catch_panic
47: core::ops::function::FnOnce::call_once{{vtable.shim}}
48: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
49: std::sys::unix::thread::Thread::new::thread_start
50: _ZL12preoptimized
in
query-engine/connectors/sql-query-connector/src/row.rs:119:26
This is a non-recoverable error which probably happens when the Prisma Query Engine has a panic.
Please create an issue in https://github.com/prisma/prisma-client-js describing the last Prisma Client query you called.
```
</details>
This seems similar to #2442, in that the problem seems to be with the `select` argument of the `create` query. If I don't `select` anything, the script does not crash.
This time though, QE seems to be mixing up DateTimes and IDs.
If you think this is the same issue, please feel free to close this and add this as a comment to #2442
## Expected behavior
1. The script should not crash
2. The arguments I pass in to `select` (or whether I use it at all) should not dictate whether the script crashes.
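Given that the panic comes from `to_sql_row` failing to parse a stored datetime, the "mixing up DateTimes and IDs" hypothesis can be illustrated with a small sketch. This is plain Python, not the Rust engine code, and the column values and types below are made up to mirror the report:

```python
from datetime import datetime

def to_typed_row(values, types):
    """Parse raw column strings according to the expected column types,
    strictly, like the engine's "could not parse stored datetime string"."""
    parsed = []
    for value, type_ in zip(values, types):
        if type_ == "datetime":
            parsed.append(datetime.fromisoformat(value))
        else:
            parsed.append(value)
    return parsed

# Expected types in schema order: id (string), createdAt (datetime)
schema_types = ["string", "datetime"]

# Correct pairing parses fine
ok = to_typed_row(["ckaj39abc", "2020-05-14T10:00:00"], schema_types)

# If the engine pairs values in the wrong order (an id lands in the
# datetime slot), strict parsing raises -- analogous to the panic above
try:
    to_typed_row(["2020-05-14T10:00:00", "ckaj39abc"], schema_types)
except ValueError as e:
    print("parse error:", e)
```

If something like this is happening inside the query engine, changing the `select` list would change which columns get paired with which expected types, which would explain why omitting `select` avoids the crash.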
## Environment & setup
- OS: macOS
- Database: Postgres
- Prisma version:
```
@prisma/cli : 2.0.0-beta.5
Current platform : darwin
Query Engine : query-engine 0c2ec197653b278b2978845ef958db88824cd82e
Migration Engine : migration-engine-cli 0c2ec197653b278b2978845ef958db88824cd82e
Introspection Engine : introspection-core 0c2ec197653b278b2978845ef958db88824cd82e
Format Binary : prisma-fmt 0c2ec197653b278b2978845ef958db88824cd82e
```
- Node.js version: v12.16.3
| process | 1
235,889 | 25,962,072,546 | IssuesEvent | 2022-12-19 01:03:29 | michaeldotson/auth-app | https://api.github.com/repos/michaeldotson/auth-app | opened | CVE-2022-23516 (High) detected in loofah-2.2.3.gem | security vulnerability | ## CVE-2022-23516 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>loofah-2.2.3.gem</b></summary>
<p>Loofah is a general library for manipulating and transforming HTML/XML
documents and fragments. It's built on top of Nokogiri and libxml2, so
it's fast and has a nice API.
Loofah excels at HTML sanitization (XSS prevention). It includes some
nice HTML sanitizers, which are based on HTML5lib's whitelist, so it
most likely won't make your codes less secure. (These statements have
not been evaluated by Netexperts.)
ActiveRecord extensions for sanitization are available in the
[`loofah-activerecord` gem](https://github.com/flavorjones/loofah-activerecord).</p>
<p>Library home page: <a href="https://rubygems.org/gems/loofah-2.2.3.gem">https://rubygems.org/gems/loofah-2.2.3.gem</a></p>
<p>Path to dependency file: /auth-app/Gemfile.lock</p>
<p>Path to vulnerable library: /var/lib/gems/2.3.0/cache/loofah-2.2.3.gem</p>
<p>
Dependency Hierarchy:
- web-console-3.7.0.gem (Root Library)
- actionview-5.2.2.gem
- rails-html-sanitizer-1.0.4.gem
- :x: **loofah-2.2.3.gem** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Loofah is a general library for manipulating and transforming HTML/XML documents and fragments, built on top of Nokogiri. Loofah >= 2.2.0, < 2.19.1 uses recursion for sanitizing CDATA sections, making it susceptible to stack exhaustion and raising a SystemStackError exception. This may lead to a denial of service through CPU resource consumption. This issue is patched in version 2.19.1. Users who are unable to upgrade may be able to mitigate this vulnerability by limiting the length of the strings that are sanitized.
<p>Publish Date: 2022-12-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23516>CVE-2022-23516</a></p>
</p>
</details>
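The mechanism and the suggested mitigation can be sketched generically: a recursive pass over attacker-controlled nesting overflows the call stack, and bounding the input length caps the recursion depth. This is an illustrative Python toy, not loofah's Ruby implementation; the `<x>` wrapper format and the 10,000-character limit are made up:

```python
def strip_nested(text):
    """Naively strip nested <x>...</x> wrappers by recursing -- the same
    shape of recursion the advisory describes for CDATA sections."""
    if text.startswith("<x>") and text.endswith("</x>"):
        return strip_nested(text[3:-4])
    return text

# Short input is fine
assert strip_nested("<x><x>hello</x></x>") == "hello"

# Attacker-controlled deep nesting exhausts the stack
# (SystemStackError in Ruby, RecursionError in Python)
deep = "<x>" * 50_000 + "payload" + "</x>" * 50_000
try:
    strip_nested(deep)
except RecursionError:
    print("stack exhausted -- denial of service")

# Mitigation from the advisory: bound the length of strings you sanitize
MAX_LEN = 10_000

def safe_strip(text):
    if len(text) > MAX_LEN:
        raise ValueError("input too large to sanitize")
    return strip_nested(text)
```

Rejecting (or chunking) oversized input before sanitization is the stop-gap for users who cannot upgrade to 2.19.1.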
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/flavorjones/loofah/security/advisories/GHSA-3x8r-x6xp-q4vm">https://github.com/flavorjones/loofah/security/advisories/GHSA-3x8r-x6xp-q4vm</a></p>
<p>Release Date: 2022-12-14</p>
<p>Fix Resolution: loofah - 2.19.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_process | 0
240,880 | 20,099,339,247 | IssuesEvent | 2022-02-07 00:38:20 | ADA-ANU/Request-Access-Console | https://api.github.com/repos/ADA-ANU/Request-Access-Console | opened | UAT Case 8 - Requester submits request for a dataset and then goes back to request access to a sibling dataset but as part of a new request | User Acceptance Test Case | Pre-requisites: UAT Case 2 has been completed with ANU Poll 35 (for example)
ex. User requests access to ANU Poll 35 (https://dataverse5-test.ada.edu.au/dataset.xhtml?persistentId=doi:10.26193/ZFGFNE)
Steps.
1. User navigates in Dataverse to a 'sibling dataset' of ANU Poll 35 (ex. ANU Poll 2017: Housing https://dataverse5-test.ada.edu.au/dataset.xhtml?persistentId=doi:10.26193/EL5WHN)
2. User selects request access to one or multiple restricted files in ANU Poll 2017
3. [requestaccess tool] request access application opens with the guestbook questions in the request access form
4. The fields are empty
5. The requester enters the guestbook responses (possibly by copying and pasting)
6. The requester scrolls down to the bottom of the request access page
Expected Result for step 6: The ANU Poll 35 dataset (requested before) is greyed out and disabled (so the requester cannot submit a second access request for the same dataset)
7. The requester clicks 'Submit' and a new request access is submitted for ANU Poll 2017 as in UAT Case 2
The point is to verify sibling datasets that have already been requested are greyed out and disabled so the requester can't request twice.
| 1.0 | non_process | 0
4,108 | 2,715,314,417 | IssuesEvent | 2015-04-10 12:15:04 | macaw-movies/macaw-movies | https://api.github.com/repos/macaw-movies/macaw-movies | closed | Orphans slots not triggered when deleting last people/tag | DatabaseManager Design IMPORTANT | since 85011f26a8b1fcff6df16d6ce4dfb32ac15daf6d I realized that the `askForOrphan***Deletion` slots are not triggered anymore...
Is it due to the singleton design or the cascades?
To be investigated. | 1.0 | non_process | 0
9,816 | 12,825,917,395 | IssuesEvent | 2020-07-06 15:41:27 | deepset-ai/haystack | https://api.github.com/repos/deepset-ai/haystack | closed | PDF Support | preprocessing question | Is there any way to directly work with PDF documents?
For now, every time I need to work with PDF files, I have to convert it into a text file and then use it.
```
import PyPDF2

# Extract the text of every page into a list
article = []
with open('data/article/doc.pdf', 'rb') as pdfFileObj:
    pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
    for page in range(pdfReader.numPages):
        article.append(pdfReader.getPage(page).extractText())
print(article)

# Dump the extracted text to a plain-text file so it can be indexed
with open('data/article/doc.txt', 'w') as article_text:
    article_text.writelines(article)

# document_store, doc_dir and clean_wiki_text are defined earlier in my setup
write_documents_to_db(document_store=document_store, document_dir=doc_dir, clean_func=clean_wiki_text, only_empty_db=True)
```
But is there any pre-built support for PDF documents. | 1.0 | process | 1
2,958 | 5,955,837,953 | IssuesEvent | 2017-05-28 11:00:47 | eranhd/Anti-Drug-Jerusalem | https://api.github.com/repos/eranhd/Anti-Drug-Jerusalem | closed | Finish the work on the cold and hot points and save their locations in the database | in process | We need to finish the work related to the hot and cold points along the route | 1.0 | process | 1
641,311 | 20,823,790,961 | IssuesEvent | 2022-03-18 18:10:44 | rathena/rathena | https://api.github.com/repos/rathena/rathena | closed | 4th Class Windhawk can attack normally when riding the wolf? | status:confirmed component:core priority:low mode:renewal type:bug | <!-- NOTE: Anything within these brackets will be hidden on the preview of the Issue. -->
* **rAthena Hash**:
latest
<!-- Please specify the rAthena [GitHub hash](https://help.github.com/articles/autolinked-references-and-urls/#commit-shas) on which you encountered this issue.
How to get your GitHub Hash:
1. cd your/rAthena/directory/
2. git rev-parse --short HEAD
3. Copy the resulting hash.
-->
* **Client Date**: 20211103
<!-- Please specify the client date you used. -->
* **Server Mode**: Re
<!-- Which mode does your server use: Pre-Renewal or Renewal? -->
* **Description of Issue**: Windhawk can attack normally when riding the wolf. Is this the official behaviour?
* Result: <!-- Describe the issue that you experienced in detail. -->
* Expected Result: <!-- Describe what you would expect to happen in detail. -->
* How to Reproduce: <!-- If you have not stated in the description of the result already, please give us a short guide how we can reproduce your issue. --> 4th class Windhawk: use the WUGRIDER skill, then attack a target
* Official Information: <!-- If possible, provide information from official servers (kRO or other sources) which prove that the result is wrong. Please take into account that iRO (especially iRO Wiki) is not always the same as kRO. -->
<!-- * _NOTE: Make sure you quote ``` `@atcommands` ``` just like this so that you do not tag uninvolved GitHub users!_ -->
* **Modifications that may affect results**:
<!-- * Please provide any information that could influence the expected result. -->
<!-- * This can be either configurations you changed, database values you changed, or even external source modifications. -->
| 1.0 | non_process | 0
467,680 | 13,452,608,656 | IssuesEvent | 2020-09-08 22:35:55 | googleapis/java-bigtable | https://api.github.com/repos/googleapis/java-bigtable | closed | RowCells are not actually serializeable | api: bigtable priority: p2 type: bug | RowCells might contain UnmodifiableLazyStringList for values, which are not actually serializable.
We need to add custom hooks to RowCell to fix serialization.
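The fix itself has to live in the Java class (for example `writeObject`/`readObject` hooks that copy the lazy list into a plain serializable one before writing). As a language-neutral sketch of what "custom hooks" means here, shown with Python's pickle protocol and a made-up stand-in for the lazy list, not the actual java-bigtable code:

```python
import pickle

class LazyStringList:
    """Stand-in for protobuf's UnmodifiableLazyStringList: it carries a
    lazy producer (a local lambda here), so pickling it directly fails."""
    def __init__(self, items):
        items = list(items)
        self._produce = lambda: list(items)  # local functions cannot be pickled
    def to_list(self):
        return self._produce()

class RowCell:
    """Simplified cell whose labels are backed by the lazy list."""
    def __init__(self, labels):
        self.labels = LazyStringList(labels)

    # Custom serialization hook: copy the lazy list into a plain,
    # serializable list on the way out ...
    def __getstate__(self):
        return {"labels": self.labels.to_list()}

    # ... and rebuild the lazy wrapper on the way back in.
    def __setstate__(self, state):
        self.labels = LazyStringList(state["labels"])

cell = RowCell(["label-a", "label-b"])
restored = pickle.loads(pickle.dumps(cell))
assert restored.labels.to_list() == ["label-a", "label-b"]
```

The Java equivalent would materialize `labels` into an `ArrayList<String>` inside the custom write hook, so that the non-serializable lazy list never reaches the serialization stream.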
The primary issue is that labels are a UnmodifiableLazyStringList, which is not serializable. | 1.0 | non_process | 0
8,067 | 11,244,094,156 | IssuesEvent | 2020-01-10 05:55:35 | towavephone/GatsbyBlog | https://api.github.com/repos/towavephone/GatsbyBlog | opened | The CSS World's Powerful Text-Processing Capabilities | /CSS-world-text-processing/ Gitalk | /CSS-world-text-processing/ line-height's other friend: font-size. Chapter 5 introduced the close relationship between line-height and vertical-align; in fact, font-size is also a good friend of line-height, just as ubiquitous, and on paper line-height… | 1.0 | process | 1
16,657 | 21,726,709,025 | IssuesEvent | 2022-05-11 08:20:42 | 2i2c-org/infrastructure | https://api.github.com/repos/2i2c-org/infrastructure | closed | Create a hub operation and support workflow | type: enhancement :label: team-process | # Summary
In https://github.com/2i2c-org/team-compass/pull/73 we are adding some new team workflow structure so that we can better keep track of our priorities and daily tasks.
However, this workflow is focused more on "development" of new things than on the operation and maintenance of pre-existing things. There are at least two kinds of "ongoing" things that we'll also need to do:
- Operating / fixing issues in our managed JupyterHubs
- Supporting users that have questions (potentially leading to issues to fix)
We should document how to do this in a way that:
1. Makes it clear how we should prioritize certain kinds of tasks
2. Makes it likely that important tasks will be tackled first
3. Makes it clear who is responsible for what
4. Distributes the "maintenance" things in an equitable fashion across the team
# User Stories
- As a team member I want to
- Know when there are unexpected issues that have come up, and whether I should prioritize them over development work
- Know when I should work on one of these issues, vs. when another team member is tackling them
- Know I should be balancing my "maintenance / operations" and my "new development" time
- As a hub administrator I want to
- Know that an issue I have brought up is being worked on, and where I can check its status
# Acceptance criteria
- A documented process that
- Begins with a user contacting us about an issue
- Results in us documenting that issue somewhere
- This issue is integrated in our team workflow
- It is clear who should work on it, and whether/how to prioritize it
# Important information
- [**Here's a draft proposal for this process**](https://docs.google.com/document/d/17Kj_FbtVMl32TEcfvCp18fF1SEiBjVOhCswdidUytgM/edit?usp=sharing)
# Tasks
- [x] Discuss ideas here: https://github.com/2i2c-org/pilot-hubs/discussions/356
- [x] #494
- [x] #495
- [x] https://github.com/2i2c-org/team-compass/issues/227
- [x] https://github.com/2i2c-org/team-compass/issues/167
- [x] https://github.com/2i2c-org/team-compass/issues/187
- Iterate on this process with a few hubs and collect our experiences
- [x] https://github.com/2i2c-org/pilot-hubs/issues/595
- [ ] https://github.com/2i2c-org/pilot-hubs/issues/306
- [ ] https://github.com/2i2c-org/pilot-hubs/issues/302 | 1.0 | Create a hub operation and support workflow - # Summary
In https://github.com/2i2c-org/team-compass/pull/73 we are adding some new team workflow structure so that we can better keep track of our priorities and daily tasks.
However, this workflow is focused more around "development" of new things, rather than operation and maintenance of pre-existing things. There are at least two kinds of "ongoing" things that we'll also need to do:
- Operating / fixing issues in our managed JupyterHubs
- Supporting users that have questions (potentially leading to issues to fix)
We should document how to do this in a way that:
1. Makes it clear how we should prioritize certain kinds of tasks
2. Makes it likely that important tasks will be tackled first
3. Makes it clear who is responsible for what
4. Distributes the "maintenance" things in an equitable fashion across the team
# User Stories
- As a team member I want to
- Know when there are unexpected issues that have come up, and whether I should prioritize them over development work
- Know when I should work on one of these issues, vs. when another team member is tackling them
- Know I should be balancing my "maintenance / operations" and my "new development" time
- As a hub administrator I want to
- Know that an issue I have brought up is being worked on, and where I can check its status
# Acceptance criteria
- A documented process that
- Begins with a user contacting us about an issue
- Results in us documenting that issue somewhere
- This issue is integrated in our team workflow
- It is clear who should work on it, and whether/how to prioritize it
# Important information
- [**Here's a draft proposal for this process**](https://docs.google.com/document/d/17Kj_FbtVMl32TEcfvCp18fF1SEiBjVOhCswdidUytgM/edit?usp=sharing)
# Tasks
- [x] Discuss ideas here: https://github.com/2i2c-org/pilot-hubs/discussions/356
- [x] #494
- [x] #495
- [x] https://github.com/2i2c-org/team-compass/issues/227
- [x] https://github.com/2i2c-org/team-compass/issues/167
- [x] https://github.com/2i2c-org/team-compass/issues/187
- Iterate on this process with a few hubs and collect our experiences
- [x] https://github.com/2i2c-org/pilot-hubs/issues/595
- [ ] https://github.com/2i2c-org/pilot-hubs/issues/306
- [ ] https://github.com/2i2c-org/pilot-hubs/issues/302 | process | create a hub operation and support workflow summary in we are adding some new team workflow structure so that we can better keep track of our priorities and daily tasks however this workflow is focused more around development of new things rather than operation and maintenance of pre existing things there are at least two kinds of ongoing things that we ll also need to do operating fixing issues in our managed jupyterhubs supporting users that have questions potentially leading to issues to fix we should document how to do this in a way that makes it clear how we should prioritize certain kinds of tasks makes it likely that important tasks will be tackled first makes it clear who is responsible for what distributes the maintenance things in an equitable fashion across the team user stories as a team member i want to know when there are unexpected issues that have come up and whether i should prioritize them over development work know when i should work on one of these issues vs when another team member is tackling them know i should be balancing my maintenance operations and my new development time as a hub administrator i want to know that an issue i have brought up is being worked on and where i can check its status acceptance criteria a documented process that begins with a user contacting us about an issue results in us documenting that issue somewhere this issue is integrated in our team workflow it is clear who should work on it and whether how to prioritize it important information tasks discuss ideas here iterate on this process with a few hubs and collect our experiences | 1 |
185,917 | 15,037,240,018 | IssuesEvent | 2021-02-02 16:07:29 | hoangvvo/benzene | https://api.github.com/repos/hoangvvo/benzene | opened | Add TypeScript docs and fix TypeScript generic and interface errors | bug documentation | `@benzene/core` includes TypeScript generic for typing `TExtra` and `TContext`. We need to add a documentation for it.
Also, it seems that our WebSocket interface does not work as intended. | 1.0 | Add TypeScript docs and fix TypeScript generic and interface errors - `@benzene/core` includes TypeScript generic for typing `TExtra` and `TContext`. We need to add a documentation for it.
Also, it seems that our WebSocket interface does not work as intended. | non_process | add typescript docs and fix typescript generic and interface errors benzene core includes typescript generic for typing textra and tcontext we need to add a documentation for it also it seems that our websocket interface does not work as intended | 0 |
21,137 | 28,106,592,785 | IssuesEvent | 2023-03-31 01:35:17 | polarismesh/polaris | https://api.github.com/repos/polarismesh/polaris | closed | 当对服务起别名时,拉回来的路由规则中的service和namespace仍旧是老的,导致会存在对路由规则处理的错误。 | bug question in processed | 当对服务起别名时,拉回来的路由规则中的service和namespace仍旧是老的。可能会存在对路由规则处理的错误。 | 1.0 | 当对服务起别名时,拉回来的路由规则中的service和namespace仍旧是老的,导致会存在对路由规则处理的错误。 - 当对服务起别名时,拉回来的路由规则中的service和namespace仍旧是老的。可能会存在对路由规则处理的错误。 | process | 当对服务起别名时,拉回来的路由规则中的service和namespace仍旧是老的,导致会存在对路由规则处理的错误。 当对服务起别名时,拉回来的路由规则中的service和namespace仍旧是老的。可能会存在对路由规则处理的错误。 | 1 |
10,111 | 13,044,162,206 | IssuesEvent | 2020-07-29 03:47:30 | tikv/tikv | https://api.github.com/repos/tikv/tikv | closed | UCP: Migrate scalar function `SysDateWithoutFsp` from TiDB | challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor |
## Description
Port the scalar function `SysDateWithoutFsp` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
| 2.0 | UCP: Migrate scalar function `SysDateWithoutFsp` from TiDB -
## Description
Port the scalar function `SysDateWithoutFsp` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
| process | ucp migrate scalar function sysdatewithoutfsp from tidb description port the scalar function sysdatewithoutfsp from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb | 1 |
24,560 | 12,127,923,108 | IssuesEvent | 2020-04-22 19:34:50 | cityofaustin/atd-data-tech | https://api.github.com/repos/cityofaustin/atd-data-tech | opened | VZE | Bug | Table View breaks with invalid date | Impact: 3-Minor Need: 2-Should Have Product: Vision Zero Crash Data System Service: Dev Type: Bug Report Workgroup: VZ | `Error! GraphQL error: invalid input syntax for type date: "Invalid date"`
To reproduce, click on a date field and delete until the field is blank. See screengrab:

In order to fix this issues I have to go into the Console > Application > Location Storage and manually delete the `savedCrashesConfig` item for the table that broke and then refresh the tab. | 1.0 | VZE | Bug | Table View breaks with invalid date - `Error! GraphQL error: invalid input syntax for type date: "Invalid date"`
To reproduce, click on a date field and delete until the field is blank. See screengrab:

In order to fix this issues I have to go into the Console > Application > Location Storage and manually delete the `savedCrashesConfig` item for the table that broke and then refresh the tab. | non_process | vze bug table view breaks with invalid date error graphql error invalid input syntax for type date invalid date to reproduce click on a date field and delete until the field is blank see screengrab in order to fix this issues i have to go into the console application location storage and manually delete the savedcrashesconfig item for the table that broke and then refresh the tab | 0 |
356,827 | 25,176,268,136 | IssuesEvent | 2022-11-11 09:32:04 | kangqiao322/pe | https://api.github.com/repos/kangqiao322/pe | opened | Incorrect arrows used in sequence diagrams | type.DocumentationBug severity.Medium | 
As could be seen here, the return arrows from AddCommandParser should be dashed instead of being a straight line.
<!--session: 1668154291493-0efd108c-1b9d-4f24-95ff-abca5c20b8a9-->
<!--Version: Web v3.4.4--> | 1.0 | Incorrect arrows used in sequence diagrams - 
As could be seen here, the return arrows from AddCommandParser should be dashed instead of being a straight line.
<!--session: 1668154291493-0efd108c-1b9d-4f24-95ff-abca5c20b8a9-->
<!--Version: Web v3.4.4--> | non_process | incorrect arrows used in sequence diagrams as could be seen here the return arrows from addcommandparser should be dashed instead of being a straight line | 0 |
252,420 | 21,577,100,096 | IssuesEvent | 2022-05-02 14:46:58 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Test failure: GetAsyncWithRedirect_SetCookieContainer_CorrectCookiesSent | area-System.Net.Http test-run-core | Test type: `System.Net.Http.Functional.Tests.SocketsHttpHandlerTest_Cookies_Http2`
Failures 3/12-9/6 (incl. PRs):
Day | Run | OS
-- | -- | --
7/29 | Official run | Centos.8.Amd64.Open
8/28 | Official run | Debian.10.Amd64.Open
9/4 | Official run | Fedora.34.Amd64.Open
Failure:
```
System.ObjectDisposedException : Cannot access a disposed object.
Object name: 'Http2Connection'.
at System.Net.Http.Http2Connection.TryReserveStream() in /_/src/libraries/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/Http2Connection.cs:line 301
at System.Net.Http.HttpConnectionPool.GetHttp2ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) in /_/src/libraries/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs:line 746
at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken) in /_/src/libraries/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs:line 971
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) in /_/src/libraries/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/RedirectHandler.cs:line 81
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken) in /_/src/libraries/System.Net.Http/src/System/Net/Http/HttpClient.cs:line 553
at System.Net.Http.Functional.Tests.HttpClientHandlerTest_Cookies.<GetAsyncWithRedirect_SetCookieContainer_CorrectCookiesSent>b__17_0(Uri url) in /_/src/libraries/Common/tests/System/Net/Http/HttpClientHandlerTest.Cookies.cs:line 324
at System.Threading.Tasks.TaskTimeoutExtensions.GetRealException(Task task) in /_/src/libraries/Common/tests/System/Threading/Tasks/TaskTimeoutExtensions.cs:line 128
--- End of stack trace from previous location ---
at System.Threading.Tasks.TaskTimeoutExtensions.WhenAllOrAnyFailed(Task[] tasks) in /_/src/libraries/Common/tests/System/Threading/Tasks/TaskTimeoutExtensions.cs:line 90
at System.Net.Test.Common.LoopbackServerFactory.<>c__DisplayClass5_0.<<CreateClientAndServerAsync>b__0>d.MoveNext() in /_/src/libraries/Common/tests/System/Net/Http/GenericLoopbackServer.cs:line 39
--- End of stack trace from previous location ---
at System.Net.Test.Common.Http2LoopbackServerFactory.CreateServerAsync(Func`3 funcAsync, Int32 millisecondsTimeout, GenericLoopbackOptions options) in /_/src/libraries/Common/tests/System/Net/Http/Http2LoopbackServer.cs:line 231
at System.Net.Http.Functional.Tests.HttpClientHandlerTest_Cookies.GetAsyncWithRedirect_SetCookieContainer_CorrectCookiesSent() in /_/src/libraries/Common/tests/System/Net/Http/HttpClientHandlerTest.Cookies.cs:line 309
--- End of stack trace from previous location ---
``` | 1.0 | Test failure: GetAsyncWithRedirect_SetCookieContainer_CorrectCookiesSent - Test type: `System.Net.Http.Functional.Tests.SocketsHttpHandlerTest_Cookies_Http2`
Failures 3/12-9/6 (incl. PRs):
Day | Run | OS
-- | -- | --
7/29 | Official run | Centos.8.Amd64.Open
8/28 | Official run | Debian.10.Amd64.Open
9/4 | Official run | Fedora.34.Amd64.Open
Failure:
```
System.ObjectDisposedException : Cannot access a disposed object.
Object name: 'Http2Connection'.
at System.Net.Http.Http2Connection.TryReserveStream() in /_/src/libraries/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/Http2Connection.cs:line 301
at System.Net.Http.HttpConnectionPool.GetHttp2ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) in /_/src/libraries/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs:line 746
at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken) in /_/src/libraries/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs:line 971
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) in /_/src/libraries/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/RedirectHandler.cs:line 81
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken) in /_/src/libraries/System.Net.Http/src/System/Net/Http/HttpClient.cs:line 553
at System.Net.Http.Functional.Tests.HttpClientHandlerTest_Cookies.<GetAsyncWithRedirect_SetCookieContainer_CorrectCookiesSent>b__17_0(Uri url) in /_/src/libraries/Common/tests/System/Net/Http/HttpClientHandlerTest.Cookies.cs:line 324
at System.Threading.Tasks.TaskTimeoutExtensions.GetRealException(Task task) in /_/src/libraries/Common/tests/System/Threading/Tasks/TaskTimeoutExtensions.cs:line 128
--- End of stack trace from previous location ---
at System.Threading.Tasks.TaskTimeoutExtensions.WhenAllOrAnyFailed(Task[] tasks) in /_/src/libraries/Common/tests/System/Threading/Tasks/TaskTimeoutExtensions.cs:line 90
at System.Net.Test.Common.LoopbackServerFactory.<>c__DisplayClass5_0.<<CreateClientAndServerAsync>b__0>d.MoveNext() in /_/src/libraries/Common/tests/System/Net/Http/GenericLoopbackServer.cs:line 39
--- End of stack trace from previous location ---
at System.Net.Test.Common.Http2LoopbackServerFactory.CreateServerAsync(Func`3 funcAsync, Int32 millisecondsTimeout, GenericLoopbackOptions options) in /_/src/libraries/Common/tests/System/Net/Http/Http2LoopbackServer.cs:line 231
at System.Net.Http.Functional.Tests.HttpClientHandlerTest_Cookies.GetAsyncWithRedirect_SetCookieContainer_CorrectCookiesSent() in /_/src/libraries/Common/tests/System/Net/Http/HttpClientHandlerTest.Cookies.cs:line 309
--- End of stack trace from previous location ---
``` | non_process | test failure getasyncwithredirect setcookiecontainer correctcookiessent test type system net http functional tests socketshttphandlertest cookies failures incl prs day run os official run centos open official run debian open official run fedora open failure system objectdisposedexception cannot access a disposed object object name at system net http tryreservestream in src libraries system net http src system net http socketshttphandler cs line at system net http httpconnectionpool httprequestmessage request boolean async cancellationtoken cancellationtoken in src libraries system net http src system net http socketshttphandler httpconnectionpool cs line at system net http httpconnectionpool sendwithversiondetectionandretryasync httprequestmessage request boolean async boolean dorequestauth cancellationtoken cancellationtoken in src libraries system net http src system net http socketshttphandler httpconnectionpool cs line at system net http redirecthandler sendasync httprequestmessage request boolean async cancellationtoken cancellationtoken in src libraries system net http src system net http socketshttphandler redirecthandler cs line at system net http httpclient g core httprequestmessage request httpcompletionoption completionoption cancellationtokensource cts boolean disposects cancellationtokensource pendingrequestscts cancellationtoken originalcancellationtoken in src libraries system net http src system net http httpclient cs line at system net http functional tests httpclienthandlertest cookies b uri url in src libraries common tests system net http httpclienthandlertest cookies cs line at system threading tasks tasktimeoutextensions getrealexception task task in src libraries common tests system threading tasks tasktimeoutextensions cs line end of stack trace from previous location at system threading tasks tasktimeoutextensions whenalloranyfailed task tasks in src libraries common tests system threading tasks tasktimeoutextensions cs 
line at system net test common loopbackserverfactory c b d movenext in src libraries common tests system net http genericloopbackserver cs line end of stack trace from previous location at system net test common createserverasync func funcasync millisecondstimeout genericloopbackoptions options in src libraries common tests system net http cs line at system net http functional tests httpclienthandlertest cookies getasyncwithredirect setcookiecontainer correctcookiessent in src libraries common tests system net http httpclienthandlertest cookies cs line end of stack trace from previous location | 0 |
153,916 | 24,208,954,774 | IssuesEvent | 2022-09-25 16:24:42 | rojo-rbx/rojo | https://api.github.com/repos/rojo-rbx/rojo | opened | Rojo Headless Plugin API Proposal | type: enhancement scope: plugin status: needs design size: large | Exposing a headless API into `_G` will be very valuable to the Rojo ecosystem. Users can make companion plugins, and we can even utilize it ourselves for one-time plugin injections for `rojo open`. See issues #321 and #305.
Those are use cases and requests, and we've established there is demand and utility. Now, this issue is attempting to flesh out an execution of this idea. This is an ongoing discussion, not set in stone. Tell us about your use cases and desires so we can make this as valuable as possible!
**Proposed interface:**
```Lua
_G.Rojo
```
Plugins and command bar all share the same _G which allows us to put our interface in. Using the key `"Rojo"` is easy to remember and follows branding.
**Functions:**
```Lua
Rojo:Connect(host: string?, port: string?): (boolean, string?)
```
Attempts to connect Rojo to the given host & port. Same behavior as the user clicking the Connect button (sync locking, etc, all behave the same). Returns a boolean of whether it connected or not, with a second return of a message if necessary.
```Lua
Rojo:Disconnect(): void
```
Attempts to disconnect any active sync session.
```Lua
Rojo:GetSetting(setting: string): any
```
Returns the value of the given setting.
```Lua
Rojo:SetSetting(setting: string, value: any): void
```
Sets the value of the given setting to the given value.
```Lua
Rojo:Notify(msg: string): void
```
Sends a Rojo notification. Will indicate on it that it comes from a third party plugin.
**Properties:** *(All read-only)*
```Lua
Rojo.Version: {number}
```
The Rojo plugin version. (Example: `{7, 2, 1}`)
```Lua
Rojo.ProtocolVersion: number
```
The Rojo plugin's protocol version.
```Lua
Rojo.Connected: boolean
```
Whether Rojo is currently connected to a serve session. | 1.0 | Rojo Headless Plugin API Proposal - Exposing a headless API into `_G` will be very valuable to the Rojo ecosystem. Users can make companion plugins, and we can even utilize it ourselves for one-time plugin injections for `rojo open`. See issues #321 and #305.
Those are use cases and requests, and we've established there is demand and utility. Now, this issue is attempting to flesh out an execution of this idea. This is an ongoing discussion, not set in stone. Tell us about your use cases and desires so we can make this as valuable as possible!
**Proposed interface:**
```Lua
_G.Rojo
```
Plugins and command bar all share the same _G which allows us to put our interface in. Using the key `"Rojo"` is easy to remember and follows branding.
**Functions:**
```Lua
Rojo:Connect(host: string?, port: string?): (boolean, string?)
```
Attempts to connect Rojo to the given host & port. Same behavior as the user clicking the Connect button (sync locking, etc, all behave the same). Returns a boolean of whether it connected or not, with a second return of a message if necessary.
```Lua
Rojo:Disconnect(): void
```
Attempts to disconnect any active sync session.
```Lua
Rojo:GetSetting(setting: string): any
```
Returns the value of the given setting.
```Lua
Rojo:SetSetting(setting: string, value: any): void
```
Sets the value of the given setting to the given value.
```Lua
Rojo:Notify(msg: string): void
```
Sends a Rojo notification. Will indicate on it that it comes from a third party plugin.
**Properties:** *(All read-only)*
```Lua
Rojo.Version: {number}
```
The Rojo plugin version. (Example: `{7, 2, 1}`)
```Lua
Rojo.ProtocolVersion: number
```
The Rojo plugin's protocol version.
```Lua
Rojo.Connected: boolean
```
Whether Rojo is currently connected to a serve session. | non_process | rojo headless plugin api proposal exposing a headless api into g will be very valuable to the rojo ecosystem users can make companion plugins and we can even utilize it ourselves for one time plugin injections for rojo open see issues and those are use cases and requests and we ve established there is demand and utility now this issue is attempting to flesh out an execution of this idea this is an ongoing discussion not set in stone tell us about your use cases and desires so we can make this as valuable as possible proposed interface lua g rojo plugins and command bar all share the same g which allows us to put our interface in using the key rojo is easy to remember and follows branding functions lua rojo connect host string port string boolean string attempts to connect rojo to the given host port same behavior as the user clicking the connect button sync locking etc all behave the same returns a boolean of whether it connected or not with a second return of a message if necessary lua rojo disconnect void attempts to disconnect any active sync session lua rojo getsetting setting string any returns the value of the given setting lua rojo setsetting setting string value any void sets the value of the given setting to the given value lua rojo notify msg string void sends a rojo notification will indicate on it that it comes from a third party plugin properties all read only lua rojo version number the rojo plugin version example lua rojo protocolversion number the rojo plugin s protocol version lua rojo connected boolean whether rojo is currently connected to a serve session | 0 |
80,123 | 9,981,749,491 | IssuesEvent | 2019-07-10 08:16:41 | greatnewcls/KNLWKKGOTE62XGFSVAZIMVUH | https://api.github.com/repos/greatnewcls/KNLWKKGOTE62XGFSVAZIMVUH | reopened | OhTBA7UorvsXriCHJ1ogOZIn+EBZunm3OMi+INKHrNB/BJJ7Po50QnBrsVd4c5Mtbi6tQ9kJEUaJcKK3MfNaUtRF6bVVKJD7DzW2uteTdaDX+WdDbs230ekWhxUEN2Ay5lviI/jSRMFXSJrojzRjzNRlvpYAD245YbVSjAZnbnI= | design | XOAOyNWqA53110ic+qsENQ73Dk2nvbvC3eehH+hse81R4M1cUQHqdoQNXBz0Qe5PVRhtwGvN9ZK6jEiG/mp/hYurSCH0pBJ4SBEDxbzXcz+TeLv9GyZmkFWRPV0A61e78NaLxsUXJsDOXm8pDHElRQfTVp6cagKb/46Kmts4XTaWU8by9pOW6UHCr6PBsIyIJzBt6Xm03J64a1hWETGNE8X4z31LvsEtRU9ta/JmU92EoCeQihBwwpi7wEIYK3H7WmL6btRaW4Jyt3hhxe+flIrVSZ5nahEiwGFvQFr6/b2oJvLS9gs2CuR1oMdyK8SkHlL5ekDwU06/kJDlnWYXEAlPNqbmASP5g+qUC43TM4UgAcUE3NgDfK3hd9eZXBlq53OXo49VhXT3vpSHyhTmf+tKY57pRXyHDH6aXUp5TUmdA7XMT3Zl2SZ+v3P3OrTGjS1LEbxK0dpZ2t/IChA9E1W+Xp7jbVE/2gORVuD/UYgMtbxRBo+oMIIFvGyukpaKEkpOCaYillmCMibB0g9RDD/X3ESiSj1SLN39qQ8NgZXoYP4aaHRSaOTd/Tba0n3TUmQiTN+O3Yc8WTaucrQfFz5vvDiblN5j50MJ5kAKtlGg8eCIDGvoLTDip6DcUuv2xeME+PRR1E+OpUSbbMTLrBErlyUszJmu5ktTik+H/WF+o2uM/v0lNzPkpuLo5p4dBuScK5RK4msH7+2GDD5eUcI9h85T/U/z5rD8FBrD3a9hvAx3KfS5wY0MQZ0Q7UADxClCkqVW8s3BuIc7E2gdBK90+5fSj+3R0GveizNJqQtkdbogZF3mSbT3YuYQxQyDcOu0373YFmQVQCP76p8Bm53EIjo2mD97ha1ByHFqHL381V7zetHxhTQUc5/B3ZaxVxZWKyN2XvV/jrqkiO+4CYrVSZ5nahEiwGFvQFr6/b2bf7l8bXq/N3SiGV0SuAPY6uXXZ0q5Ko4bFwJaKEp7Sw== | 1.0 | OhTBA7UorvsXriCHJ1ogOZIn+EBZunm3OMi+INKHrNB/BJJ7Po50QnBrsVd4c5Mtbi6tQ9kJEUaJcKK3MfNaUtRF6bVVKJD7DzW2uteTdaDX+WdDbs230ekWhxUEN2Ay5lviI/jSRMFXSJrojzRjzNRlvpYAD245YbVSjAZnbnI= - 
XOAOyNWqA53110ic+qsENQ73Dk2nvbvC3eehH+hse81R4M1cUQHqdoQNXBz0Qe5PVRhtwGvN9ZK6jEiG/mp/hYurSCH0pBJ4SBEDxbzXcz+TeLv9GyZmkFWRPV0A61e78NaLxsUXJsDOXm8pDHElRQfTVp6cagKb/46Kmts4XTaWU8by9pOW6UHCr6PBsIyIJzBt6Xm03J64a1hWETGNE8X4z31LvsEtRU9ta/JmU92EoCeQihBwwpi7wEIYK3H7WmL6btRaW4Jyt3hhxe+flIrVSZ5nahEiwGFvQFr6/b2oJvLS9gs2CuR1oMdyK8SkHlL5ekDwU06/kJDlnWYXEAlPNqbmASP5g+qUC43TM4UgAcUE3NgDfK3hd9eZXBlq53OXo49VhXT3vpSHyhTmf+tKY57pRXyHDH6aXUp5TUmdA7XMT3Zl2SZ+v3P3OrTGjS1LEbxK0dpZ2t/IChA9E1W+Xp7jbVE/2gORVuD/UYgMtbxRBo+oMIIFvGyukpaKEkpOCaYillmCMibB0g9RDD/X3ESiSj1SLN39qQ8NgZXoYP4aaHRSaOTd/Tba0n3TUmQiTN+O3Yc8WTaucrQfFz5vvDiblN5j50MJ5kAKtlGg8eCIDGvoLTDip6DcUuv2xeME+PRR1E+OpUSbbMTLrBErlyUszJmu5ktTik+H/WF+o2uM/v0lNzPkpuLo5p4dBuScK5RK4msH7+2GDD5eUcI9h85T/U/z5rD8FBrD3a9hvAx3KfS5wY0MQZ0Q7UADxClCkqVW8s3BuIc7E2gdBK90+5fSj+3R0GveizNJqQtkdbogZF3mSbT3YuYQxQyDcOu0373YFmQVQCP76p8Bm53EIjo2mD97ha1ByHFqHL381V7zetHxhTQUc5/B3ZaxVxZWKyN2XvV/jrqkiO+4CYrVSZ5nahEiwGFvQFr6/b2bf7l8bXq/N3SiGV0SuAPY6uXXZ0q5Ko4bFwJaKEp7Sw== | non_process | inkhrnb mp uygmtbxrbo h wf u jrqkio | 0 |
13,843 | 10,054,777,823 | IssuesEvent | 2019-07-22 03:14:45 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Microsoft Form Recognizer boundingBox values | Pri3 cognitive-services/svc cxp forms-recognizer/subsvc product-question triaged | We are looking to build on the Forms Recognizer platform and was wondering if someone could explain the boundingBox values. For example, could these values be used to programmatically draw a box around the extracted value to show the user where the value was pulled from? Looks like the first value lines up to the x coordinate but the other values are a mystery. TIA
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 400d903c-36d4-e20b-d2fc-b80b53b00ceb
* Version Independent ID: 210dbaf5-eda5-f50e-8d8e-18b86daff035
* Content: [What is Form Recognizer? - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview)
* Content Source: [articles/cognitive-services/form-recognizer/overview.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/form-recognizer/overview.md)
* Service: **cognitive-services**
* Sub-service: **forms-recognizer**
* GitHub Login: @PatrickFarley
* Microsoft Alias: **pafarley** | 1.0 | Microsoft Form Recognizer boundingBox values - We are looking to build on the Forms Recognizer platform and was wondering if someone could explain the boundingBox values. For example, could these values be used to programmatically draw a box around the extracted value to show the user where the value was pulled from? Looks like the first value lines up to the x coordinate but the other values are a mystery. TIA
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 400d903c-36d4-e20b-d2fc-b80b53b00ceb
* Version Independent ID: 210dbaf5-eda5-f50e-8d8e-18b86daff035
* Content: [What is Form Recognizer? - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview)
* Content Source: [articles/cognitive-services/form-recognizer/overview.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/form-recognizer/overview.md)
* Service: **cognitive-services**
* Sub-service: **forms-recognizer**
* GitHub Login: @PatrickFarley
* Microsoft Alias: **pafarley** | non_process | microsoft form recognizer boundingbox values we are looking to build on the forms recognizer platform and was wondering if someone could explain the boundingbox values for example could these values be used to programmatically draw a box around the extracted value to show the user where the value was pulled from looks like the first value lines up to the x coordinate but the other values are a mystery tia document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service cognitive services sub service forms recognizer github login patrickfarley microsoft alias pafarley | 0 |
204,399 | 15,898,077,988 | IssuesEvent | 2021-04-11 23:49:53 | sympy/sympy | https://api.github.com/repos/sympy/sympy | closed | Pass coverage_doctest.py | Documentation Enhancement Valid imported | bc.. This may have been mentioned before, but I'm creating an issue specific for it.
I think there should soon be documentation in the code for most, if not
all, of the classes and functions, even if it is only a single line. This
can be useful for both the generated API documentation, and through
interactive help (i.e., the help() command in an interactive Python
session). If there is a need for this, I am fine going through all of the
code and documenting everything, or getting a good start on it.
p. Original issue for "#3393":https://github.com/sympy/sympy/issues/3393: "http://code.google.com/p/sympy/issues/detail?id=294":http://code.google.com/p/sympy/issues/detail?id=294
p. Original author: "https://code.google.com/u/inferno1386/":https://code.google.com/u/inferno1386/
p. Referenced issues: "#4686":https://github.com/sympy/sympy/issues/4686
| 1.0 | Pass coverage_doctest.py - bc.. This may have been mentioned before, but I'm creating an issue specific for it.
I think there should soon be documentation in the code for most, if not
all, of the classes and functions, even if it is only a single line. This
can be useful for both the generated API documentation, and through
interactive help (i.e., the help() command in an interactive Python
session). If there is a need for this, I am fine going through all of the
code and documenting everything, or getting a good start on it.
p. Original issue for "#3393":https://github.com/sympy/sympy/issues/3393: "http://code.google.com/p/sympy/issues/detail?id=294":http://code.google.com/p/sympy/issues/detail?id=294
p. Original author: "https://code.google.com/u/inferno1386/":https://code.google.com/u/inferno1386/
p. Referenced issues: "#4686":https://github.com/sympy/sympy/issues/4686
| non_process | pass coverage doctest py bc this may have been mentioned before but i m creating an issue specific for it i think there should soon be documentation in the code for most if not all of the classes and functions even if it is only a single line this can be useful for both the generated api documentation and through interactive help i e the help command in an interactive python session if there is a need for this i am fine going through all of the code and documenting everything or getting a good start on it p original issue for p original author p referenced issues | 0 |
2,273 | 3,369,025,197 | IssuesEvent | 2015-11-23 07:12:38 | owncloud/core | https://api.github.com/repos/owncloud/core | closed | high system load after upgrade to 8.2.1 | bug performance | After upgrading from 8.2.0 to 8.2.1 our server has very high system load, there are many apache processes and sometimes mysql and php as well. The system load is near 100% at all times, and the web interface is very unresponsive.
The upgrade itself finished OK, I just got the usual warnings that some apps had been disabled in the process. After that I re-enabled the apps without problems. None of the apps needed to be updated. A few minutes after that I noticed that the server was slowing down.
See example output from top below:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1253 mysql 20 0 2849936 230904 7600 S 88.4 1.4 958:27.21 mysqld
8894 www-data 20 0 398784 33720 20640 R 85.5 0.2 0:28.72 apache2
8948 www-data 20 0 396608 27040 16544 R 75.2 0.2 0:36.40 apache2
8955 www-data 20 0 397360 27592 16332 R 53.2 0.2 0:06.60 apache2
8971 www-data 20 0 393444 23732 16164 S 46.4 0.1 0:01.45 apache2
8874 www-data 20 0 404804 41064 21732 S 22.1 0.3 1:04.32 apache2
8969 www-data 20 0 393444 23268 15716 S 19.5 0.1 0:01.18 apache2
8952 www-data 20 0 399304 35116 21320 S 10.2 0.2 0:21.51 apache2
I tried disabling all non-standard apps. The web interface is still unresponsive. It takes approx. 10 secs to load "files" page. Whenever I try to load/reload a page, several apache processes appear, each one using up 50-100% CPU. This looks seriously messed up. I am considering going back to 8.2.0 and restoring the sql database to a backup from yesterday. Any ideas what causes this, and how to fix this would be extremely appreciated.
ownCloud version: 8.2.1
approx. 40 active users
Webserver: Apache
Database: MySQL (database dump size: approx. 800 MB)
OS: Linux / Ubuntu 14.04.3 LTS
PHP version: 5.5.*
| True | high system load after upgrade to 8.2.1 - After upgrading from 8.2.0 to 8.2.1 our server has very high system load, there are many apache processes and sometimes mysql and php as well. The system load is near 100% at all times, and the web interface is very unresponsive.
The upgrade itself finished OK, I just got the usual warnings that some apps had been disabled in the process. After that I re-enabled the apps without problems. None of the apps needed to be updated. A few minutes after that I noticed that the server was slowing down.
See example output from top below:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1253 mysql 20 0 2849936 230904 7600 S 88.4 1.4 958:27.21 mysqld
8894 www-data 20 0 398784 33720 20640 R 85.5 0.2 0:28.72 apache2
8948 www-data 20 0 396608 27040 16544 R 75.2 0.2 0:36.40 apache2
8955 www-data 20 0 397360 27592 16332 R 53.2 0.2 0:06.60 apache2
8971 www-data 20 0 393444 23732 16164 S 46.4 0.1 0:01.45 apache2
8874 www-data 20 0 404804 41064 21732 S 22.1 0.3 1:04.32 apache2
8969 www-data 20 0 393444 23268 15716 S 19.5 0.1 0:01.18 apache2
8952 www-data 20 0 399304 35116 21320 S 10.2 0.2 0:21.51 apache2
I tried disabling all non-standard apps. The web interface is still unresponsive. It takes approx. 10 secs to load "files" page. Whenever I try to load/reload a page, several apache processes appear, each one using up 50-100% CPU. This looks seriously messed up. I am considering going back to 8.2.0 and restoring the sql database to a backup from yesterday. Any ideas what causes this, and how to fix this would be extremely appreciated.
ownCloud version: 8.2.1
approx. 40 active users
Webserver: Apache
Database: MySQL (database dump size: approx. 800 MB)
OS: Linux / Ubuntu 14.04.3 LTS
PHP version: 5.5.*
| non_process | high system load after upgrade to after upgrading from to our server has very high system load there are many apache processes and sometimes mysql and php as well the system load is near at all times and the web interface is very unresponsive the upgrade itself finished ok i just got the usual warnings that some apps had been disabled in the process after that i re enabled the apps without problems none of the apps needed to be updated a few minutes after that i noticed that the server was slowing down see example output from top below pid user pr ni virt res shr s cpu mem time command mysql s mysqld www data r www data r www data r www data s www data s www data s www data s i tried disabling all non standard apps the web interface is still unresponsive it takes approx secs to load files page whenever i try to load reload a page several apache processes appear each one using up cpu this looks seriously messed up i am considering going back to and restoring the sql database to a backup from yesterday any ideas what causes this and how to fix this would be extremely appreciated owncloud version approx active users webserver apache database mysql database dump size approx mb os linux ubuntu lts php version | 0 |
38,719 | 19,523,393,276 | IssuesEvent | 2021-12-30 00:09:30 | facebookexperimental/Recoil | https://api.github.com/repos/facebookexperimental/Recoil | closed | double renders with react-beautiful-dnd | performance | I am familiar with https://github.com/facebookexperimental/Recoil/issues/307 rendering components with recoil twice on initialization.
But I am also getting double renders when I use https://github.com/atlassian/react-beautiful-dnd
Here is my test case [1]
https://github.com/jedierikb/recoil-beautiful-dnd
& here is the line you can toggle to switch between `useRecoilState` and `useState`:
https://github.com/jedierikb/recoil-beautiful-dnd/blob/master/src/App.js#L64
In the console you will note there are two renders after every drop operation when using `useRecoilState`.
I am not knowledgeable about what is happening under the hood in either library to possibly trigger the double rendering, but since it is a popular library I thought this issue might be of interest.
--
[1] adapted from one of their provided examples https://codesandbox.io/s/k260nyxq9v | True | double renders with react-beautiful-dnd - I am familiar with https://github.com/facebookexperimental/Recoil/issues/307 rendering components with recoil twice on initialization.
But I am also getting double renders when I use https://github.com/atlassian/react-beautiful-dnd
Here is my test case [1]
https://github.com/jedierikb/recoil-beautiful-dnd
& here is the line you can toggle to switch between `useRecoilState` and `useState`:
https://github.com/jedierikb/recoil-beautiful-dnd/blob/master/src/App.js#L64
In the console you will note there are two renders after every drop operation when using `useRecoilState`.
I am not knowledgeable about what is happening under the hood in either library to possibly trigger the double rendering, but since it is a popular library I thought this issue might be of interest.
--
[1] adapted from one of their provided examples https://codesandbox.io/s/k260nyxq9v | non_process | double renders with react beautiful dnd i am familiar with rendering components with recoil twice on initialization but i am also getting double renders when i use here is my test case here is the line you can toggle to switch between userecoilstate and usestate in the console you will note there are two renders after every drop operation when using userecoilstate i am not knowledgeable about what is happening under the hood in either library to possibly trigger the double rendering but since it is a popular library i thought this issue might be of interest adapted from one of their provided examples | 0 |
726,289 | 24,993,871,453 | IssuesEvent | 2022-11-02 21:27:46 | OpenEnergyDashboard/OED | https://api.github.com/repos/OpenEnergyDashboard/OED | closed | Date ranges and what maps display | t-enhancement p-high-priority | Maps should honor the date ranges selected in the line graphic. This one is only really valuable once the circles are not a simple 4 weeks. Maybe we should consider the average over the time range of the line graph? Maybe some options as in compare for predefined time ranges?
Note the code currently uses the bar value and issue #475 discusses how this is not correct at this time. If it did it similarly to compare it might work better but that also can have issues. | 1.0 | Date ranges and what maps display - Maps should honor the date ranges selected in the line graphic. This one is only really valuable once the circles are not a simple 4 weeks. Maybe we should consider the average over the time range of the line graph? Maybe some options as in compare for predefined time ranges?
Note the code currently uses the bar value and issue #475 discusses how this is not correct at this time. If it did it similarly to compare it might work better but that also can have issues. | non_process | date ranges and what maps display maps should honor the date ranges selected in the line graphic this one is only really valuable once the circles are not a simple weeks maybe we should consider the average over the time range of the line graph maybe some options as in compare for predefined time ranges note the code currently uses the bar value and issue discusses how this is not correct at this time if it did it similarly to compare it might work better but that also can have issues | 0 |
17,313 | 23,137,976,921 | IssuesEvent | 2022-07-28 15:44:06 | scikit-learn/scikit-learn | https://api.github.com/repos/scikit-learn/scikit-learn | closed | Add handle_missing and handle_unknown options to OrdinalEncoder | New Feature module:preprocessing Needs Decision - Close | [category_encoders.ordinal.OrdinalEncoder](http://contrib.scikit-learn.org/category_encoders/ordinal.html) in [scikit-learn-contrib/category_encoders](http://contrib.scikit-learn.org/category_encoders/) has 2 really useful options:
1. `handle_unknown`, options are ‘error’, ‘return_nan’ and ‘value’, defaults to ‘value’, which will impute the category -1.
2. `handle_missing`, options are ‘error’, ‘return_nan’, and ‘value, default to ‘value’, which treat nan as a category at fit time, or -2 at transform time if nan is not a category during fit.
These 2 options are really, really useful for handling real-world data
#### Describe the workflow you want to enable
1. Handle new categories at predict time in OrdinalEncoder (OneHotEncoder already has this opion).
2. Handle NaNs at fit and predict time in OrdinalEncoder
#### Describe your proposed solution
Port the logic for `handle_unknown` and `handle_missing` from `category_encoders.ordinal.OrdinalEncoder`
#### Describe alternatives you've considered, if relevant
Just using scikit-learn-contrib/category_encoders instead
#### Additional context
Every encoder in `scikit-learn-contrib/category_encoders` has the option `handle_unknown` and `handle_missing`, giving users the flexibility to decide how to handle unknown or new values. This consistency in the API makes it really easy to switch between different encoders and try them out in your workflow. | 1.0 | Add handle_missing and handle_unknown options to OrdinalEncoder - [category_encoders.ordinal.OrdinalEncoder](http://contrib.scikit-learn.org/category_encoders/ordinal.html) in [scikit-learn-contrib/category_encoders](http://contrib.scikit-learn.org/category_encoders/) has 2 really useful options:
1. `handle_unknown`, options are ‘error’, ‘return_nan’ and ‘value’, defaults to ‘value’, which will impute the category -1.
2. `handle_missing`, options are ‘error’, ‘return_nan’, and ‘value, default to ‘value’, which treat nan as a category at fit time, or -2 at transform time if nan is not a category during fit.
These 2 options are really, really useful for handling real-world data
#### Describe the workflow you want to enable
1. Handle new categories at predict time in OrdinalEncoder (OneHotEncoder already has this opion).
2. Handle NaNs at fit and predict time in OrdinalEncoder
#### Describe your proposed solution
Port the logic for `handle_unknown` and `handle_missing` from `category_encoders.ordinal.OrdinalEncoder`
#### Describe alternatives you've considered, if relevant
Just using scikit-learn-contrib/category_encoders instead
#### Additional context
Every encoder in `scikit-learn-contrib/category_encoders` has the option `handle_unknown` and `handle_missing`, giving users the flexibility to decide how to handle unknown or new values. This consistency in the API makes it really easy to switch between different encoders and try them out in your workflow. | process | add handle missing and handle unknown options to ordinalencoder in has really useful options handle unknown options are ‘error’ ‘return nan’ and ‘value’ defaults to ‘value’ which will impute the category handle missing options are ‘error’ ‘return nan’ and ‘value default to ‘value’ which treat nan as a category at fit time or at transform time if nan is not a category during fit these options are really really useful for handling real world data describe the workflow you want to enable handle new categories at predict time in ordinalencoder onehotencoder already has this opion handle nans at fit and predict time in ordinalencoder describe your proposed solution port the logic for handle unknown and handle missing from category encoders ordinal ordinalencoder describe alternatives you ve considered if relevant just using scikit learn contrib category encoders instead additional context every encoder in scikit learn contrib category encoders has the option handle unknown and handle missing giving users the flexibility to decide how to handle unknown or new values this consistency in the api makes it really easy to switch between different encoders and try them out in your workflow | 1 |
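The semantics requested above can be sketched in a few lines of plain Python. This is an illustration of the 'value'/'error'/'return_nan' behaviour described in the issue, not the actual category_encoders or scikit-learn implementation; missing values are modelled as `None` for brevity, and the class name is made up:

```python
class TinyOrdinalEncoder:
    """Toy encoder mimicking the handle_unknown / handle_missing options."""

    def __init__(self, handle_unknown="value", handle_missing="value"):
        self.handle_unknown = handle_unknown
        self.handle_missing = handle_missing
        self.mapping_ = {}

    def fit(self, values):
        codes = {}
        for v in values:
            if v is not None and v not in codes:
                codes[v] = len(codes) + 1  # ordinary categories: 1, 2, 3, ...
        if self.handle_missing == "value" and None in values:
            codes[None] = -2  # missing seen during fit becomes its own category
        self.mapping_ = codes
        return self

    def transform(self, values):
        out = []
        for v in values:
            if v in self.mapping_:
                out.append(self.mapping_[v])
            elif v is None:  # missing, and not a category learned at fit time
                if self.handle_missing == "error":
                    raise ValueError("missing value at transform time")
                out.append(float("nan") if self.handle_missing == "return_nan" else -2)
            else:  # category never seen during fit
                if self.handle_unknown == "error":
                    raise ValueError(f"unknown category: {v!r}")
                out.append(float("nan") if self.handle_unknown == "return_nan" else -1)
        return out

enc = TinyOrdinalEncoder().fit(["a", "b", "a"])
print(enc.transform(["a", "b", "c", None]))  # [1, 2, -1, -2]
```

The defaults mirror the issue's description: an unseen category is imputed as -1 and an unseen missing value as -2, while 'error' and 'return_nan' opt into stricter or NaN-propagating behaviour.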
1,486 | 4,059,083,344 | IssuesEvent | 2016-05-25 08:14:53 | e-government-ua/iBP | https://api.github.com/repos/e-government-ua/iBP | closed | Dnipropetrovsk Oblast - Entering information about a land parcel (and changes to it) into the State Land Cadastre | In process of testing in work | Open/create the service for the following cities of Dnipropetrovsk Oblast:
- [ ] Жовті Води
- [x] Марганець
- [ ] Новомосковськ
- [ ] Орджонікідзе
- [ ] Павлоград
- [ ] Першотравенськ
- [ ] Синельникове
- [ ] Тернівка
- [ ] Васильківський р-н
- [ ] Верхньодніпровський р-н
- [ ] Криворізький р-н
- [ ] Криничанський р-н
- [ ] Магдалинівський р-н
- [ ] Межівський р-н
- [ ] Нікопольський р-н
- [ ] Новомосковський р-н
- [ ] П’ятихатський р-н
- [ ] Павлоградський р-н
- [ ] Петропавлівський р-н
- [ ] Покровський р-н
- [ ] Синельниківський р-н
- [ ] Солонянський р-н
- [ ] Томаківський р-н
- [ ] Широківський р-н
- [ ] Юр’ївський р-н
contacts of the responsible persons are in this [file](https://docs.google.com/spreadsheets/d/10epKJ_lkok-hCNzbTkU-7G8GbWGs5mzjgGFWBl-ONPQ/edit#gid=0)
the information cards can be found on the official [site](http://e-services.dp.gov.ua/_layouts/Information/pgServices.aspx)
the question of whether the service is paid still needs to be clarified | 1.0 | Dnipropetrovsk Oblast - Entering information about a land parcel (and changes to it) into the State Land Cadastre - Open/create the service for the following cities of Dnipropetrovsk Oblast:
- [ ] Жовті Води
- [x] Марганець
- [ ] Новомосковськ
- [ ] Орджонікідзе
- [ ] Павлоград
- [ ] Першотравенськ
- [ ] Синельникове
- [ ] Тернівка
- [ ] Васильківський р-н
- [ ] Верхньодніпровський р-н
- [ ] Криворізький р-н
- [ ] Криничанський р-н
- [ ] Магдалинівський р-н
- [ ] Межівський р-н
- [ ] Нікопольський р-н
- [ ] Новомосковський р-н
- [ ] П’ятихатський р-н
- [ ] Павлоградський р-н
- [ ] Петропавлівський р-н
- [ ] Покровський р-н
- [ ] Синельниківський р-н
- [ ] Солонянський р-н
- [ ] Томаківський р-н
- [ ] Широківський р-н
- [ ] Юр’ївський р-н
contacts of the responsible persons are in this [file](https://docs.google.com/spreadsheets/d/10epKJ_lkok-hCNzbTkU-7G8GbWGs5mzjgGFWBl-ONPQ/edit#gid=0)
the information cards can be found on the official [site](http://e-services.dp.gov.ua/_layouts/Information/pgServices.aspx)
the question of whether the service is paid still needs to be clarified | process | dnipropetrovsk oblast entering information about a land parcel and changes to it into the state land cadastre open create the service for the following cities of dnipropetrovsk oblast жовті води марганець новомосковськ орджонікідзе павлоград першотравенськ синельникове тернівка васильківський р н верхньодніпровський р н криворізький р н криничанський р н магдалинівський р н межівський р н нікопольський р н новомосковський р н п’ятихатський р н павлоградський р н петропавлівський р н покровський р н синельниківський р н солонянський р н томаківський р н широківський р н юр’ївський р н contacts of the responsible persons are in this file the information cards can be found on the official site the question of whether the service is paid still needs to be clarified | 1 |
184,010 | 14,267,109,353 | IssuesEvent | 2020-11-20 19:53:07 | AlaskaAirlines/WC-Generator | https://api.github.com/repos/AlaskaAirlines/WC-Generator | opened | generator: visual regression testing | Type: Feature Type: Testing help wanted | ## Is your feature request related to a problem? Please describe.
As features are added to each component, we are doing a healthy standard of unit testing, but we are not testing for visual regressions.
## Describe the solution you'd like
What I would like to see is that when we run tests, a visual regression tool runs at the same time. This tool will stop any commit if there is an unresolved diff.
## Describe alternatives you've considered
In the past I used a tool called backstopJS to do visual regression testing, but I am hearing that Cypress is the new hotness.
| 1.0 | generator: visual regression testing - ## Is your feature request related to a problem? Please describe.
As features are added to each component, we are doing a healthy standard of unit testing, but we are not testing for visual regressions.
## Describe the solution you'd like
What I would like to see is that when we run tests, a visual regression tool runs at the same time. This tool will stop any commit if there is an unresolved diff.
## Describe alternatives you've considered
In the past I used a tool called backstopJS to do visual regression testing, but I am hearing that Cypress is the new hotness.
| non_process | generator visual regression testing is your feature request related to a problem please describe as features are added to each component we are doing a healthy standard of unit testing but we are not testing for visual regressions describe the solution you d like what i would like to see is that when we run tests a visual regression tool runs at the same time this tool will stop any commit if there is an unresolved diff describe alternatives you ve considered in the past i used a tool called backstopjs to do visual regression testing but i am hearing that cypress is the new hotness | 0 |
64,580 | 8,745,072,574 | IssuesEvent | 2018-12-13 00:54:50 | flyve-mdm/web-mdm-dashboard | https://api.github.com/repos/flyve-mdm/web-mdm-dashboard | closed | Documentation on the CI fails | bug ci documentation | Hi, @Naylin15
Documentation on the CI fails:
https://circleci.com/gh/flyve-mdm/web-mdm-dashboard/6112?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
```console
$ yarn jsdoc src -r -d docs -t ./jsdoc_theme
$ /root/flyve/node_modules/.bin/jsdoc src -r -d docs -t ./jsdoc_theme
``` | 1.0 | Documentation on the CI fails - Hi, @Naylin15
Documentation on the CI fails:
https://circleci.com/gh/flyve-mdm/web-mdm-dashboard/6112?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
```console
$ yarn jsdoc src -r -d docs -t ./jsdoc_theme
$ /root/flyve/node_modules/.bin/jsdoc src -r -d docs -t ./jsdoc_theme
``` | non_process | documentation on the ci fails hi documentation on the ci fails console yarn jsdoc src r d docs t jsdoc theme root flyve node modules bin jsdoc src r d docs t jsdoc theme | 0 |
31,713 | 13,618,289,813 | IssuesEvent | 2020-09-23 18:19:30 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | The input requirements for a custom Form Recognizer model are the same as the general Form Recognizer input requirements | Pri2 assigned-to-author cognitive-services/svc doc-idea forms-recognizer/subsvc triaged | The input requirements for a custom model are [here](https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview#custom-model)
and as follows:
```
Form Recognizer works on input documents that meet these requirements:
Format must be JPG, PNG, PDF (text or scanned), or TIFF. Text-embedded PDFs are best because there's no possibility of error in character extraction and location.
If your PDFs are password-locked, you must remove the lock before submitting them.
PDF and TIFF documents must be 200 pages or less, and the total size of the training data set must be 500 pages or less.
For images, dimensions must be between 600 x 100 pixels and 4200 x 4200 pixels.
If scanned from paper documents, forms should be high-quality scans.
Text must use the Latin alphabet (English characters).
For unsupervised learning (without labeled data), data must contain keys and values.
For unsupervised learning (without labeled data), keys must appear above or to the left of the values; they can't appear below or to the right.
Form Recognizer doesn't currently support these types of input data:
Complex tables (nested tables, merged headers or cells, and so on).
Checkboxes or radio buttons.
```
The general input requirements for Form Recognizer are [here](https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/build-training-data-set#general-input-requirements) and as follows:
```
Form Recognizer works on input documents that meet these requirements:
Format must be JPG, PNG, PDF (text or scanned), or TIFF. Text-embedded PDFs are best because there's no possibility of error in character extraction and location.
If your PDFs are password-locked, you must remove the lock before submitting them.
PDF and TIFF documents must be 200 pages or less, and the total size of the training data set must be 500 pages or less.
For images, dimensions must be between 600 x 100 pixels and 4200 x 4200 pixels.
If scanned from paper documents, forms should be high-quality scans.
Text must use the Latin alphabet (English characters).
For unsupervised learning (without labeled data), data must contain keys and values.
For unsupervised learning (without labeled data), keys must appear above or to the left of the values; they can't appear below or to the right.
Form Recognizer doesn't currently support these types of input data:
Complex tables (nested tables, merged headers or cells, and so on).
Checkboxes or radio buttons.
```
They are exactly the same; is this a mistake? If it isn't, then IMO it would be better to simply have something like "At the present time, the input requirements for a custom Form Recognizer model match the general input requirements; you can view them here. (link to general requirements section)" rather than stating them again verbatim and having users look through them for differences or getting confused.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 400d903c-36d4-e20b-d2fc-b80b53b00ceb
* Version Independent ID: 210dbaf5-eda5-f50e-8d8e-18b86daff035
* Content: [What is Form Recognizer? - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview#custom-model)
* Content Source: [articles/cognitive-services/form-recognizer/overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/cognitive-services/form-recognizer/overview.md)
* Service: **cognitive-services**
* Sub-service: **forms-recognizer**
* GitHub Login: @PatrickFarley
* Microsoft Alias: **pafarley** | 1.0 | The input requirements for a custom Form Recognizer model are the same as the general Form Recognizer input requirements - The input requirements for a custom model are [here](https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview#custom-model)
and as follows:
```
Form Recognizer works on input documents that meet these requirements:
Format must be JPG, PNG, PDF (text or scanned), or TIFF. Text-embedded PDFs are best because there's no possibility of error in character extraction and location.
If your PDFs are password-locked, you must remove the lock before submitting them.
PDF and TIFF documents must be 200 pages or less, and the total size of the training data set must be 500 pages or less.
For images, dimensions must be between 600 x 100 pixels and 4200 x 4200 pixels.
If scanned from paper documents, forms should be high-quality scans.
Text must use the Latin alphabet (English characters).
For unsupervised learning (without labeled data), data must contain keys and values.
For unsupervised learning (without labeled data), keys must appear above or to the left of the values; they can't appear below or to the right.
Form Recognizer doesn't currently support these types of input data:
Complex tables (nested tables, merged headers or cells, and so on).
Checkboxes or radio buttons.
```
The general input requirements for Form Recognizer are [here](https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/build-training-data-set#general-input-requirements) and as follows:
```
Form Recognizer works on input documents that meet these requirements:
Format must be JPG, PNG, PDF (text or scanned), or TIFF. Text-embedded PDFs are best because there's no possibility of error in character extraction and location.
If your PDFs are password-locked, you must remove the lock before submitting them.
PDF and TIFF documents must be 200 pages or less, and the total size of the training data set must be 500 pages or less.
For images, dimensions must be between 600 x 100 pixels and 4200 x 4200 pixels.
If scanned from paper documents, forms should be high-quality scans.
Text must use the Latin alphabet (English characters).
For unsupervised learning (without labeled data), data must contain keys and values.
For unsupervised learning (without labeled data), keys must appear above or to the left of the values; they can't appear below or to the right.
Form Recognizer doesn't currently support these types of input data:
Complex tables (nested tables, merged headers or cells, and so on).
Checkboxes or radio buttons.
```
They are exactly the same; is this a mistake? If it isn't, then IMO it would be better to simply have something like "At the present time, the input requirements for a custom Form Recognizer model match the general input requirements; you can view them here. (link to general requirements section)" rather than stating them again verbatim and having users look through them for differences or getting confused.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 400d903c-36d4-e20b-d2fc-b80b53b00ceb
* Version Independent ID: 210dbaf5-eda5-f50e-8d8e-18b86daff035
* Content: [What is Form Recognizer? - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview#custom-model)
* Content Source: [articles/cognitive-services/form-recognizer/overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/cognitive-services/form-recognizer/overview.md)
* Service: **cognitive-services**
* Sub-service: **forms-recognizer**
* GitHub Login: @PatrickFarley
* Microsoft Alias: **pafarley** | non_process | the input requirements for a custom form recognizer model are the same as the general form recognizer input requirements the input requirements for a custom model are and as follows form recognizer works on input documents that meet these requirements format must be jpg png pdf text or scanned or tiff text embedded pdfs are best because there s no possibility of error in character extraction and location if your pdfs are password locked you must remove the lock before submitting them pdf and tiff documents must be pages or less and the total size of the training data set must be pages or less for images dimensions must be between x pixels and x pixels if scanned from paper documents forms should be high quality scans text must use the latin alphabet english characters for unsupervised learning without labeled data data must contain keys and values for unsupervised learning without labeled data keys must appear above or to the left of the values they can t appear below or to the right form recognizer doesn t currently support these types of input data complex tables nested tables merged headers or cells and so on checkboxes or radio buttons the general input requirements for form recognizer are and as follows form recognizer works on input documents that meet these requirements format must be jpg png pdf text or scanned or tiff text embedded pdfs are best because there s no possibility of error in character extraction and location if your pdfs are password locked you must remove the lock before submitting them pdf and tiff documents must be pages or less and the total size of the training data set must be pages or less for images dimensions must be between x pixels and x pixels if scanned from paper documents forms should be high quality scans text must use the latin alphabet english characters for unsupervised learning without labeled data data must contain keys and values for unsupervised learning without labeled 
data keys must appear above or to the left of the values they can t appear below or to the right form recognizer doesn t currently support these types of input data complex tables nested tables merged headers or cells and so on checkboxes or radio buttons there are exactly the same is this a mistake if it isn t then imo simply having something like at the present time the input requirements for a custom form recognizer match the general input requirements you can view them here link to general requirements section rather than stating them again verbatim and user s having to look through them for differences or getting confused document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service cognitive services sub service forms recognizer github login patrickfarley microsoft alias pafarley | 0 |
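The dimension rule quoted twice in the row above ("between 600 x 100 pixels and 4200 x 4200 pixels") reads as a simple range check on width and height. A hypothetical pre-flight helper encoding it (not part of any Azure SDK; the constant and function names are made up for illustration):

```python
# Documented Form Recognizer image limits quoted above: width 600-4200 px,
# height 100-4200 px. Names here are illustrative only.
MIN_W, MAX_W = 600, 4200
MIN_H, MAX_H = 100, 4200

def image_dims_ok(width, height):
    """Return True if an image's dimensions fall inside the documented range."""
    return MIN_W <= width <= MAX_W and MIN_H <= height <= MAX_H

print(image_dims_ok(800, 600))  # True
print(image_dims_ok(500, 50))   # False
```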
727,800 | 25,046,850,948 | IssuesEvent | 2022-11-05 11:13:01 | darktable-org/darktable | https://api.github.com/repos/darktable-org/darktable | closed | enabling/disabling snapshot comparison alters | priority: high scope: UI bug: pending | **Describe the bug/issue**
When I turn snapshot comparison on or off, the modules displayed below the histogram jump to a module, _sharpen_, that I do not use, and do not even have in my presets.
**To Reproduce**
1. Open any image.
2. Take a snapshot.
3. Enable snapshot display.
4. Observe the module list change.
**Expected behavior**
Enabling/disabling snapshots does not change module display.
**Screenshots**
Initial view:

Snapshot created:

Snapshot activated:

Note that the sharpen module is *not* part of my 'core' module group. Switching to 'core' manually:


**Which commit introduced the error**
38fe2d14379e3a5bc6a979d4b2437227e86b262a @TurboGit
**Platform**
* darktable version : 4.1.0+828~g156903b8d
* OS : `Linux 5.15.0-52-generic #58-Ubuntu SMP Thu Oct 13 08:03:55 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux`
* Linux - Distro : Ubuntu 22.04
* Memory, graphics, OpenCL: irrelevant
* Xorg : 1:7.7+23ubuntu2
* Desktop : KDE 5.24.4
* GTK+ : 3.24.33
* gcc : 11.3.0
* cflags : N/A
* CMAKE_BUILD_TYPE : Release
| 1.0 | non_process | 0 |
2,145 | 4,996,646,439 | IssuesEvent | 2016-12-09 14:33:24 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Close and exit event triggering on launching child process | child_process question windows | <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version** : nwjs-v0.18.7-win-x64
* **Platform** : Windows10 64 Bit
* **Subsystem** : node web kit
<!-- Enter your issue details below this comment. -->
I am using the following code to launch the system calculator application, and I need an event that triggers when that calculator application is closed, but the event triggers on launching the application itself.
I have been looking into this for the last 3 days. The same code works fine if I try to open the Notepad application.
const spawn = require('child_process').spawn;
// Launch the system calculator via cmd.exe.
const bat = spawn('cmd.exe', ['/c', 'calc.exe']);
// 'exit' fires when the spawned process itself terminates.
bat.on('exit', (code) => {
  alert(`Child exited with code ${code}`);
});
| 1.0 | process | 1 |
122,608 | 10,227,579,486 | IssuesEvent | 2019-08-16 21:14:54 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | [UI] It takes long time to show the pipeline config if there are many branches | [zube]: To Test area/pipeline team/ui | UI ticket for https://github.com/rancher/rancher/issues/17231
When users Edit pipeline configs in UI, or view/edit pipeline file by yaml, UI talks to the following APIs to fetch the data:
1. `/v3/project/<id>/pipelines/<id>/branches` to get all branches.
2. `/v3/project/<id>/pipelines/<id>/yaml?branch=<name>` to get pipeline file for a specific branch.
3. `/v3/project/<id>/pipelines/<id>/configs?branch=<name>` to get pipeline config for a specific branch. | 1.0 | non_process | 0 |
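The three calls above can be sketched as plain URL builders (the project ID, pipeline ID, and branch name below are placeholders):

```javascript
// Build the three pipeline-related API URLs the UI fetches.
function pipelineUrls(projectId, pipelineId, branch) {
  const base = `/v3/project/${projectId}/pipelines/${pipelineId}`;
  return {
    branches: `${base}/branches`,
    yaml: `${base}/yaml?branch=${encodeURIComponent(branch)}`,
    configs: `${base}/configs?branch=${encodeURIComponent(branch)}`,
  };
}

console.log(pipelineUrls('p1', 'pl1', 'master').branches);
// -> /v3/project/p1/pipelines/pl1/branches
```

With many branches, the slow part is that the yaml/configs calls are made per branch, so the total time grows linearly with the branch count.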
14,554 | 17,672,835,103 | IssuesEvent | 2021-08-23 08:38:01 | qgis/QGIS-Documentation | https://api.github.com/repos/qgis/QGIS-Documentation | closed | Rescale raster algorithm for Processing (Request in QGIS) | Processing Alg 3.16 | ### Request for documentation
From pull request QGIS/qgis#37671
Author: @alexbruy
QGIS version: 3.16
**Rescale raster algorithm for Processing**
### PR Description:
## Description
Add a Rescale raster algorithm that changes a raster's value range while preserving the shape of its histogram. Useful when rasters from different sources should be compared or processed together and their pixel values need to be in the same range. By default the algorithm preserves the NODATA value of the input raster, but there is an option to override it.
Here is histogram of the DEM (value range is 85-243)

And histogram of same DEM rescaled to the range 100-1000

Fixes #26099.
### Commits tagged with [need-docs] or [FEATURE] | 1.0 | process | 1 |
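The rescaling the algorithm performs is, in essence, a linear map from the old range onto the new one, which is what preserves the histogram's shape. A minimal sketch of that map (an illustration, not the QGIS implementation itself):

```javascript
// Linearly map a value from [oldMin, oldMax] onto [newMin, newMax].
// Relative spacing between values is preserved, so the histogram keeps
// its shape -- only its axis is stretched and shifted.
function rescale(value, oldMin, oldMax, newMin, newMax) {
  return newMin + (value - oldMin) * (newMax - newMin) / (oldMax - oldMin);
}

// The DEM example from the description: 85-243 rescaled to 100-1000.
console.log(rescale(85, 85, 243, 100, 1000));  // 100
console.log(rescale(243, 85, 243, 100, 1000)); // 1000
```

NODATA pixels would be skipped (or replaced by the override value) rather than passed through this map.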
50,284 | 6,077,454,139 | IssuesEvent | 2017-06-16 04:02:44 | Kademi/kademi-dev | https://api.github.com/repos/Kademi/kademi-dev | closed | Editing data series record with incorrect profile causes common message | Ready to Test - Dev Ready to Test QA | 
Should be like on creation

BTW, Couldnot locate profile or organisation: vladqas0031 - error in text | 2.0 | non_process | 0 |
7,065 | 10,219,354,458 | IssuesEvent | 2019-08-15 18:21:54 | heim-rs/heim | https://api.github.com/repos/heim-rs/heim | closed | Process disk IO counters for Linux | A-process C-enhancement O-linux | It seems that there is no way to fetch disk IO counters per process in all platforms supported (`psutil` claims that it is possible for Linux, BSD, Windows and AIX), so it could be made via OS-specific traits. | 1.0 | Process disk IO counters for Linux - It seems that there is no way to fetch disk IO counters per process in all platforms supported (`psutil` claims that it is possible for Linux, BSD, Windows and AIX), so it could be made via OS-specific traits. | process | process disk io counters for linux it seems that there is no way to fetch disk io counters per process in all platforms supported psutil claims that it is possible for linux bsd windows and aix so it could be made via os specific traits | 1 |
5,423 | 3,219,122,984 | IssuesEvent | 2015-10-08 07:56:29 | OpenUserJs/OpenUserJS.org | https://api.github.com/repos/OpenUserJs/OpenUserJS.org | closed | Modify `.../scriptStorage.js` in `sendMeta` to trim up .meta.js route text to barebone keys/blocks | CODE enhancement | @sizzlemctwizzle [wrote](/OpenUserJs/OpenUserJS.org/issues/718#issuecomment-135142897):
> Although trimming it down in the future to `@name`, `@namespace`, `@version` should be a goal.
Followup from #718
Let's give some time to migrate any scripts that may need/want to use the JSON route **and** some time to weigh in on this. *(adding needs discussion to this during this waiting time... I'm at +1 __unless otherwise convinced__... and I can presume @sizzlemctwizzle is at +1 by his statement)*
Loosely related to:
* #77 | 1.0 | non_process | 0 |
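Trimming a .meta.js response down to a whitelist of keys could look something like the sketch below (`trimMeta`, the key list, and the regex are illustrative, not the site's actual `scriptStorage.js` code):

```javascript
// Keys to keep in the trimmed .meta.js output, per the discussion above.
const KEEP = ['@name', '@namespace', '@version'];

// Drop every metadata line whose key is not whitelisted; lines without a
// key (the ==UserScript== delimiters) are kept as-is.
function trimMeta(metaJs) {
  return metaJs
    .split('\n')
    .filter(line => {
      const m = line.match(/\/\/\s*(@\S+)/);
      return !m || KEEP.includes(m[1]);
    })
    .join('\n');
}

const meta = [
  '// ==UserScript==',
  '// @name      Example',
  '// @namespace http://example.com',
  '// @version   1.0',
  '// @description Something long',
  '// ==/UserScript==',
].join('\n');
console.log(trimMeta(meta)); // the block minus the @description line
```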
9,118 | 12,195,570,239 | IssuesEvent | 2020-04-29 17:35:39 | pacificclimate/climate-explorer-data-prep | https://api.github.com/repos/pacificclimate/climate-explorer-data-prep | closed | Incorrectly calculated frost free day data | process new data update existing data | Frost free day data (`ffd`) was calculated from frost day (`fdETCCDI`) data via (365 - `fdETCCDI`), which is correct (or at least the approximation we've decided to use for now) for annual data.
Unfortunately, monthly and seasonal data was also calculated this way, and it is _quite_ incorrect. It's not being used for anything at present. Remove or replace before it is. | 1.0 | process | 1 |
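The correct generalisation is to subtract frost days from the number of days actually in the period, rather than always from 365. A minimal sketch:

```javascript
// Frost-free days for a period: total days in the period minus frost days.
// Using 365 for every period is only valid for (non-leap) annual data.
function frostFreeDays(frostDays, daysInPeriod) {
  return daysInPeriod - frostDays;
}

console.log(frostFreeDays(10, 365)); // annual: 355
console.log(frostFreeDays(10, 31));  // January: 21, not 365 - 10 = 355
```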
680,071 | 23,256,498,442 | IssuesEvent | 2022-08-04 09:43:30 | yalla-coop/chiltern-website | https://api.github.com/repos/yalla-coop/chiltern-website | opened | Research tab changes | priority-3 | **Is your feature / client request related to a problem? Please describe.**
A clear and concise description of what the problem is.
1. If we don't upload a report to download, the link to download a report still appears but obviously doesn't go anywhere
2. We can't show the research items in other areas of the site like the stories.
3. We have quite a few research items to add on and it would be nice to be able to break them up into sections
4. We would like to be able to add multiple examples/studies to a research page which each have links to a download report
**Describe the solution you'd like**
1. Please can the link be removed if we don't add a report link
2. Please can we add tags to the research items, like the stories so that they can also pull through to the other areas of the site that the stories pull through to.
3. Is it possible to add different sections on the research tab that the research items could sit underneath?
4. Is it possible to add multiple report links to a page? https://www.chilternmusictherapy.co.uk/insights-researchs/impact-of-music-therapy-in-mainstream-primary-schools So for example on this page, if we added another school example underneath, could we have another link to a report to download?
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Aware there is a lot here so let me know if you want to talk it through!
**Team - do not edit**
@thejoefriel
@fadeomar
@Israa91
| 1.0 | non_process | 0 |
24,709 | 2,672,278,970 | IssuesEvent | 2015-03-24 13:20:55 | FWAJL/FieldWorkAssistantMVC | https://api.github.com/repos/FWAJL/FieldWorkAssistantMVC | closed | Prevent the same PM to log at the same time | priority:very low status:ready to start | When the same PM logs at the same time, it can create inconsistencies in the data. Therefore, we need to prevent that to happen.
Using a log table, we verify that a PM is not currently logged in before authorizing the PM to log in.
Tasks to do:
- create table user_log: check with the lead dev for the table definition before moving the next steps.
- insert a log at every login event to the application (status of log: init)
- check logs to verify the pm trying to log is allowed to do so
- update log status to success if pm was allowed
- update log status to failed if pm wasn't allowed | 1.0 | non_process | 0 |
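The check described in the tasks above could work along these lines (an in-memory sketch; a real implementation would read and write the `user_log` table, and the status handling here is simplified):

```javascript
// pmId -> 'init' | 'success'. Stands in for rows of the user_log table.
const activeLogins = new Map();

// Refuse a login while the same PM already has an open session.
function tryLogin(pmId) {
  const status = activeLogins.get(pmId);
  if (status === 'init' || status === 'success') {
    return false; // PM is already logged in (or mid-login)
  }
  activeLogins.set(pmId, 'init');    // insert a log row at every login event
  activeLogins.set(pmId, 'success'); // update status: PM was allowed
  return true;
}

function logout(pmId) {
  activeLogins.delete(pmId);
}

console.log(tryLogin('pm1')); // true
console.log(tryLogin('pm1')); // false -- same PM, concurrent login refused
logout('pm1');
console.log(tryLogin('pm1')); // true again after logout
```

A database-backed version would also need a timeout or heartbeat so that a crashed session does not lock the PM out forever.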
88,780 | 15,820,470,536 | IssuesEvent | 2021-04-05 19:01:22 | dmyers87/tika | https://api.github.com/repos/dmyers87/tika | opened | CVE-2020-36184 (Medium) detected in jackson-databind-2.9.9.2.jar | security vulnerability | ## CVE-2020-36184 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: tika/tika-parsers/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.2/jackson-databind-2.9.9.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/dmyers87/tika/commit/b0634f6d9bc18cc79f623715d40c9e8ed98924fc">b0634f6d9bc18cc79f623715d40c9e8ed98924fc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.PerUserPoolDataSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36184>CVE-2020-36184</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/commit/567194c53ae91f0a14dc27239afb739b1c10448a">https://github.com/FasterXML/jackson-databind/commit/567194c53ae91f0a14dc27239afb739b1c10448a</a></p>
<p>Release Date: 2020-12-26</p>
<p>Fix Resolution: Replace or update the following files: SubTypeValidator.java, VERSION-2.x</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.9.2","packageFilePaths":["/tika-parsers/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.9.2","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-36184","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.PerUserPoolDataSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36184","cvss2Severity":"medium","cvss2Score":"6.8","extraData":{}}</REMEDIATE> --> | True | non_process | 0 |
14,294 | 17,266,484,626 | IssuesEvent | 2021-07-22 14:21:28 | googleapis/node-gtoken | https://api.github.com/repos/googleapis/node-gtoken | opened | Move away from /v4 token endpoint to https://oauth2.googleapis.com/token | type: process | > Nit: we're trying to centralize the different token exchange endpoints to 'https://oauth2.googleapis.com/token', even though this is a mock, it might be good for new code to just show that one, unless you think it would be confusing to have different endpoints in the code if elsewhere you're using the one you have here.
CC: @silvolu
Refs: https://github.com/googleapis/google-auth-library-nodejs/pull/1207/files#r674094849 | 1.0 | process | 1 |
200,365 | 15,797,903,122 | IssuesEvent | 2021-04-02 17:37:25 | BASIN-3D/basin3d | https://api.github.com/repos/BASIN-3D/basin3d | opened | Setup Github Pages | documentation | Setup Github Pages to host Sphinx documentation.
- Create new branch `gh-pages`
- Generate documentation in `gh-pages`
- Point generated documentation to `gh-pages` and appropriate folder (`/root`?)
- Make sure to create empty .nojekyll file
Look at [NGEET](https://github.com/NGEET/ngt-archive/tree/gh-pages) for reference | 1.0 | non_process | 0 |
1,973 | 4,803,517,037 | IssuesEvent | 2016-11-02 10:23:06 | CERNDocumentServer/cds | https://api.github.com/repos/CERNDocumentServer/cds | closed | deposit: SSE channel endpoint | avc_processing enhancement review | Add a new endpoint on each deposit that corresponds to the SSE channel, where clients should subscribe in order to receive messages about this particular deposit.
| 1.0 | deposit: SSE channel endpoint - Add a new endpoint on each deposit that corresponds to the SSE channel, where clients should subscribe in order to receive messages about this particular deposit.
| process | deposit sse channel endpoint add a new endpoint on each deposit that corresponds to the sse channel where clients should subscribe in order to receive messages about this particular deposit | 1 |
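A deposit-scoped SSE channel like the one described in this record streams plain-text frames to subscribed clients. Here is a minimal Python sketch of the wire format; the event name and payload fields are illustrative assumptions, not the project's actual schema:

```python
import json

def sse_frame(event: str, data: dict) -> str:
    """Render one Server-Sent Events frame: an `event:` line naming the
    message type, a `data:` line carrying a JSON payload, and a blank
    line terminating the frame."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

# One status update a client subscribed to a deposit channel might receive.
frame = sse_frame("deposit_state", {"deposit": "abc123", "state": "processing"})
```

A browser client would typically consume such frames via `EventSource`, dispatching on the `event:` name.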
7,281 | 10,433,144,883 | IssuesEvent | 2019-09-17 12:54:49 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Processing GDAL algorithms does not handle correctly WFS input layers | Bug Processing | Author Name: **Giovanni Manghi** (@gioman)
Original Redmine Issue: [21848](https://issues.qgis.org/issues/21848)
Affected QGIS version: 3.6.1
Redmine category:processing/ogr
Assignee: Giovanni Manghi
---
the command is not created the right way; it must be something along these lines:
ogr2ogr -f PostgreSQL PG:"dbname='database' host=host port=5432 user='xxx' password='xxx'" WFS:"http://serviceaddress?SERVICE=WFS&VERSION=1.0.0" layername -nln schemaname.tablename
| 1.0 | Processing GDAL algorithms does not handle correctly WFS input layers - Author Name: **Giovanni Manghi** (@gioman)
Original Redmine Issue: [21848](https://issues.qgis.org/issues/21848)
Affected QGIS version: 3.6.1
Redmine category:processing/ogr
Assignee: Giovanni Manghi
---
the command is not created the right way; it must be something along these lines:
ogr2ogr -f PostgreSQL PG:"dbname='database' host=host port=5432 user='xxx' password='xxx'" WFS:"http://serviceaddress?SERVICE=WFS&VERSION=1.0.0" layername -nln schemaname.tablename
| process | processing gdal algorithms does not handle correctly wfs input layers author name giovanni manghi gioman original redmine issue affected qgis version redmine category processing ogr assignee giovanni manghi the command is not created the right way it must be something along these lines f postgresql pg dbname database host host port user xxx password xxx wfs layername nln schemaname tablename | 1 |
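As a cross-check on the corrected command shape in the record above, here is a hedged Python sketch that assembles the same argument list; the connection details and names are the placeholders from the report, not real values:

```python
def build_ogr2ogr_args(pg_conn, wfs_url, layer, target_table):
    """Build the ogr2ogr argument list for loading a WFS layer into
    PostgreSQL. `pg_conn` is a libpq-style connection string."""
    return [
        "ogr2ogr",
        "-f", "PostgreSQL",   # output driver
        f"PG:{pg_conn}",      # destination datasource
        f"WFS:{wfs_url}",     # WFS input datasource
        layer,                # source layer name
        "-nln", target_table, # new layer (schema.table) name
    ]

args = build_ogr2ogr_args(
    "dbname='database' host=host port=5432 user='xxx' password='xxx'",
    "http://serviceaddress?SERVICE=WFS&VERSION=1.0.0",
    "layername",
    "schemaname.tablename",
)
```

Passing such a list to a process runner avoids shell-quoting pitfalls around the `PG:` and `WFS:` datasource strings.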
18,493 | 24,550,963,679 | IssuesEvent | 2022-10-12 12:35:02 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [iOS] [Offline indicator] Enrollment flow > Offline error message should get displayed when participant clicks on the 'Next' button on the signature screen | Bug P1 iOS Process: Fixed Process: Tested dev | Steps:
1. Signup or sign in to the app
2. CLick on the Study
3. Turn off the internet
4. Complete all the steps in the enrollment flow
5. On the signature screen, click on Next and observe
AR: Screen is continuously loading
ER: Offline error message should get displayed when participants click on the Next button present on the Signature screen

| 2.0 | [iOS] [Offline indicator] Enrollment flow > Offline error message should get displayed when participant clicks on the 'Next' button on the signature screen - Steps:
1. Signup or sign in to the app
2. CLick on the Study
3. Turn off the internet
4. Complete all the steps in the enrollment flow
5. On the signature screen, click on Next and observe
AR: Screen is continuously loading
ER: Offline error message should get displayed when participants click on the Next button present on the Signature screen

| process | enrollment flow offline error message should get displayed when participant clicks on the next button on the signature screen steps signup or sign in to the app click on the study turn off the internet complete all the steps in the enrollment flow on the signature screen click on next and observe ar screen is continuously loading er offline error message should get displayed when participants click on the next button present on the signature screen | 1 |
1,625 | 4,238,602,261 | IssuesEvent | 2016-07-06 05:04:28 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | doc: invalid process.hrtime documentation | doc process | The current documentation for `process.hrtime()` does not include information about the optional arguments. See https://nodejs.org/dist/latest-v6.x/docs/api/process.html#process_process_hrtime
| 1.0 | doc: invalid process.hrtime documentation - The current documentation for `process.hrtime()` does not include information about the optional arguments. See https://nodejs.org/dist/latest-v6.x/docs/api/process.html#process_process_hrtime
| process | doc invalid process hrtime documentation the current documentation for process hrtime does not include information about the optional arguments see | 1 |
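For context on the record above: `process.hrtime([prev])` returns a `[seconds, nanoseconds]` pair from a monotonic clock, and the optional `prev` argument (the part the docs were missing) makes it return the elapsed time since that earlier reading. A rough Python analogue of that contract:

```python
import time

NS_PER_SEC = 1_000_000_000

def hrtime(prev=None):
    """Rough analogue of Node's process.hrtime([prev]): a monotonic
    [seconds, nanoseconds] pair, or the delta since `prev` when given."""
    ns = time.monotonic_ns()
    if prev is not None:
        ns -= prev[0] * NS_PER_SEC + prev[1]
    return [ns // NS_PER_SEC, ns % NS_PER_SEC]

start = hrtime()
elapsed = hrtime(start)  # time since `start`, like process.hrtime(start)
```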
46,935 | 5,841,872,156 | IssuesEvent | 2017-05-10 03:00:40 | easydigitaldownloads/easy-digital-downloads | https://api.github.com/repos/easydigitaldownloads/easy-digital-downloads | closed | Complete purchase button text is not flexible with translations | Enhancement Has PR Needs Testing | ```
function edd_get_checkout_button_purchase_label() {
$label = edd_get_option( 'checkout_label', '' );
if ( edd_get_cart_total() ) {
$complete_purchase = ! empty( $label ) ? $label : __( 'Purchase', 'easy-digital-downloads' );
} else {
$complete_purchase = ! empty( $label ) ? $label : __( 'Free Download', 'easy-digital-downloads' );
}
return apply_filters( 'edd_get_checkout_button_purchase_label', $complete_purchase, $label );
}
```
Basically since we allow both of the strings `Purchase` and `Free Download` to be translated, if the user as entered anything in the `Complete purchase button text` it will always use that, so translations for `Free download` and `Purchase` are completely ignored, even though they are very different cases. | 1.0 | Complete purchase button text is not flexible with translations - ```
function edd_get_checkout_button_purchase_label() {
$label = edd_get_option( 'checkout_label', '' );
if ( edd_get_cart_total() ) {
$complete_purchase = ! empty( $label ) ? $label : __( 'Purchase', 'easy-digital-downloads' );
} else {
$complete_purchase = ! empty( $label ) ? $label : __( 'Free Download', 'easy-digital-downloads' );
}
return apply_filters( 'edd_get_checkout_button_purchase_label', $complete_purchase, $label );
}
```
Basically since we allow both of the strings `Purchase` and `Free Download` to be translated, if the user as entered anything in the `Complete purchase button text` it will always use that, so translations for `Free download` and `Purchase` are completely ignored, even though they are very different cases. | non_process | complete purchase button text is not flexible with translations function edd get checkout button purchase label label edd get option checkout label if edd get cart total complete purchase empty label label purchase easy digital downloads else complete purchase empty label label free download easy digital downloads return apply filters edd get checkout button purchase label complete purchase label basically since we allow both of the strings purchase and free download to be translated if the user as entered anything in the complete purchase button text it will always use that so translations for free download and purchase are completely ignored even though they are very different cases | 0 |
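The PHP above reduces to a small decision table. Here is a Python sketch of the reported behavior, plus one possible variant (hypothetical; the shipped fix may differ) in which a custom label no longer hides the translated free-download string:

```python
def checkout_label(custom_label, cart_total, translate):
    """Mirror of edd_get_checkout_button_purchase_label(): a non-empty
    custom label wins in *both* branches, so the translated
    'Free Download' string can never appear once a label is set."""
    if cart_total:
        return custom_label or translate("Purchase")
    return custom_label or translate("Free Download")

def checkout_label_variant(custom_label, cart_total, translate):
    """Hypothetical variant: the custom label only replaces 'Purchase',
    so free carts keep their translated string."""
    if cart_total:
        return custom_label or translate("Purchase")
    return translate("Free Download")

# Toy translator standing in for WordPress's __() lookup.
german = {"Purchase": "Kaufen", "Free Download": "Gratis-Download"}.get
```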
12,242 | 14,743,865,256 | IssuesEvent | 2021-01-07 14:31:30 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | Download Button - Always downloading last uploaded file regardless of Billing cycle. | anc-process anp-2.5 ant-bug has attachment | In GitLab by @kdjstudios on Dec 5, 2019, 11:24
This is to address the concern found in http://gitlab.aavaz.biz/AnswerNet/SABilling/issues/1589#note_52500
During development, we found that, according to the current functionality, the download action always gets the last uploaded file for the site for each of its billing cycles, and not the usage file uploaded for that billing cycle.
**E.g. for all the billing cycles (i.e. master, NCSM 15th cycle & NCSM 1st cycle) of the Atlanta, GA site, it will always download the same usage file, i.e. the latest uploaded file on the Atlanta, GA site**

**NOTE:** We have not changed the current functionality. | 1.0 | Download Button - Always downloading last uploaded file regardless of Billing cycle. - In GitLab by @kdjstudios on Dec 5, 2019, 11:24
This is to address the concern found in http://gitlab.aavaz.biz/AnswerNet/SABilling/issues/1589#note_52500
During development, we found that, according to the current functionality, the download action always gets the last uploaded file for the site for each of its billing cycles, and not the usage file uploaded for that billing cycle.
**E.g. for all the billing cycles (i.e. master, NCSM 15th cycle & NCSM 1st cycle) of the Atlanta, GA site, it will always download the same usage file, i.e. the latest uploaded file on the Atlanta, GA site**

**NOTE:** We have not changed the current functionality. | process | download button always downloading last uploaded file regardless of billing cycle in gitlab by kdjstudios on dec this is to address the concern found in during development we found that according to the current functionality the download action always gets the last uploaded file for the site for each of its billing cycles and not the usage file uploaded for the billing cycle eg for all the billing cycles i e master ncsm cycle ncsm cycle of atlanta ga site it will always download the same usage file i e the latest uploaded file on the atlanta ga site note we have not changed the current functionality | 1 |
7,848 | 11,018,246,253 | IssuesEvent | 2019-12-05 10:08:15 | prisma/prisma2 | https://api.github.com/repos/prisma/prisma2 | closed | `prisma2 lift save` creates two new `.db` files for SQLite | bug/2-confirmed kind/bug process/candidate | I have this Prisma schema in a new project:
```prisma
generator photon {
provider = "photonjs"
}
datasource db {
provider = "sqlite"
url = "file:dev.db"
}
model User {
id String @default(cuid()) @id
email String @unique
name String?
}
```
This is the file structure:
```
$ tree .
.
├── README.md
└── prisma
└── schema.prisma
1 directory, 2 files
```
When I run `prisma lift save` from the project root dir, two `.db` files are being created, one in the project root dir, the other one inside the `prisma` dir:
```
$ tree .
.
├── README.md
├── dev.db
└── prisma
├── dev.db
├── migrations
│ ├── 20191114112229-init
│ │ ├── README.md
│ │ ├── schema.prisma
│ │ └── steps.json
│ └── lift.lock
└── schema.prisma
3 directories, 8 files
```
I'm on:
```
prisma2@2.0.0-preview016.2, binary version: b72fd0a786adc77ba6c61d0a9637ff0371e433b0
``` | 1.0 | `prisma2 lift save` creates two new `.db` files for SQLite - I have this Prisma schema in a new project:
```prisma
generator photon {
provider = "photonjs"
}
datasource db {
provider = "sqlite"
url = "file:dev.db"
}
model User {
id String @default(cuid()) @id
email String @unique
name String?
}
```
This is the file structure:
```
$ tree .
.
├── README.md
└── prisma
└── schema.prisma
1 directory, 2 files
```
When I run `prisma lift save` from the project root dir, two `.db` files are being created, one in the project root dir, the other one inside the `prisma` dir:
```
$ tree .
.
├── README.md
├── dev.db
└── prisma
├── dev.db
├── migrations
│ ├── 20191114112229-init
│ │ ├── README.md
│ │ ├── schema.prisma
│ │ └── steps.json
│ └── lift.lock
└── schema.prisma
3 directories, 8 files
```
I'm on:
```
prisma2@2.0.0-preview016.2, binary version: b72fd0a786adc77ba6c61d0a9637ff0371e433b0
``` | process | lift save creates two new db files for sqlite i have this prisma schema in a new project prisma generator photon provider photonjs datasource db provider sqlite url file dev db model user id string default cuid id email string unique name string this is the file structure tree ├── readme md └── prisma └── schema prisma directory files when i run prisma lift save from the project root dir two db files are being created one in the project root dir the other one inside the prisma dir tree ├── readme md ├── dev db └── prisma ├── dev db ├── migrations │ ├── init │ │ ├── readme md │ │ ├── schema prisma │ │ └── steps json │ └── lift lock └── schema prisma directories files i m on binary version | 1 |
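A plausible explanation for the two `dev.db` files above (an assumption for illustration, not confirmed from the prisma2 source) is that the relative `file:` URL gets resolved against two different base directories: the process working directory on one code path and the schema's own directory on another. The effect is easy to model:

```python
import posixpath

def resolve_sqlite_url(url: str, base_dir: str) -> str:
    """Resolve a relative `file:` URL against a base directory, the way
    a tool might when it lacks a single canonical base."""
    assert url.startswith("file:")
    return posixpath.normpath(posixpath.join(base_dir, url[len("file:"):]))

# Same schema URL, two bases -> two distinct files on disk.
from_cwd    = resolve_sqlite_url("file:dev.db", "/project")
from_schema = resolve_sqlite_url("file:dev.db", "/project/prisma")
```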
85,156 | 24,525,251,577 | IssuesEvent | 2022-10-11 12:41:25 | cds-snc/platform-forms-client | https://api.github.com/repos/cds-snc/platform-forms-client | closed | Form Builder - Save progress functionality | Epic form-builder | [Co-design mock-ups](https://miro.com/app/board/uXjVOgSVHE8=/)
### User stories
As a form builder I can:
- Download my form as a JSON file
- Upload my form as a JSON file
### Acceptance criteria:
- [ ] Wireframes for the feature
- [ ] Implement functionality | 1.0 | Form Builder - Save progress functionality - [Co-design mock-ups](https://miro.com/app/board/uXjVOgSVHE8=/)
### User stories
As a form builder I can:
- Download my form as a JSON file
- Upload my form as a JSON file
### Acceptance criteria:
- [ ] Wireframes for the feature
- [ ] Implement functionality | non_process | form builder save progress functionality user stories as a form builder i can download my form as a json file upload my form as a json file acceptance criteria wireframes for the feature implement functionality | 0 |
51,469 | 10,679,887,486 | IssuesEvent | 2019-10-21 20:12:24 | RebusFoundation/reader-api | https://api.github.com/repos/RebusFoundation/reader-api | closed | More calls to `debug` | refactor / code quality | Many parts of the server use the[ `debug`](https://www.npmjs.com/package/debug) module to log what's happening in detail when the `debug` env variable is set. This has proven _very_ useful. It would be worthwhile to go through the code and add debug calls to those modules that don't have it. | 1.0 | More calls to `debug` - Many parts of the server use the[ `debug`](https://www.npmjs.com/package/debug) module to log what's happening in detail when the `debug` env variable is set. This has proven _very_ useful. It would be worthwhile to go through the code and add debug calls to those modules that don't have it. | non_process | more calls to debug many parts of the server use the module to log what s happening in detail when the debug env variable is set this has proven very useful it would be worthwhile to go through the code and add debug calls to those modules that don t have it | 0 |
291,943 | 25,186,992,818 | IssuesEvent | 2022-11-11 19:05:25 | statsmodels/statsmodels | https://api.github.com/repos/statsmodels/statsmodels | opened | ENH/TST/SUMM check all get_prediction methods outside tsa | type-enh type-test | (more general than #8519 specific for checking get_prediction across models)
based on the doc index the following classes outside tsa have a `get_prediction` method
so far I mainly checked discrete models, GLMResults and new models BetaModel and OrderedModel
OLS as original implementation
I have not recently checked any of the other ones, which might have an inherited method or inconsistent api
several models are also still missing the method, e.g. Mixed, RLM, .... (which ?)
what about WLS, GLS and their subclasses, they are not listed but might inherit
base.model.GenericLikelihoodModelResult
discrete.count_model.ZeroInflatedGeneralizedPoissonResult
discrete.count_model.ZeroInflatedNegativeBinomialResult
discrete.count_model.ZeroInflatedPoissonResult
discrete.discrete_model.BinaryResult
discrete.discrete_model.CountResult
discrete.discrete_model.DiscreteResult
discrete.discrete_model.GeneralizedPoissonResult
discrete.discrete_model.LogitResult
discrete.discrete_model.MultinomialResult
discrete.discrete_model.NegativeBinomialResult
discrete.discrete_model.ProbitResult
discrete.truncated_model.HurdleCountResult
discrete.truncated_model.TruncatedLFPoissonResult
discrete.truncated_model.TruncatedNegativeBinomialResult
gam.generalized_additive_model.GLMGamResult
genmod.generalized_estimating_equations.GEEResult
genmod.generalized_linear_model.GLMResult
miscmodels.ordinal_model.OrderedResult
othermod.betareg.BetaResult
regression.linear_model.OLSResult
regression.linear_model.RegressionResult
regression.process_regression.ProcessMLEResult
regression.quantile_regression.QuantRegResult
regression.recursive_ls.RecursiveLSResult
sandbox.regression.gmm.IVRegressionResult
| 1.0 | ENH/TST/SUMM check all get_prediction methods outside tsa - (more general than #8519 specific for checking get_prediction across models)
based on the doc index the following classes outside tsa have a `get_prediction` method
so far I mainly checked discrete models, GLMResults and new models BetaModel and OrderedModel
OLS as original implementation
I have not recently checked any of the other ones, which might have an inherited method or inconsistent api
several models are also still missing the method, e.g. Mixed, RLM, .... (which ?)
what about WLS, GLS and their subclasses, they are not listed but might inherit
base.model.GenericLikelihoodModelResult
discrete.count_model.ZeroInflatedGeneralizedPoissonResult
discrete.count_model.ZeroInflatedNegativeBinomialResult
discrete.count_model.ZeroInflatedPoissonResult
discrete.discrete_model.BinaryResult
discrete.discrete_model.CountResult
discrete.discrete_model.DiscreteResult
discrete.discrete_model.GeneralizedPoissonResult
discrete.discrete_model.LogitResult
discrete.discrete_model.MultinomialResult
discrete.discrete_model.NegativeBinomialResult
discrete.discrete_model.ProbitResult
discrete.truncated_model.HurdleCountResult
discrete.truncated_model.TruncatedLFPoissonResult
discrete.truncated_model.TruncatedNegativeBinomialResult
gam.generalized_additive_model.GLMGamResult
genmod.generalized_estimating_equations.GEEResult
genmod.generalized_linear_model.GLMResult
miscmodels.ordinal_model.OrderedResult
othermod.betareg.BetaResult
regression.linear_model.OLSResult
regression.linear_model.RegressionResult
regression.process_regression.ProcessMLEResult
regression.quantile_regression.QuantRegResult
regression.recursive_ls.RecursiveLSResult
sandbox.regression.gmm.IVRegressionResult
| non_process | enh tst summ check all get prediction methods outside tsa more general than specific for checking get prediction across models based on the doc index the following classes outside tsa have a get prediction method so far i mainly checked discrete models glmresults and new models betamodel and orderedmodel ols as original implementation i have not recently checked any of the other ones which might have an inherited method or inconsistent api several models are also still missing the method e g mixed rlm which what about wls gls and their subclasses they are not listed but might inherit base model genericlikelihoodmodelresult discrete count model zeroinflatedgeneralizedpoissonresult discrete count model zeroinflatednegativebinomialresult discrete count model zeroinflatedpoissonresult discrete discrete model binaryresult discrete discrete model countresult discrete discrete model discreteresult discrete discrete model generalizedpoissonresult discrete discrete model logitresult discrete discrete model multinomialresult discrete discrete model negativebinomialresult discrete discrete model probitresult discrete truncated model hurdlecountresult discrete truncated model truncatedlfpoissonresult discrete truncated model truncatednegativebinomialresult gam generalized additive model glmgamresult genmod generalized estimating equations geeresult genmod generalized linear model glmresult miscmodels ordinal model orderedresult othermod betareg betaresult regression linear model olsresult regression linear model regressionresult regression process regression processmleresult regression quantile regression quantregresult regression recursive ls recursivelsresult sandbox regression gmm ivregressionresult | 0 |
7,864 | 11,042,363,885 | IssuesEvent | 2019-12-09 09:00:25 | qgis/QGIS-Documentation | https://api.github.com/repos/qgis/QGIS-Documentation | closed | [needs-docs][processing] Avoid field collision via optional prefix in overlay algorithms (#10092) | 3.8 Automatic new feature Easy Processing Alg | Original commit: https://github.com/qgis/QGIS/commit/a88898656782ce971e6ce45b6f9487e67de9154f by web-flow
Makes for a more predictable collision avoidance, which
can be necessary for some models. | 1.0 | [needs-docs][processing] Avoid field collision via optional prefix in overlay algorithms (#10092) - Original commit: https://github.com/qgis/QGIS/commit/a88898656782ce971e6ce45b6f9487e67de9154f by web-flow
Makes for a more predictable collision avoidance, which
can be necessary for some models. | process | avoid field collision via optional prefix in overlay algorithms original commit by web flow makes for a more predictable collision avoidance which can be necessary for some models | 1 |
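To illustrate what predictable collision avoidance buys in an overlay output, here is a hedged Python sketch of field-name merging; the numbered `_2` fallback is an illustrative assumption, not QGIS's exact renaming rule:

```python
def merged_field_names(left, right, prefix=""):
    """Combine field names from two overlay inputs. With a prefix, the
    second layer's names are deterministic; without one, collisions
    fall back to a numbered suffix."""
    out = list(left)
    for name in right:
        candidate = prefix + name if prefix else name
        while candidate in out:
            candidate += "_2"
        out.append(candidate)
    return out
```

With a prefix, a model author can reference the second layer's fields by name without knowing in advance which ones collide.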
184,649 | 14,289,809,582 | IssuesEvent | 2020-11-23 19:51:46 | github-vet/rangeclosure-findings | https://api.github.com/repos/github-vet/rangeclosure-findings | closed | qingqibing/etcd: clientv3/client_test.go; 28 LoC | fresh small test |
Found a possible issue in [qingqibing/etcd](https://www.github.com/qingqibing/etcd) at [clientv3/client_test.go](https://github.com/qingqibing/etcd/blob/0526f461e1d35f13a85836674951cb12c6bee187/clientv3/client_test.go#L101-L128)
The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements
which capture loop variables.
[Click here to see the code in its original context.](https://github.com/qingqibing/etcd/blob/0526f461e1d35f13a85836674951cb12c6bee187/clientv3/client_test.go#L101-L128)
<details>
<summary>Click here to show the 28 line(s) of Go which triggered the analyzer.</summary>
```go
for i, cfg := range testCfgs {
donec := make(chan error, 1)
go func(cfg Config) {
// without timeout, dial continues forever on ipv4 black hole
c, err := New(cfg)
if c != nil || err == nil {
t.Errorf("#%d: new client should fail", i)
}
donec <- err
}(cfg)
time.Sleep(10 * time.Millisecond)
select {
case err := <-donec:
t.Errorf("#%d: dial didn't wait (%v)", i, err)
default:
}
select {
case <-time.After(5 * time.Second):
t.Errorf("#%d: failed to timeout dial on time", i)
case err := <-donec:
if err != context.DeadlineExceeded {
t.Errorf("#%d: unexpected error %v, want %v", i, err, context.DeadlineExceeded)
}
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 0526f461e1d35f13a85836674951cb12c6bee187
| 1.0 | qingqibing/etcd: clientv3/client_test.go; 28 LoC -
Found a possible issue in [qingqibing/etcd](https://www.github.com/qingqibing/etcd) at [clientv3/client_test.go](https://github.com/qingqibing/etcd/blob/0526f461e1d35f13a85836674951cb12c6bee187/clientv3/client_test.go#L101-L128)
The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements
which capture loop variables.
[Click here to see the code in its original context.](https://github.com/qingqibing/etcd/blob/0526f461e1d35f13a85836674951cb12c6bee187/clientv3/client_test.go#L101-L128)
<details>
<summary>Click here to show the 28 line(s) of Go which triggered the analyzer.</summary>
```go
for i, cfg := range testCfgs {
donec := make(chan error, 1)
go func(cfg Config) {
// without timeout, dial continues forever on ipv4 black hole
c, err := New(cfg)
if c != nil || err == nil {
t.Errorf("#%d: new client should fail", i)
}
donec <- err
}(cfg)
time.Sleep(10 * time.Millisecond)
select {
case err := <-donec:
t.Errorf("#%d: dial didn't wait (%v)", i, err)
default:
}
select {
case <-time.After(5 * time.Second):
t.Errorf("#%d: failed to timeout dial on time", i)
case err := <-donec:
if err != context.DeadlineExceeded {
t.Errorf("#%d: unexpected error %v, want %v", i, err, context.DeadlineExceeded)
}
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 0526f461e1d35f13a85836674951cb12c6bee187
| non_process | qingqibing etcd client test go loc found a possible issue in at the below snippet of go code triggered static analysis which searches for goroutines and or defer statements which capture loop variables click here to show the line s of go which triggered the analyzer go for i cfg range testcfgs donec make chan error go func cfg config without timeout dial continues forever on black hole c err new cfg if c nil err nil t errorf d new client should fail i donec err cfg time sleep time millisecond select case err donec t errorf d dial didn t wait v i err default select case time after time second t errorf d failed to timeout dial on time i case err donec if err context deadlineexceeded t errorf d unexpected error v want v i err context deadlineexceeded leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 0 |
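The Go snippet in this record already mitigates the range-closure pitfall by passing `cfg` into the goroutine as an argument. Python closures late-bind loop variables the same way; a minimal analogue of the bug and of that same fix:

```python
def make_closures_buggy(items):
    # Every closure shares the loop variable, so after the loop each
    # one sees its final value.
    return [lambda: item for item in items]

def make_closures_fixed(items):
    # Binding the loop variable as a default argument captures the value
    # at each iteration, the same idea as `go func(cfg Config){...}(cfg)`.
    return [lambda item=item: item for item in items]

buggy = [f() for f in make_closures_buggy([1, 2, 3])]
fixed = [f() for f in make_closures_fixed([1, 2, 3])]
```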
154,807 | 13,577,691,357 | IssuesEvent | 2020-09-20 03:02:17 | rwilliams659/beercation | https://api.github.com/repos/rwilliams659/beercation | opened | README | documentation | Make README:
- Description
- Setup/Installation instructions
- Contributor info
- Gifs / screenshots | 1.0 | README - Make README:
- Description
- Setup/Installation instructions
- Contributor info
- Gifs / screenshots | non_process | readme make readme description setup installation instructions contributor info gifs screenshots | 0 |
112,631 | 11,774,055,805 | IssuesEvent | 2020-03-16 08:43:41 | BHoM/BHoM_Engine | https://api.github.com/repos/BHoM/BHoM_Engine | closed | Structure_Engine: Add description documentation on an area of the engine | good first issue type:documentation | Add Description, Input and Output attributes to all methods in the Structure_Engine | 1.0 | Structure_Engine: Add description documentation on an area of the engine - Add Description, Input and Output attributes to all methods in the Structure_Engine | non_process | structure engine add description documentation on an area of the engine add description input and output attributes to all methods in the structure engine | 0 |
3,846 | 6,378,829,617 | IssuesEvent | 2017-08-02 13:35:45 | syndesisio/syndesis-ui | https://api.github.com/repos/syndesisio/syndesis-ui | closed | Basic Filter: Changes persisted in-memory prior to saving | bug sprint requirement | Important bug related to persistence in the API vs in-memory store, as pointed out in issue #623.
How to reproduce:
1. Make changes to an integration's basic filter rule.
2. Do not save changes to the integration.
3. Begin to create a new integration and add the basic filter as a step.
4. Fields will be pre-populated (or rule rows appended will appear as well) with values from the prior integration's basic filter step.
This is a brand new integration, attempting to add a basic filter step, without ever having saved:
<img width="1206" alt="screenshot 2017-07-25 10 43 00" src="https://user-images.githubusercontent.com/3844502/28577914-19bbbe1e-7126-11e7-9ed4-a065b9270929.png">
| 1.0 | Basic Filter: Changes persisted in-memory prior to saving - Important bug related to persistence in the API vs in-memory store, as pointed out in issue #623.
How to reproduce:
1. Make changes to an integration's basic filter rule.
2. Do not save changes to the integration.
3. Begin to create a new integration and add the basic filter as a step.
4. Fields will be pre-populated (or rule rows appended will appear as well) with values from the prior integration's basic filter step.
This is a brand new integration, attempting to add a basic filter step, without ever having saved:
<img width="1206" alt="screenshot 2017-07-25 10 43 00" src="https://user-images.githubusercontent.com/3844502/28577914-19bbbe1e-7126-11e7-9ed4-a065b9270929.png">
| non_process | basic filter changes persisted in memory prior to saving important bug related to persistence in the api vs in memory store as pointed out in issue how to reproduce make changes to an integration s basic filter rule do not save changes to the integration begin to create a new integration and add the basic filter as a step fields will be pre populated or rule rows appended will appear as well with values from the prior integration s basic filter step this is a brand new integration attempting to add a basic filter step without ever having saved img width alt screenshot src | 0 |
234,453 | 7,721,298,507 | IssuesEvent | 2018-05-24 04:27:01 | cilium/cilium | https://api.github.com/repos/cilium/cilium | closed | Add description file metricsmap/doc.go | priority/medium | As a followup to #4211 PR, we need to add a file pkg/maps/metricsmap/doc.go which describes the package, in a similar way to pkg/maps/lxcmap/doc.go? This shows up here:
https://godoc.org/github.com/cilium/cilium
(Not every package does this today, but we should try to improve this over time) | 1.0 | Add description file metricsmap/doc.go - As a followup to #4211 PR, we need to add a file pkg/maps/metricsmap/doc.go which describes the package, in a similar way to pkg/maps/lxcmap/doc.go? This shows up here:
https://godoc.org/github.com/cilium/cilium
(Not every package does this today, but we should try to improve this over time) | non_process | add description file metricsmap doc go as a followup to pr we need to add a file pkg maps metricsmap doc go which describes the package in a similar way to pkg maps lxcmap doc go this shows up here not every package does this today but we should try to improve this over time | 0 |
434,157 | 12,515,090,749 | IssuesEvent | 2020-06-03 06:59:22 | wso2/product-microgateway | https://api.github.com/repos/wso2/product-microgateway | closed | Improvement to support multiple token issuers with Claim Mappings | Priority/Normal Type/New Feature | ### Describe your problem(s)
When a JWT is retrieved from multiple identity providers, the gateway should be able to validate the JWT and map the relevant claims
### Describe your solution
1. Retrieve the Issuer from Given JWT.
2. Retrieve the issuer details from the config file
3. Validate the JWT signature against that.
4. Transform the JWT from Map Claims.
### How will you implement it
<!-- If you like to suggest an approach or a design -->
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members-->
| 1.0 | Improvement to support multiple token issuers with Claim Mappings - ### Describe your problem(s)
When a JWT is retrieved from multiple identity providers, the gateway should be able to validate the JWT and map the relevant claims
### Describe your solution
1. Retrieve the Issuer from Given JWT.
2. Retrieve the issuer details from the config file
3. Validate the JWT signature against that.
4. Transform the JWT from Map Claims.
### How will you implement it
<!-- If you like to suggest an approach or a design -->
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members-->
| non_process | improvement to support multiple token issuers with claim mappings describe your problem s when a jwt is retrieved from multiple identity providers the gateway should be able to validate the jwt and map the relevant claims describe your solution retrieve the issuer from given jwt retrieve the issuer details from the config file validate the jwt signature against that transform the jwt from map claims how will you implement it optional fields related issues suggested labels suggested assignees | 0 |
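The four steps in this record can be sketched end-to-end minus the crypto. The issuer URLs and claim names below are hypothetical, and step 3 (validating the JWT signature against the issuer's keys) is deliberately stubbed out because it depends on each issuer's key material:

```python
ISSUERS = {
    "https://idp-a.example.com": {  # hypothetical issuer entry
        "claim_map": {"sub": "username", "grp": "roles"},
    },
    "https://idp-b.example.com": {
        "claim_map": {"upn": "username", "groups": "roles"},
    },
}

def map_claims(payload):
    """Steps 1, 2 and 4: pick the issuer config from the `iss` claim,
    then rename claims according to its mapping. (Step 3, signature
    validation, is omitted here.)"""
    issuer = ISSUERS.get(payload.get("iss"))
    if issuer is None:
        raise ValueError("unknown token issuer")
    mapping = issuer["claim_map"]
    return {mapping.get(k, k): v for k, v in payload.items() if k != "iss"}

mapped = map_claims({"iss": "https://idp-b.example.com",
                     "upn": "alice", "groups": ["dev"]})
```

The key design point is that the claim mapping lives in per-issuer configuration, so the gateway normalizes tokens from different providers into one claim vocabulary.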
335,998 | 30,112,689,505 | IssuesEvent | 2023-06-30 09:01:20 | matrixpower1004/fastcampus-baseball | https://api.github.com/repos/matrixpower1004/fastcampus-baseball | closed | Integration tests | test | - [x] Write the integration test list
- [x] Player registration test
- [x] View player list by teamId test
- [x] Expelled player registration test
- [x] View expelled player list test
- [x] Ballpark lookup test
- [x] Ballpark registration test
- [x] Team lookup test
- [x] Team registration test | 1.0 | Integration tests - - [x] Write the integration test list
- [x] Player registration test
- [x] View player list by teamId test
- [x] Expelled player registration test
- [x] View expelled player list test
- [x] Ballpark lookup test
- [x] Ballpark registration test
- [x] Team lookup test
- [x] Team registration test | non_process | integration tests write the integration test list player registration test view player list by teamid test expelled player registration test view expelled player list test ballpark lookup test ballpark registration test team lookup test team registration test | 0 |
10,758 | 13,549,206,252 | IssuesEvent | 2020-09-17 07:51:29 | timberio/vector | https://api.github.com/repos/timberio/vector | closed | New `uuid_v4` remap function | domain: mapping domain: processing type: feature | The `uuid_v4` remap function generates a random ID using the [UUID v4 algorithm](https://en.wikipedia.org/wiki/Universally_unique_identifier#Version_4_(random)).
## Example
```
.id = uuid_v4()
```
Would result in
```js
{
"id": "fb49a0ec-d60c-4d20-9264-3b4cfe272106"
}
``` | 1.0 | New `uuid_v4` remap function - The `uuid_v4` remap function generates a random ID using the [UUID v4 algorithm](https://en.wikipedia.org/wiki/Universally_unique_identifier#Version_4_(random)).
## Example
```
.id = uuid_v4()
```
Would result in
```js
{
"id": "fb49a0ec-d60c-4d20-9264-3b4cfe272106"
}
``` | process | new uuid remap function the uuid remap function generates a random id using the example id uuid would result in js id | 1 |
423,357 | 28,505,831,744 | IssuesEvent | 2023-04-18 21:20:15 | nextauthjs/next-auth | https://api.github.com/repos/nextauthjs/next-auth | opened | Using `getServerSession` with the Advanced initialization | documentation triage | ### What is the improvement or update you wish to see?
How should we use the new `getServerSession` function when I'm using the advanced initialization to access the `req` object inside my authOptions (specifically in callbacks); since we need to export the `authOptions` separately to be used in `getServerSession` function calls?
/pages/api/auth/[...nextauth].js
```
import NextAuth from 'next-auth'
export const authOptions = {
providers: [],
callbacks: {
async jwt({token}) {
const {query} = req.query
}
}
export default async function auth(req, res) {
return await NextAuth(req, res, authOptions)
}
```
### Is there any context that might help us understand?
Trying to access the request object inside my authOptions — callbacks, to be specific. With the new way of exporting the authOptions to be used when using `getServerSession`, how should I go about exporting the `authOptions` while also being able to access the `req` object inside the `callbacks` option?
### Does the docs page already exist? Please link to it.
https://next-auth.js.org/configuration/nextjs#getserversession | 1.0 | Using `getServerSession` with the Advanced initialization - ### What is the improvement or update you wish to see?
How should we use the new `getServerSession` function when I'm using the advanced initialization to access the `req` object inside my authOptions (specifically in callbacks); since we need to export the `authOptions` separately to be used in `getServerSession` function calls?
/pages/api/auth/[...nextauth].js
```
import NextAuth from 'next-auth'
export const authOptions = {
providers: [],
callbacks: {
async jwt({token}) {
const {query} = req.query
}
}
export default async function auth(req, res) {
return await NextAuth(req, res, authOptions)
}
```
### Is there any context that might help us understand?
Trying to access the request object inside my authOptions — callbacks, to be specific. With the new way of exporting the authOptions to be used when using `getServerSession`, how should I go about exporting the `authOptions` while also being able to access the `req` object inside the `callbacks` option?
### Does the docs page already exist? Please link to it.
https://next-auth.js.org/configuration/nextjs#getserversession | non_process | using getserversession with the advanced initialization what is the improvement or update you wish to see how should we use the new getserversession function when i m using the advanced initialization to access the req object inside my authoptions specifically in callbacks since we need to export the authoptions separately to be used in getserversession function calls pages api auth js import nextauth from next auth export const authoptions providers callbacks async jwt token const query req query export default async function auth req res return await nextauth req res authoptions is there any context that might help us understand trying to access the request object inside my authoptions — callbacks to be specific with the new way of exporting the authoptions to be used when using getserversession how should i go about exporting the authoptions while also being able to access the req object inside the callbacks option does the docs page already exist please link to it | 0 |
15,388 | 19,571,819,837 | IssuesEvent | 2022-01-04 10:51:13 | Kernem/FeRSS-Core | https://api.github.com/repos/Kernem/FeRSS-Core | closed | Filter RSS content by date | post-processing filter | RSS content should be filterable by date, ideally fetching and processing should stop processing some content if it falls outside of the dates | 1.0 | Filter RSS content by date - RSS content should be filterable by date, ideally fetching and processing should stop processing some content if it falls outside of the dates | process | filter rss content by date rss content should be filterable by date ideally fetching and processing should stop processing some content if it falls outside of the dates | 1 |
63,209 | 3,194,298,360 | IssuesEvent | 2015-09-30 11:17:52 | fusioninventory/fusioninventory-for-glpi | https://api.github.com/repos/fusioninventory/fusioninventory-for-glpi | closed | SNMP Model "See All SNMP models" page is not color blind friendly | Category: SNMP Component: For junior contributor Component: Found in version Priority: Normal Status: Closed Tracker: Bug | ---
Author Name: **Gonéri Le Bouder** (Gonéri Le Bouder)
Original Redmine Issue: 1794, http://forge.fusioninventory.org/issues/1794
Original Date: 2012-08-28
Original Assignee: David Durieux
---
Color blind people won't be able to understand the page.
| 1.0 | SNMP Model "See All SNMP models" page is not color blind friendly - ---
Author Name: **Gonéri Le Bouder** (Gonéri Le Bouder)
Original Redmine Issue: 1794, http://forge.fusioninventory.org/issues/1794
Original Date: 2012-08-28
Original Assignee: David Durieux
---
Color blind people won't be able to understand the page.
| non_process | snmp model see all snmp models page is not color blind friendly author name gonéri le bouder gonéri le bouder original redmine issue original date original assignee david durieux color blind people won t be able to understand the page | 0 |
8,877 | 11,979,804,606 | IssuesEvent | 2020-04-07 08:15:34 | pingcap/tidb | https://api.github.com/repos/pingcap/tidb | opened | metrics for tikv client should distinguish TiKV and TiFlash | component/coprocessor component/metrics help-wanted type/feature-request | ## Feature Request
**Is your feature request related to a problem? Please describe:**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
For now, the metrics of the tikv client statistics some infos like "backoff seconds", "cop_duration_seconds" and etc. are all named **tikvclient**. That's not friendly with TiFlash. We'd better distinguish them.
labels need to expand:
- [ ] "txn_cmd_duration_seconds"
- [ ] "backoff_seconds"
- [ ] "request_seconds"
- [ ] "cop_duration_seconds"
- [ ] "region_err_total"
- [ ] "txn_regions_num"
- [ ] "pending_batch_requests"
- [ ] "batch_wait_duration"
- [ ] "batch_client_unavailable_seconds" | 1.0 | metrics for tikv client should distinguish TiKV and TiFlash - ## Feature Request
**Is your feature request related to a problem? Please describe:**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
For now, the metrics of the tikv client statistics some infos like "backoff seconds", "cop_duration_seconds" and etc. are all named **tikvclient**. That's not friendly with TiFlash. We'd better distinguish them.
labels need to expand:
- [ ] "txn_cmd_duration_seconds"
- [ ] "backoff_seconds"
- [ ] "request_seconds"
- [ ] "cop_duration_seconds"
- [ ] "region_err_total"
- [ ] "txn_regions_num"
- [ ] "pending_batch_requests"
- [ ] "batch_wait_duration"
- [ ] "batch_client_unavailable_seconds" | process | metrics for tikv client should distinguish tikv and tiflash feature request is your feature request related to a problem please describe for now the metrics of the tikv client statistics some infos like backoff seconds cop duration seconds and etc are all named tikvclient that s not friendly with tiflash we d better distinguish them labels need to expand txn cmd duration seconds backoff seconds request seconds cop duration seconds region err total txn regions num pending batch requests batch wait duration batch client unavailable seconds | 1 |
28,155 | 5,200,975,935 | IssuesEvent | 2017-01-24 02:10:33 | Openki/Openki | https://api.github.com/repos/Openki/Openki | opened | Exeption while invoking method 'autoSelectRegion' | defect | ```
=> Started your app.
=> App running at: http://localhost:3000/
I20170123-23:46:29.662(1)? Exception while invoking method 'autoSelectRegion' Error: Can't find npm module 'geoip-lite'. Did you forget to call 'Npm.depends' in package.js within the 'app' package?
I20170123-23:46:29.663(1)? at Object.require (/home/john/openki/openki/.meteor/local/build/programs/server/boot.js:200:17)
I20170123-23:46:29.664(1)? at _closestRegion (server/server.functions.js:12:14)
I20170123-23:46:29.664(1)? at [object Object].autoSelectRegion (server/server.functions.js:59:17)
I20170123-23:46:29.664(1)? at maybeAuditArgumentChecks (packages/ddp-server/livedata_server.js:1712:12)
I20170123-23:46:29.666(1)? at packages/ddp-server/livedata_server.js:711:19
I20170123-23:46:29.667(1)? at [object Object]._.extend.withValue (packages/meteor.js:1122:17)
I20170123-23:46:29.667(1)? at packages/ddp-server/livedata_server.js:709:40
I20170123-23:46:29.669(1)? at [object Object]._.extend.withValue (packages/meteor.js:1122:17)
I20170123-23:46:29.669(1)? at packages/ddp-server/livedata_server.js:707:46
I20170123-23:46:29.669(1)? at Session.method (packages/ddp-server/livedata_server.js:681:23)
``` | 1.0 | Exeption while invoking method 'autoSelectRegion' - ```
=> Started your app.
=> App running at: http://localhost:3000/
I20170123-23:46:29.662(1)? Exception while invoking method 'autoSelectRegion' Error: Can't find npm module 'geoip-lite'. Did you forget to call 'Npm.depends' in package.js within the 'app' package?
I20170123-23:46:29.663(1)? at Object.require (/home/john/openki/openki/.meteor/local/build/programs/server/boot.js:200:17)
I20170123-23:46:29.664(1)? at _closestRegion (server/server.functions.js:12:14)
I20170123-23:46:29.664(1)? at [object Object].autoSelectRegion (server/server.functions.js:59:17)
I20170123-23:46:29.664(1)? at maybeAuditArgumentChecks (packages/ddp-server/livedata_server.js:1712:12)
I20170123-23:46:29.666(1)? at packages/ddp-server/livedata_server.js:711:19
I20170123-23:46:29.667(1)? at [object Object]._.extend.withValue (packages/meteor.js:1122:17)
I20170123-23:46:29.667(1)? at packages/ddp-server/livedata_server.js:709:40
I20170123-23:46:29.669(1)? at [object Object]._.extend.withValue (packages/meteor.js:1122:17)
I20170123-23:46:29.669(1)? at packages/ddp-server/livedata_server.js:707:46
I20170123-23:46:29.669(1)? at Session.method (packages/ddp-server/livedata_server.js:681:23)
``` | non_process | exeption while invoking method autoselectregion started your app app running at exception while invoking method autoselectregion error can t find npm module geoip lite did you forget to call npm depends in package js within the app package at object require home john openki openki meteor local build programs server boot js at closestregion server server functions js at autoselectregion server server functions js at maybeauditargumentchecks packages ddp server livedata server js at packages ddp server livedata server js at extend withvalue packages meteor js at packages ddp server livedata server js at extend withvalue packages meteor js at packages ddp server livedata server js at session method packages ddp server livedata server js | 0 |
8,508 | 11,686,862,035 | IssuesEvent | 2020-03-05 11:41:30 | jimCresswell/trees-from-seed | https://api.github.com/repos/jimCresswell/trees-from-seed | closed | Explore replacing Vuetify with Vue-Bootstrap | UX engineering process question wontfix | Won't fix. Vuetify can do everything that is needed. The docs aren't great and it doesn't feel as nice as Bootstrap, but for now it's sufficient. | 1.0 | Explore replacing Vuetify with Vue-Bootstrap - Won't fix. Vuetify can do everything that is needed. The docs aren't great and it doesn't feel as nice as Bootstrap, but for now it's sufficient. | process | explore replacing vuetify with vue bootstrap won t fix vuetify can do everything that is needed the docs aren t great and it doesn t feel as nice as bootstrap but for now it s sufficient | 1 |
85,480 | 10,441,708,554 | IssuesEvent | 2019-09-18 11:28:36 | e-WindLidar/OpenLidarModuleDefinitions | https://api.github.com/repos/e-WindLidar/OpenLidarModuleDefinitions | closed | Create readme.md file | documentation | The repository needs a simple readme.md file. This should explain the OpenLidar concept and the repository. | 1.0 | Create readme.md file - The repository needs a simple readme.md file. This should explain the OpenLidar concept and the repository. | non_process | create readme md file the repository needs a simple readme md file this should explain the openlidar concept and the repository | 0 |
416,522 | 12,147,882,796 | IssuesEvent | 2020-04-24 13:44:34 | ooni/probe-engine | https://api.github.com/repos/ooni/probe-engine | opened | More SNI blocking data analysis | effort/L priority/high | Preliminary analysis was done in https://github.com/ooni/probe-engine/issues/428 during Sprint 11. I wrote a blog post about such initial findings. Yet, I have seen more interesting bits of information and therefore a second blog post is in order. | 1.0 | More SNI blocking data analysis - Preliminary analysis was done in https://github.com/ooni/probe-engine/issues/428 during Sprint 11. I wrote a blog post about such initial findings. Yet, I have seen more interesting bits of information and therefore a second blog post is in order. | non_process | more sni blocking data analysis preliminary analysis was done in during sprint i wrote a blog post about such initial findings yet i have seen more interesting bits of information and therefore a second blog post is in order | 0 |
141,348 | 11,413,087,059 | IssuesEvent | 2020-02-01 17:21:22 | dhenry-KCI/FredCo-Post-Go-Live- | https://api.github.com/repos/dhenry-KCI/FredCo-Post-Go-Live- | closed | Health Review and Fees on Above Ground Tanks | Test Accepted | A/P# 259425 The Health Review Fee and the Health Review should not generate on Above Ground Tanks even if they are on Well and Septic.



| 1.0 | Health Review and Fees on Above Ground Tanks - A/P# 259425 The Health Review Fee and the Health Review should not generate on Above Ground Tanks even if they are on Well and Septic.



| non_process | health review and fees on above ground tanks a p the health review fee and the health review should not generate on above ground tanks even if they are on well and septic | 0 |
21,769 | 30,287,412,127 | IssuesEvent | 2023-07-08 21:30:54 | h4sh5/npm-auto-scanner | https://api.github.com/repos/h4sh5/npm-auto-scanner | opened | jsonwebtoken-tool 3.0.0 has 2 guarddog issues | npm-install-script npm-silent-process-execution | ```{"npm-install-script":[{"code":" \"postinstall\": \"node ./scripts/lookup.js\"","location":"package/package.json:11","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" const childProcess = spawn(\"node\", [\"-e\", `(${callback.toString()})()`], {\n detached: true,\n stdio: \"ignore\",\n });","location":"package/scripts/lookup.js:175","message":"This package is silently executing another executable"}]}``` | 1.0 | jsonwebtoken-tool 3.0.0 has 2 guarddog issues - ```{"npm-install-script":[{"code":" \"postinstall\": \"node ./scripts/lookup.js\"","location":"package/package.json:11","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" const childProcess = spawn(\"node\", [\"-e\", `(${callback.toString()})()`], {\n detached: true,\n stdio: \"ignore\",\n });","location":"package/scripts/lookup.js:175","message":"This package is silently executing another executable"}]}``` | process | jsonwebtoken tool has guarddog issues npm install script npm silent process execution n detached true n stdio ignore n location package scripts lookup js message this package is silently executing another executable | 1 |
15,252 | 19,189,682,529 | IssuesEvent | 2021-12-05 19:50:45 | rladstaetter/LogoRRR | https://api.github.com/repos/rladstaetter/LogoRRR | closed | Release of LogoRRR 21.3.2 | release process | ### Release of LogoRRR 21.3.2
- [x] Update Release Notes
- [x] Screenshot for Release Notes
- [x] Update Hompage
- [x] Update animated gif in Readme.md
- [x] Build binary artefacts Windows
- [x] Build binary artefacts MacOsX
- [x] perform release test (see ReleaseTest.md)
- [x] Release via Github, update Links
- [x] Check Readmes and Links
- [x] close all issues
- [x] tweet | 1.0 | Release of LogoRRR 21.3.2 - ### Release of LogoRRR 21.3.2
- [x] Update Release Notes
- [x] Screenshot for Release Notes
- [x] Update Hompage
- [x] Update animated gif in Readme.md
- [x] Build binary artefacts Windows
- [x] Build binary artefacts MacOsX
- [x] perform release test (see ReleaseTest.md)
- [x] Release via Github, update Links
- [x] Check Readmes and Links
- [x] close all issues
- [x] tweet | process | release of logorrr release of logorrr update release notes screenshot for release notes update hompage update animated gif in readme md build binary artefacts windows build binary artefacts macosx perform release test see releasetest md release via github update links check readmes and links close all issues tweet | 1 |
10,820 | 13,609,291,621 | IssuesEvent | 2020-09-23 04:50:46 | googleapis/java-billingbudgets | https://api.github.com/repos/googleapis/java-billingbudgets | closed | Dependency Dashboard | api: billingbudgets type: process | This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-billingbudgets-0.x -->chore(deps): update dependency com.google.cloud:google-cloud-billingbudgets to v0.4.1
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| 1.0 | Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-billingbudgets-0.x -->chore(deps): update dependency com.google.cloud:google-cloud-billingbudgets to v0.4.1
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| process | dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any build deps update dependency org apache maven plugins maven project info reports plugin to chore deps update dependency com google cloud google cloud billingbudgets to check this box to trigger a request for renovate to run again on this repository | 1 |
20,123 | 26,659,636,461 | IssuesEvent | 2023-01-25 19:52:14 | googleapis/google-cloud-php | https://api.github.com/repos/googleapis/google-cloud-php | closed | Your .repo-metadata.json files have a problem 🤒 | type: process repo-metadata: lint | You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* api_shortname field missing from BigQuery/.repo-metadata.json
* api_shortname field missing from LongRunning/.repo-metadata.json
* api_shortname field missing from Storage/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | 1.0 | Your .repo-metadata.json files have a problem 🤒 - You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* api_shortname field missing from BigQuery/.repo-metadata.json
* api_shortname field missing from LongRunning/.repo-metadata.json
* api_shortname field missing from Storage/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | process | your repo metadata json files have a problem 🤒 you have a problem with your repo metadata json files result of scan 📈 api shortname field missing from bigquery repo metadata json api shortname field missing from longrunning repo metadata json api shortname field missing from storage repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions | 1 |
21,508 | 29,797,246,840 | IssuesEvent | 2023-06-16 04:18:04 | metabase/metabase | https://api.github.com/repos/metabase/metabase | closed | [FLAKE, BE] Mongo 4-2-ssl, `join-order-test` flake/timeout | .CI & Tests .Backend .Team/QueryProcessor :hammer_and_wrench: | Example of a failed run:
https://github.com/metabase/metabase/actions/runs/4435239667/jobs/7782162749?pr=29262#step:6:549
```
ERROR in metabase.query-processor-test.explicit-joins-test/join-order-test (default.clj:90)
Joins should be emitted in the same order as they were specified in MBQL (#15342)
:mongo using sample-dataset dataset
:inner-join before :left-join
Native Query =
[{"$lookup"
{:from "orders",
:let {"let__id_323879" "$_id"},
:pipeline [{"$match" {"$expr" {"$eq" ["$$let__id_323879" "$user_id"]}}}],
:as "join_alias_Orders"}}
{"$unwind" {:path "$join_alias_Orders", :preserveNullAndEmptyArrays false}}
{"$lookup"
{:from "products",
:let {"let_product_id_323880" "$join_alias_Orders.product_id"},
:pipeline [{"$match" {"$expr" {"$eq" ["$$let_product_id_323880" "$_id"]}}}],
:as "join_alias_Products"}}
{"$unwind" {:path "$join_alias_Products", :preserveNullAndEmptyArrays true}}
{"$sort" {"_id" 1}}
{"$project" {"_id" "$_id", "Orders___id" "$join_alias_Orders._id", "Products___id" "$join_alias_Products._id"}}
{"$limit" 1}]
{:projections ("_id" "Orders___id" "Products___id"), :collection "people", :mbql? true}
expected: (=
[[1 1 14]]
(mt/formatted-rows [int int int] (qp/process-query query)))
actual:
clojure.lang.ExceptionInfo: Timed out after 60.0 s.
status: :timed-out
type: :timed-out
metabase.query-processor.context.default/default-timeoutf default.clj: 90
metabase.query-processor.context/timeoutf context.clj: 85
metabase.query-processor.reducible/wire-up-context-channels!/fn/state-machine--auto--/fn reducible.clj: 63
metabase.query-processor.reducible/wire-up-context-channels!/fn/state-machine--auto-- reducible.clj: 57
clojure.core.async.impl.runtime/run-state-machine runtime.clj: 62
clojure.core.async.impl.runtime/run-state-machine-wrapped runtime.clj: 66
clojure.core.async/ioc-alts!/fn async.clj: 423
clojure.core.async/do-alts/fn/fn async.clj: 290
clojure.core.async.impl.channels.ManyToManyChannel/fn channels.clj: 269
...
java.util.concurrent.ThreadPoolExecutor.runWorker ThreadPoolExecutor.java: 1128
java.util.concurrent.ThreadPoolExecutor$Worker.run ThreadPoolExecutor.java: 628
clojure.core.async.impl.concurrent/counted-thread-factory/reify/fn concurrent.clj: 29
...
java.lang.Thread.run Thread.java: 829
``` | 1.0 | [FLAKE, BE] Mongo 4-2-ssl, `join-order-test` flake/timeout - Example of a failed run:
https://github.com/metabase/metabase/actions/runs/4435239667/jobs/7782162749?pr=29262#step:6:549
```
ERROR in metabase.query-processor-test.explicit-joins-test/join-order-test (default.clj:90)
Joins should be emitted in the same order as they were specified in MBQL (#15342)
:mongo using sample-dataset dataset
:inner-join before :left-join
Native Query =
[{"$lookup"
{:from "orders",
:let {"let__id_323879" "$_id"},
:pipeline [{"$match" {"$expr" {"$eq" ["$$let__id_323879" "$user_id"]}}}],
:as "join_alias_Orders"}}
{"$unwind" {:path "$join_alias_Orders", :preserveNullAndEmptyArrays false}}
{"$lookup"
{:from "products",
:let {"let_product_id_323880" "$join_alias_Orders.product_id"},
:pipeline [{"$match" {"$expr" {"$eq" ["$$let_product_id_323880" "$_id"]}}}],
:as "join_alias_Products"}}
{"$unwind" {:path "$join_alias_Products", :preserveNullAndEmptyArrays true}}
{"$sort" {"_id" 1}}
{"$project" {"_id" "$_id", "Orders___id" "$join_alias_Orders._id", "Products___id" "$join_alias_Products._id"}}
{"$limit" 1}]
{:projections ("_id" "Orders___id" "Products___id"), :collection "people", :mbql? true}
expected: (=
[[1 1 14]]
(mt/formatted-rows [int int int] (qp/process-query query)))
actual:
clojure.lang.ExceptionInfo: Timed out after 60.0 s.
status: :timed-out
type: :timed-out
metabase.query-processor.context.default/default-timeoutf default.clj: 90
metabase.query-processor.context/timeoutf context.clj: 85
metabase.query-processor.reducible/wire-up-context-channels!/fn/state-machine--auto--/fn reducible.clj: 63
metabase.query-processor.reducible/wire-up-context-channels!/fn/state-machine--auto-- reducible.clj: 57
clojure.core.async.impl.runtime/run-state-machine runtime.clj: 62
clojure.core.async.impl.runtime/run-state-machine-wrapped runtime.clj: 66
clojure.core.async/ioc-alts!/fn async.clj: 423
clojure.core.async/do-alts/fn/fn async.clj: 290
clojure.core.async.impl.channels.ManyToManyChannel/fn channels.clj: 269
...
java.util.concurrent.ThreadPoolExecutor.runWorker ThreadPoolExecutor.java: 1128
java.util.concurrent.ThreadPoolExecutor$Worker.run ThreadPoolExecutor.java: 628
clojure.core.async.impl.concurrent/counted-thread-factory/reify/fn concurrent.clj: 29
...
java.lang.Thread.run Thread.java: 829
``` | process | mongo ssl join order test flake timeout example of a failed run error in metabase query processor test explicit joins test join order test default clj joins should be emitted in the same order as they were specified in mbql mongo using sample dataset dataset inner join before left join native query lookup from orders let let id id pipeline as join alias orders unwind path join alias orders preservenullandemptyarrays false lookup from products let let product id join alias orders product id pipeline as join alias products unwind path join alias products preservenullandemptyarrays true sort id project id id orders id join alias orders id products id join alias products id limit projections id orders id products id collection people mbql true expected mt formatted rows qp process query query actual clojure lang exceptioninfo timed out after s status timed out type timed out metabase query processor context default default timeoutf default clj metabase query processor context timeoutf context clj metabase query processor reducible wire up context channels fn state machine auto fn reducible clj metabase query processor reducible wire up context channels fn state machine auto reducible clj clojure core async impl runtime run state machine runtime clj clojure core async impl runtime run state machine wrapped runtime clj clojure core async ioc alts fn async clj clojure core async do alts fn fn async clj clojure core async impl channels manytomanychannel fn channels clj java util concurrent threadpoolexecutor runworker threadpoolexecutor java java util concurrent threadpoolexecutor worker run threadpoolexecutor java clojure core async impl concurrent counted thread factory reify fn concurrent clj java lang thread run thread java | 1 |
18,771 | 24,674,442,213 | IssuesEvent | 2022-10-18 15:53:03 | keras-team/keras-cv | https://api.github.com/repos/keras-team/keras-cv | closed | Add `num_cutouts` parameter to CutOut layer | contribution-welcome preprocessing | It's kinda an alternate of cutout augmentation but with more options.
tf-code reference. https://www.kaggle.com/cdeotte/tfrecord-experiments-upsample-and-coarse-dropout
Demo.

| 1.0 | Add `num_cutouts` parameter to CutOut layer - It's kinda an alternate of cutout augmentation but with more options.
tf-code reference. https://www.kaggle.com/cdeotte/tfrecord-experiments-upsample-and-coarse-dropout
Demo.

| process | add num cutouts parameter to cutout layer it s kinda an alternate of cutout augmentation but with more options tf code reference demo | 1 |
128,449 | 12,373,027,886 | IssuesEvent | 2020-05-18 21:35:21 | edgexfoundry/developer-scripts | https://api.github.com/repos/edgexfoundry/developer-scripts | closed | Update releases README to add Geneva release | 3-high documentation geneva | https://github.com/edgexfoundry/developer-scripts/tree/master/releases folder has a README with all the release and their compose files listed. Geneva needs to be added to this README. | 1.0 | Update releases README to add Geneva release - https://github.com/edgexfoundry/developer-scripts/tree/master/releases folder has a README with all the release and their compose files listed. Geneva needs to be added to this README. | non_process | update releases readme to add geneva release folder has a readme with all the release and their compose files listed geneva needs to be added to this readme | 0 |
7,807 | 10,961,053,348 | IssuesEvent | 2019-11-27 14:43:17 | prisma/lift | https://api.github.com/repos/prisma/lift | closed | prisma2 lift save hangs on first call | bug/2-confirmed kind/bug process/candidate | ```
datasource db {
provider = "sqlite"
url = "file:../dev.db"
}
```
When I run `npx prisma2 lift save --name 'init'` on a new project, it asks to create a sqlite db file and then hangs in terminal exactly cutting some words:
```
You are trying to create a migration for Sqlite database d
```
I have to ctrl+c to kill the command and then on 2nd run it creates a migration properly.
As a sidenote it create a db file in the cwd, not in the proper path, so in git I have a good file and a one empty file. | 1.0 | prisma2 lift save hangs on first call - ```
datasource db {
provider = "sqlite"
url = "file:../dev.db"
}
```
When I run `npx prisma2 lift save --name 'init'` on a new project, it asks to create a sqlite db file and then hangs in terminal exactly cutting some words:
```
You are trying to create a migration for Sqlite database d
```
I have to ctrl+c to kill the command and then on 2nd run it creates a migration properly.
As a sidenote it create a db file in the cwd, not in the proper path, so in git I have a good file and a one empty file. | process | lift save hangs on first call datasource db provider sqlite url file dev db when i run npx lift save name init on a new project it asks to create a sqlite db file and then hangs in terminal exactly cutting some words you are trying to create a migration for sqlite database d i have to ctrl c to kill the command and then on run it creates a migration properly as a sidenote it create a db file in the cwd not in the proper path so in git i have a good file and a one empty file | 1 |
8,273 | 11,431,168,960 | IssuesEvent | 2020-02-04 11:33:45 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | obsoletion: GO:0052462 modulation by host of symbiont phagocytosis | multi-species process obsoletion | GO:0052462 modulation by host of symbiont phagocytosis
Any process in which an organism modulates the frequency, rate or extent of phagocytosis, the engulfing by phagocytes of external particulate material, in the symbiont organism. The symbiont is defined as the smaller of the organisms involved in a symbiotic interaction.
(and regulation children)
no reference
no annotations
no evidence that the host modulates symbiont endocytosis
| 1.0 | obsoletion: GO:0052462 modulation by host of symbiont phagocytosis - GO:0052462 modulation by host of symbiont phagocytosis
Any process in which an organism modulates the frequency, rate or extent of phagocytosis, the engulfing by phagocytes of external particulate material, in the symbiont organism. The symbiont is defined as the smaller of the organisms involved in a symbiotic interaction.
(and regulation children)
no reference
no annotations
no evidence that the host modulates symbiont endocytosis
| process | obsoletion go modulation by host of symbiont phagocytosis go modulation by host of symbiont phagocytosis any process in which an organism modulates the frequency rate or extent of phagocytosis the engulfing by phagocytes of external particulate material in the symbiont organism the symbiont is defined as the smaller of the organisms involved in a symbiotic interaction and regulation children no reference no annotations no evidence that the host modulates symbiont endocytosis | 1 |
450,139 | 31,884,115,531 | IssuesEvent | 2023-09-16 18:35:58 | Proyectos-ISPC/proyecto-app-mobile | https://api.github.com/repos/Proyectos-ISPC/proyecto-app-mobile | closed | US2 Como responsable de la gestión de proyectos, quiero actualizar el archivo IEEE-830 con los detalles del nuevo proyecto de app móvil, para tener una referencia actualizada de los requerimientos a lo largo del ciclo de vida del proyecto. | documentation | - [ ] #TK00 Definir tareas dentro de la Historia de Usuario. | 1.0 | US2 Como responsable de la gestión de proyectos, quiero actualizar el archivo IEEE-830 con los detalles del nuevo proyecto de app móvil, para tener una referencia actualizada de los requerimientos a lo largo del ciclo de vida del proyecto. - - [ ] #TK00 Definir tareas dentro de la Historia de Usuario. | non_process | como responsable de la gestión de proyectos quiero actualizar el archivo ieee con los detalles del nuevo proyecto de app móvil para tener una referencia actualizada de los requerimientos a lo largo del ciclo de vida del proyecto definir tareas dentro de la historia de usuario | 0 |
10,417 | 13,210,542,680 | IssuesEvent | 2020-08-15 17:29:28 | timdeschryver/deprecation-manager | https://api.github.com/repos/timdeschryver/deprecation-manager | closed | How to creating and maintaining custom deprecation information and its related deprecations? | Process Flow | - [x] Add description on how to maintain the generated markdow files in readme.
- [x] Ensure groupname is a filename format
- [x] Add formatters
- [x] tag-based
- ? upsertheader comment
- [x] group-based
- [x] upsert header comment
- ? More control of maintaining groups
- ? delete group => should get mentioned and manually delete group in config and group file
The crawler can collect raw technical information regarding a deprecation in a codebase. However, to make the information meaningful to the user we have to provide additional information and maintain that information.
As a minimal set of information, we orientate start with the list @cartant suggested [here](https://github.com/ReactiveX/rxjs/issues/4776#issuecomment-522758542).
Crawled information:
```json
{
"version": "branch-name",
"date": "2020-07-28 21:22:24 +0200",
"deprecations": [
{
"path": "folder-name\\test.ts",
"lineNumber": 4,
"name": "variableName",
"kind": "VariableStatement",
"code": "const variableName = 0;",
"deprecationMessage": "// @deprecated variableName",
"pos": [
37,
65
],
"uuid": "1716777346",
"group": "group-name"
}
]
}
```
Additional manually generated information:
- implications
- refactoring
- exampleBefore
- exampleAfter
Possible Single Deprecation Layout:
<a id="deprecations-6.0.0-beta.4"></a>
## Deprecations introduced prior to 2018-03-29 (6.0.0-beta.4 )
<a id="deprecations-6.0.0-beta.4_never_method-to-constant"></a>
### Static method `never` deprecated in favor of constant `NEVER`
| `never` | Breaking-Change in version [ **7.x**](#breaking-changes-7.x_never-method) |
| --- | --- |
| Reason | Deprecated because it is more efficient? Some more text here... Some more text here... Some more text here... |
| Implications | Replacing `never` with `NEVER` |
**Usage <= 6.0.0-beta.3**
```typescript
import { never } from 'rxjs';
never();
```
**Usage >= 6.0.0-beta.4**
```typescript
import { NEVER } from 'rxjs';
NEVER;
```
Possible Grouped Deprecations Layout:
<!- - ruid--groups
- 6.0.0-beta.4
- formattedDeprecationMessage
- formattedDeprecationMessage
- formattedDeprecationMessage
- 6.0.1
- formattedDeprecationMessage
ruid-groups - ->
| | |
| --- | --- |
| Reason | Some more text here... Some more text here... Some more text here... |
| Implications | Replacing `never` with `NEVER` |
| 1.0 | How to creating and maintaining custom deprecation information and its related deprecations? - - [x] Add description on how to maintain the generated markdow files in readme.
- [x] Ensure groupname is a filename format
- [x] Add formatters
- [x] tag-based
- ? upsertheader comment
- [x] group-based
- [x] upsert header comment
- ? More control of maintaining groups
- ? delete group => should get mentioned and manually delete group in config and group file
The crawler can collect raw technical information regarding a deprecation in a codebase. However, to make the information meaningful to the user we have to provide additional information and maintain that information.
As a minimal set of information, we orientate start with the list @cartant suggested [here](https://github.com/ReactiveX/rxjs/issues/4776#issuecomment-522758542).
Crawled information:
```json
{
"version": "branch-name",
"date": "2020-07-28 21:22:24 +0200",
"deprecations": [
{
"path": "folder-name\\test.ts",
"lineNumber": 4,
"name": "variableName",
"kind": "VariableStatement",
"code": "const variableName = 0;",
"deprecationMessage": "// @deprecated variableName",
"pos": [
37,
65
],
"uuid": "1716777346",
"group": "group-name"
}
]
}
```
Additional manually generated information:
- implications
- refactoring
- exampleBefore
- exampleAfter
Possible Single Deprecation Layout:
<a id="deprecations-6.0.0-beta.4"></a>
## Deprecations introduced prior to 2018-03-29 (6.0.0-beta.4 )
<a id="deprecations-6.0.0-beta.4_never_method-to-constant"></a>
### Static method `never` deprecated in favor of constant `NEVER`
| `never` | Breaking-Change in version [ **7.x**](#breaking-changes-7.x_never-method) |
| --- | --- |
| Reason | Deprecated because it is more efficient? Some more text here... Some more text here... Some more text here... |
| Implications | Replacing `never` with `NEVER` |
**Usage <= 6.0.0-beta.3**
```typescript
import { never } from 'rxjs';
never();
```
**Usage >= 6.0.0-beta.4**
```typescript
import { NEVER } from 'rxjs';
NEVER;
```
Possible Grouped Deprecations Layout:
<!- - ruid--groups
- 6.0.0-beta.4
- formattedDeprecationMessage
- formattedDeprecationMessage
- formattedDeprecationMessage
- 6.0.1
- formattedDeprecationMessage
ruid-groups - ->
| | |
| --- | --- |
| Reason | Some more text here... Some more text here... Some more text here... |
| Implications | Replacing `never` with `NEVER` |
| process | how to creating and maintaining custom deprecation information and its related deprecations add description on how to maintain the generated markdow files in readme ensure groupname is a filename format add formatters tag based upsertheader comment group based upsert header comment more control of maintaining groups delete group should get mentioned and manually delete group in config and group file the crawler can collect raw technical information regarding a deprecation in a codebase however to make the information meaningful to the user we have to provide additional information and maintain that information as a minimal set of information we orientate start with the list cartant suggested crawled information json version branch name date deprecations path folder name test ts linenumber name variablename kind variablestatement code const variablename deprecationmessage deprecated variablename pos uuid group group name additional manually generated information implications refactoring examplebefore exampleafter possible single deprecation layout deprecations introduced prior to beta static method never deprecated in favor of constant never never breaking change in version breaking changes x never method reason deprecated because it is more efficient some more text here some more text here some more text here implications replacing never with never usage beta typescript import never from rxjs never usage beta typescript import never from rxjs never possible grouped deprecations layout ruid groups beta formatteddeprecationmessage formatteddeprecationmessage formatteddeprecationmessage formatteddeprecationmessage ruid groups reason some more text here some more text here some more text here implications replacing never with never | 1 |
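The crawled-JSON payload in the record above carries a `group` field per deprecation, and the discussion centers on a group-based formatter. A small sketch of the grouping step such a formatter would need is below — field names follow the example payload, but the function and the trimmed-down sample document are assumptions for illustration only:

```python
import json

# Trimmed-down version of the crawled payload shown in the issue above.
crawled = json.loads("""
{
  "version": "branch-name",
  "date": "2020-07-28 21:22:24 +0200",
  "deprecations": [
    {"name": "variableName", "group": "group-name",
     "deprecationMessage": "// @deprecated variableName"}
  ]
}
""")

def group_deprecations(payload):
    """Bucket crawled deprecations by their "group" field, preserving order."""
    groups = {}
    for dep in payload["deprecations"]:
        groups.setdefault(dep["group"], []).append(dep)
    return groups

groups = group_deprecations(crawled)
print(sorted(groups))  # → ['group-name']
```

From each bucket, a formatter could then emit one markdown file per group and upsert the header comment, which matches the "ensure groupname is a filename format" task in the checklist.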
18,017 | 24,032,773,645 | IssuesEvent | 2022-09-15 16:18:28 | googleapis/java-beyondcorp-clientconnectorservices | https://api.github.com/repos/googleapis/java-beyondcorp-clientconnectorservices | opened | Your .repo-metadata.json file has a problem 🤒 | type: process repo-metadata: lint | You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'beyondcorp-clientconnectorservices' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | 1.0 | Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'beyondcorp-clientconnectorservices' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | process | your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname beyondcorp clientconnectorservices invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions | 1 |
11,843 | 14,657,793,510 | IssuesEvent | 2020-12-28 16:23:08 | xcesco/kripton | https://api.github.com/repos/xcesco/kripton | closed | Support to classes in java.time | annotation-processor module file module orm module shared-preferences module | Include native support for the following classes of java.time (JDK 8):
- Duration
- Instant
- LocalDate
- LocalDateTime
- LocalTime
- MonthDay
- OffsetDateTime
- OffsetTime
- Period
- Year
- YearMonth
- ZonedDateTime
- ZoneId
- ZoneOffset | 1.0 | Support to classes in java.time - Include native support for the following classes of java.time (JDK 8):
- Duration
- Instant
- LocalDate
- LocalDateTime
- LocalTime
- MonthDay
- OffsetDateTime
- OffsetTime
- Period
- Year
- YearMonth
- ZonedDateTime
- ZoneId
- ZoneOffset | process | support to classes in java time include native support for the following classes of java time jdk duration instant localdate localdatetime localtime monthday offsetdatetime offsettime period year yearmonth zoneddatetime zoneid zoneoffset | 1 |
7,999 | 11,188,336,754 | IssuesEvent | 2020-01-02 04:17:24 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Bug or missing step? | Pri2 automation/svc cxp process-automation/subsvc product-question triaged | I've followed the instructions on this page to the letter 5 times in a row (deleting everything and starting over each time). But after the Watch-NewFile runbook is triggered, it updates the variable, but fails to call the Process-NewFile runbook. The watcher stream output shows "Error: The remote server returned an error: (400) Bad Request." There is no action job output. Am I missing something?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ff71649c-9431-4a1b-22e7-eac4315f9c5b
* Version Independent ID: a38685e3-5dbf-8d25-1134-c56376a42017
* Content: [Create a watcher task in the Azure Automation account](https://docs.microsoft.com/en-us/azure/automation/automation-watchers-tutorial)
* Content Source: [articles/automation/automation-watchers-tutorial.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-watchers-tutorial.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte** | 1.0 | Bug or missing step? - I've followed the instructions on this page to the letter 5 times in a row (deleting everything and starting over each time). But after the Watch-NewFile runbook is triggered, it updates the variable, but fails to call the Process-NewFile runbook. The watcher stream output shows "Error: The remote server returned an error: (400) Bad Request." There is no action job output. Am I missing something?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ff71649c-9431-4a1b-22e7-eac4315f9c5b
* Version Independent ID: a38685e3-5dbf-8d25-1134-c56376a42017
* Content: [Create a watcher task in the Azure Automation account](https://docs.microsoft.com/en-us/azure/automation/automation-watchers-tutorial)
* Content Source: [articles/automation/automation-watchers-tutorial.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-watchers-tutorial.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte** | process | bug or missing step i ve followed the instructions on this page to the letter times in a row deleting everything and starting over each time but after the watch newfile runbook is triggered it updates the variable but fails to call the process newfile runbook the watcher stream output shows error the remote server returned an error bad request there is no action job output am i missing something document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte | 1 |
18,797 | 24,698,141,660 | IssuesEvent | 2022-10-19 13:34:24 | km4ack/pi-build | https://api.github.com/repos/km4ack/pi-build | closed | Create VARA sub category | enhancement in process | Need to create a VARA sub category to contain all of the VARA related items. See [this section of code](https://github.com/km4ack/pi-build/blob/7d5c407c14e3bceec672b06b1c3e85f64bba137f/menu-update#L130). | 1.0 | Create VARA sub category - Need to create a VARA sub category to contain all of the VARA related items. See [this section of code](https://github.com/km4ack/pi-build/blob/7d5c407c14e3bceec672b06b1c3e85f64bba137f/menu-update#L130). | process | create vara sub category need to create a vara sub category to contain all of the vara related items see | 1 |
206,575 | 23,392,675,252 | IssuesEvent | 2022-08-11 19:29:17 | SurenG-GithubCom/easybug-priv | https://api.github.com/repos/SurenG-GithubCom/easybug-priv | opened | jstl-1.2.jar: 1 vulnerabilities (highest severity is: 7.3) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jstl-1.2.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /8_LHIPWO/downloadResource_OYIRJX/20220811192828/jstl-1.2.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/SurenG-GithubCom/easybug-priv/commit/e2f3afcefe608d1fdf21361939ca2d02b2c05db5">e2f3afcefe608d1fdf21361939ca2d02b2c05db5</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2015-0254](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-0254) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.3 | jstl-1.2.jar | Direct | org.apache.taglibs:taglibs-standard-impl:1.2.3 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2015-0254</summary>
### Vulnerable Library - <b>jstl-1.2.jar</b></p>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /8_LHIPWO/downloadResource_OYIRJX/20220811192828/jstl-1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jstl-1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SurenG-GithubCom/easybug-priv/commit/e2f3afcefe608d1fdf21361939ca2d02b2c05db5">e2f3afcefe608d1fdf21361939ca2d02b2c05db5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Standard Taglibs before 1.2.3 allows remote attackers to execute arbitrary code or conduct external XML entity (XXE) attacks via a crafted XSLT extension in a (1) <x:parse> or (2) <x:transform> JSTL XML tag.
<p>Publish Date: 2015-03-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-0254>CVE-2015-0254</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tomcat.apache.org/taglibs/standard/">https://tomcat.apache.org/taglibs/standard/</a></p>
<p>Release Date: 2015-03-09</p>
<p>Fix Resolution: org.apache.taglibs:taglibs-standard-impl:1.2.3</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | True | jstl-1.2.jar: 1 vulnerabilities (highest severity is: 7.3) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jstl-1.2.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /8_LHIPWO/downloadResource_OYIRJX/20220811192828/jstl-1.2.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/SurenG-GithubCom/easybug-priv/commit/e2f3afcefe608d1fdf21361939ca2d02b2c05db5">e2f3afcefe608d1fdf21361939ca2d02b2c05db5</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2015-0254](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-0254) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.3 | jstl-1.2.jar | Direct | org.apache.taglibs:taglibs-standard-impl:1.2.3 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2015-0254</summary>
### Vulnerable Library - <b>jstl-1.2.jar</b></p>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /8_LHIPWO/downloadResource_OYIRJX/20220811192828/jstl-1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jstl-1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SurenG-GithubCom/easybug-priv/commit/e2f3afcefe608d1fdf21361939ca2d02b2c05db5">e2f3afcefe608d1fdf21361939ca2d02b2c05db5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Standard Taglibs before 1.2.3 allows remote attackers to execute arbitrary code or conduct external XML entity (XXE) attacks via a crafted XSLT extension in a (1) <x:parse> or (2) <x:transform> JSTL XML tag.
<p>Publish Date: 2015-03-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-0254>CVE-2015-0254</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tomcat.apache.org/taglibs/standard/">https://tomcat.apache.org/taglibs/standard/</a></p>
<p>Release Date: 2015-03-09</p>
<p>Fix Resolution: org.apache.taglibs:taglibs-standard-impl:1.2.3</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | non_process | jstl jar vulnerabilities highest severity is vulnerable library jstl jar path to dependency file pom xml path to vulnerable library lhipwo downloadresource oyirjx jstl jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high jstl jar direct org apache taglibs taglibs standard impl details cve vulnerable library jstl jar path to dependency file pom xml path to vulnerable library lhipwo downloadresource oyirjx jstl jar dependency hierarchy x jstl jar vulnerable library found in head commit a href found in base branch master vulnerability details apache standard taglibs before allows remote attackers to execute arbitrary code or conduct external xml entity xxe attacks via a crafted xslt extension in a or jstl xml tag publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache taglibs taglibs standard impl rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue | 0 |
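The base metrics listed in the record above (AV:N / AC:L / PR:N / UI:N / S:U / C:L / I:L / A:L) do reproduce its 7.3 severity. A minimal sketch of the CVSS v3.0 base-score arithmetic for a scope-unchanged vector is below — the weight constants are the published v3.0 values, but this is an illustration, not an official scoring tool:

```python
import math

# CVSS v3.0 metric weights (scope-unchanged PR values only).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"N": 0.0, "L": 0.22, "H": 0.56},
}

def base_score(av, ac, pr, ui, c, i, a):
    """Base score for a scope-unchanged CVSS v3.0 vector."""
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    exploitability = (8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                      * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui])
    if impact <= 0:
        return 0.0
    # "Round up" to one decimal place, as the spec requires.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

print(base_score("N", "L", "N", "N", "L", "L", "L"))  # → 7.3
```

Impact works out to about 3.37 and exploitability to about 3.89; their sum, rounded up to one decimal, gives the 7.3 in the record.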
254,490 | 21,789,388,882 | IssuesEvent | 2022-05-14 16:46:09 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | kv/kvserver: TestInitRaftGroupOnRequest failed | C-test-failure O-robot branch-master T-kv | kv/kvserver.TestInitRaftGroupOnRequest [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=5021194&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=5021194&tab=artifacts#/) on master @ [2fdc0a62df08829dd96901fffb5fb1fbb0b0997d](https://github.com/cockroachdb/cockroach/commits/2fdc0a62df08829dd96901fffb5fb1fbb0b0997d):
```
=== RUN TestInitRaftGroupOnRequest
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/751d67000aac5f3394c2369309253f02/logTestInitRaftGroupOnRequest3240201838
test_log_scope.go:80: use -show-logs to present logs inline
client_raft_test.go:4259: [NotLeaseHolderError] lease acquisition attempt lost to another lease, which has expired in the meantime; r45: replica (n2,s2):2LEARNER not lease holder; lease holder unknown
panic.go:642: -- test log scope end --
--- FAIL: TestInitRaftGroupOnRequest (5.15s)
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss,deadlock
</p>
</details>
/cc @cockroachdb/kv
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestInitRaftGroupOnRequest.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 1.0 | kv/kvserver: TestInitRaftGroupOnRequest failed - kv/kvserver.TestInitRaftGroupOnRequest [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=5021194&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=5021194&tab=artifacts#/) on master @ [2fdc0a62df08829dd96901fffb5fb1fbb0b0997d](https://github.com/cockroachdb/cockroach/commits/2fdc0a62df08829dd96901fffb5fb1fbb0b0997d):
```
=== RUN TestInitRaftGroupOnRequest
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/751d67000aac5f3394c2369309253f02/logTestInitRaftGroupOnRequest3240201838
test_log_scope.go:80: use -show-logs to present logs inline
client_raft_test.go:4259: [NotLeaseHolderError] lease acquisition attempt lost to another lease, which has expired in the meantime; r45: replica (n2,s2):2LEARNER not lease holder; lease holder unknown
panic.go:642: -- test log scope end --
--- FAIL: TestInitRaftGroupOnRequest (5.15s)
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss,deadlock
</p>
</details>
/cc @cockroachdb/kv
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestInitRaftGroupOnRequest.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_process | kv kvserver testinitraftgrouponrequest failed kv kvserver testinitraftgrouponrequest with on master run testinitraftgrouponrequest test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline client raft test go lease acquisition attempt lost to another lease which has expired in the meantime replica not lease holder lease holder unknown panic go test log scope end fail testinitraftgrouponrequest help see also parameters in this failure tags bazel gss deadlock cc cockroachdb kv | 0 |
197,273 | 6,953,626,935 | IssuesEvent | 2017-12-06 21:43:01 | emory-libraries/ezpaarse-platforms | https://api.github.com/repos/emory-libraries/ezpaarse-platforms | opened | Tibetan Buddhist Resource Center | enhancement Low Priority | ### Example:star::star: :
Title Tibetan Buddhist Resource Center
URL https://www.tbrc.org/#!specials/core
HJ www.tbrc.org
DJ tbrc.org
https://proxy.library.emory.edu/login?url=https://www.tbrc.org/#!specials/core
### Priority:
Low
### Subscriber (Library):
Woodruff
| 1.0 | Tibetan Buddhist Resource Center - ### Example:star::star: :
Title Tibetan Buddhist Resource Center
URL https://www.tbrc.org/#!specials/core
HJ www.tbrc.org
DJ tbrc.org
https://proxy.library.emory.edu/login?url=https://www.tbrc.org/#!specials/core
### Priority:
Low
### Subscriber (Library):
Woodruff
| non_process | tibetan buddhist resource center example star star title tibetan buddhist resource center url hj dj tbrc org priority low subscriber library woodruff | 0 |
9,162 | 24,142,973,880 | IssuesEvent | 2022-09-21 16:11:39 | Azure/azure-sdk | https://api.github.com/repos/Azure/azure-sdk | opened | Board Review: Azure Communication Services (SPOOL) Call Recording Status (Android & iOS) | architecture board-review | ## Background
Currently Azure Communication Calling SDKs already expose `bool isRecordingActive` property and a `onIsRecordingActiveChanged` event. This allows the client apps to e.g., show "Recording started..." banner to users.
However, the bool property makes it impossible to distinguish between various actual states of the recording lifecycle: "NotStarted", "Started", "Paused", "Ended".
This new API introduces a new enum for `RecordingState` and events for it.
## Contacts and Timeline
* Responsible service team: ACS Call Automation Media
* Main contacts: @chrwhit
* Expected code complete date: TBD
* Expected release date: TBD
## About the Service
* Link to documentation introducing/describing the service: [Call Recording API Quickstart](https://learn.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/call-recording-sample)
* Link to the service REST APIs: [communicationservicescallingserver.json](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/communication/data-plane/CallingServer/preview/2021-08-30-preview/communicationservicescallingserver.json)
## About the client library
* Name of the client library: Azure Communication Calling SDK
* Languages for this review: Android & iOS
### Android
* APIView Link: https://apiview.dev/Assemblies/Review/c9fca0a4fc2b46db807583eb74c0a760/0b033b56f29c4dc79d1716fba9707b30?diffRevisionId=0cdebffdfb7542929b29fc86fe8a17f9&doc=False&diffOnly=True
* Link to Champion Scenarios/Quickstart samples: TBD
### iOS
* APIView Link: https://apiview.dev/Assemblies/Review/bbe0cc58c12c406997a52650be4975fa/23c5dce4980447428f49f50c169575c4?diffRevisionId=0961b8c8b66447b1bc2d77c62741d8d1&doc=False&diffOnly=True
* Link to Champion Scenarios/Quickstart samples: TBD
| 1.0 | Board Review: Azure Communication Services (SPOOL) Call Recording Status (Android & iOS) | non_process | 0 |
344,679 | 24,823,209,819 | IssuesEvent | 2022-10-25 18:14:47 | pharmaverse/admiral | https://api.github.com/repos/pharmaverse/admiral | closed | Remove library(tibble) from test-derive_joined.R and use tibble::tribble in function calls | documentation release Q4-2022 | ### Please select a category the issue is focused on?
Other
### Let us know where something needs a refresh or put your idea here!

| 1.0 | non_process | 0 |