Columns (dtype, value summary):

Unnamed: 0    int64         0 – 832k
id            float64       2.49B – 32.1B
type          stringclasses 1 value
created_at    stringlengths 19 – 19
repo          stringlengths 7 – 112
repo_url      stringlengths 36 – 141
action        stringclasses 3 values
title         stringlengths 1 – 744
labels        stringlengths 4 – 574
body          stringlengths 9 – 211k
index         stringclasses 10 values
text_combine  stringlengths 96 – 211k
label         stringclasses 2 values
text          stringlengths 96 – 188k
binary_label  int64         0 – 1
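The per-column summaries above ("stringlengths" ranges, "stringclasses" counts) can be recomputed from the raw rows. A minimal sketch over a toy two-row sample (the real file is not named here, so the sample rows are invented for illustration):

```python
# Toy two-row sample mirroring the columns above (real rows omitted).
rows = [
    {"title": "Release 2.1.3", "label": "process", "binary_label": 1},
    {"title": "Turn on snippetgen by default", "label": "process", "binary_label": 1},
]

# "stringlengths" summary: min/max character length of a string column.
lens = [len(r["title"]) for r in rows]
print(min(lens), max(lens))              # -> 13 29

# "stringclasses" summary: number of distinct values in a column.
print(len({r["label"] for r in rows}))   # -> 1
```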

Unnamed: 0:   16,430
id:           21,300,782,849
type:         IssuesEvent
created_at:   2022-04-15 02:36:36
repo:         bazelbuild/bazel
repo_url:     https://api.github.com/repos/bazelbuild/bazel
action:       opened
title:        Can we get rid of all the "breaking-change" and "migration" labels?
labels:       type: process area-EngProd
They are clutter and, AFAIK, we don't use them for anything any more with the switch to LTS releases. We can also remove that code from bazelisk cc: jdob@ meteorcloudy@
index:        1.0
Can we get rid of all the "breaking-change" and "migration" labels? - They are clutter and, AFAIK, we don't use them for anything any more with the switch to LTS releases. We can also remove that code from bazelisk cc: jdob@ meteorcloudy@
label:        process
can we get rid of all the breaking change and migration labels they are clutter and afaik we don t use them for anything any more with the switch to lts releases we can also remove that code from bazelisk cc jdob meteorcloudy
binary_label: 1
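Comparing `text_combine` with `text` in the row above suggests the cleaning pipeline: lowercase, drop URLs, keep letters only, collapse whitespace. A guessed reconstruction (the dump does not document the actual preprocessing code):

```python
import re

def normalize(text: str) -> str:
    # Guessed reconstruction of the cleaning behind the `text` column:
    # lowercase, strip URLs, replace non-letters with spaces, collapse runs.
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # strip URLs
    text = re.sub(r"[^a-z\s]", " ", text)       # keep letters only
    return re.sub(r"\s+", " ", text).strip()    # collapse whitespace

combined = ('Can we get rid of all the "breaking-change" and "migration" labels? - '
            "They are clutter and, AFAIK, we don't use them for anything any more")
print(normalize(combined))
```

On the fragment above this reproduces the `text` column exactly, including "breaking change" and "don t".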

Unnamed: 0:   48,352
id:           12,195,735,792
type:         IssuesEvent
created_at:   2020-04-29 17:52:35
repo:         kwk/test-llvm-bz-import-5
repo_url:     https://api.github.com/repos/kwk/test-llvm-bz-import-5
action:       closed
title:        cmake build enables exceptions in more files than configure
labels:       BZ-BUG-STATUS: RESOLVED BZ-RESOLUTION: FIXED Build scripts/cmake dummy import from bugzilla
This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=9886.
index:        1.0
cmake build enables exceptions in more files than configure - This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=9886.
label:        non_process
cmake build enables exceptions in more files than configure this issue was imported from bugzilla
binary_label: 0

Unnamed: 0:   3,424
id:           2,610,062,413
type:         IssuesEvent
created_at:   2015-02-26 18:18:21
repo:         chrsmith/jsjsj122
repo_url:     https://api.github.com/repos/chrsmith/jsjsj122
action:       opened
title:        黄岩治疗不育去哪里专业
labels:       auto-migrated Priority-Medium Type-Defect
``` 黄岩治疗不育去哪里专业【台州五洲生殖医院】24小时健康咨 询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州 市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108� ��118、198及椒江一金清公交车直达枫南小区,乘坐107、105、109 、112、901、 902公交车到星星广场下车,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:37
index:        1.0
黄岩治疗不育去哪里专业 - ``` 黄岩治疗不育去哪里专业【台州五洲生殖医院】24小时健康咨 询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州 市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108� ��118、198及椒江一金清公交车直达枫南小区,乘坐107、105、109 、112、901、 902公交车到星星广场下车,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:37
label:        non_process
黄岩治疗不育去哪里专业 黄岩治疗不育去哪里专业【台州五洲生殖医院】 询热线 微信号tzwzszyy 医院地址 台州 (枫南大转盘旁)乘车线路 、 � �� 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at
binary_label: 0
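Across the rows so far, `binary_label` tracks `label` exactly (process → 1, non_process → 0). A one-line mapping sketch, assuming that correspondence holds for the whole set:

```python
def to_binary(label: str) -> int:
    # Mapping inferred from the sample rows; not documented by the dump itself.
    return 1 if label == "process" else 0

print([to_binary(l) for l in ["process", "non_process", "non_process"]])  # -> [1, 0, 0]
```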

Unnamed: 0:   13,620
id:           16,236,379,454
type:         IssuesEvent
created_at:   2021-05-07 01:36:01
repo:         NixOS/nixpkgs
repo_url:     https://api.github.com/repos/NixOS/nixpkgs
action:       opened
title:        21.05 Feature Freeze
labels:       6.topic: release process
Pinging all language, framework, and ecosystem owners to consolidate feature freeze items for the 21.05 release. Please mention any items you see blocking the 21.05 release in your given domains. The branch off date will be the on the 21st of May. So there is still some time to address these items. Nix/nix-cli ecosystem: @edolstra @grahamc @nbp @Profpatsch Mobile: @samueldr Nixos Modules / internals : @Infinisil @matthewbauer @Ericson2314 @orivej Nixos tests: @tfc Emacs: @adisbladis Erlang: @gleber Go: @kalbasit @Mic92 @zowoq Haskell: @cdepillabout @sternenseemann Python: @FRidh Perl: :( php: @NixOS/php Ruby: @alyssais rust: @bhipple @Mic92 @andir @LnL7 Darwin: @NixOS/darwin-maintainers bazel: @mboes blockchains @mmahut podman: @NixOS/podman Gnome: @jtojnar @NixOS/gnome Qt / KDE: @ttuegel @NixOS/qt-kde Postgres: @thoughtpolice in case I forgot anyone: @NixOS/nixpkgs-committers Anyone is free to propose potential blockers, but I would ask that you remember that this is a volunteer organization. Unless someone is likely to "pick up" the work and address the concern in the coming weeks, please only state critical issues. Or if anyone is active in a given ecosystem and I did not mention them, then they are free to state that there's unlikely to be any concerns as well.
index:        1.0
21.05 Feature Freeze - Pinging all language, framework, and ecosystem owners to consolidate feature freeze items for the 21.05 release. Please mention any items you see blocking the 21.05 release in your given domains. The branch off date will be the on the 21st of May. So there is still some time to address these items. Nix/nix-cli ecosystem: @edolstra @grahamc @nbp @Profpatsch Mobile: @samueldr Nixos Modules / internals : @Infinisil @matthewbauer @Ericson2314 @orivej Nixos tests: @tfc Emacs: @adisbladis Erlang: @gleber Go: @kalbasit @Mic92 @zowoq Haskell: @cdepillabout @sternenseemann Python: @FRidh Perl: :( php: @NixOS/php Ruby: @alyssais rust: @bhipple @Mic92 @andir @LnL7 Darwin: @NixOS/darwin-maintainers bazel: @mboes blockchains @mmahut podman: @NixOS/podman Gnome: @jtojnar @NixOS/gnome Qt / KDE: @ttuegel @NixOS/qt-kde Postgres: @thoughtpolice in case I forgot anyone: @NixOS/nixpkgs-committers Anyone is free to propose potential blockers, but I would ask that you remember that this is a volunteer organization. Unless someone is likely to "pick up" the work and address the concern in the coming weeks, please only state critical issues. Or if anyone is active in a given ecosystem and I did not mention them, then they are free to state that there's unlikely to be any concerns as well.
label:        process
feature freeze pinging all language framework and ecosystem owners to consolidate feature freeze items for the release please mention any items you see blocking the release in your given domains the branch off date will be the on the of may so there is still some time to address these items nix nix cli ecosystem edolstra grahamc nbp profpatsch mobile samueldr nixos modules internals infinisil matthewbauer orivej nixos tests tfc emacs adisbladis erlang gleber go kalbasit zowoq haskell cdepillabout sternenseemann python fridh perl php nixos php ruby alyssais rust bhipple andir darwin nixos darwin maintainers bazel mboes blockchains mmahut podman nixos podman gnome jtojnar nixos gnome qt kde ttuegel nixos qt kde postgres thoughtpolice in case i forgot anyone nixos nixpkgs committers anyone is free to propose potential blockers but i would ask that you remember that this is a volunteer organization unless someone is likely to pick up the work and address the concern in the coming weeks please only state critical issues or if anyone is active in a given ecosystem and i did not mention them then they are free to state that there s unlikely to be any concerns as well
binary_label: 1

Unnamed: 0:   243,684
id:           26,287,392,549
type:         IssuesEvent
created_at:   2023-01-08 01:04:51
repo:         Pio1006/envoy
repo_url:     https://api.github.com/repos/Pio1006/envoy
action:       closed
title:        CVE-2021-3121 (High) detected in github.com/envoyproxy/protoc-gen-validate-v0.1.0 - autoclosed
labels:       security vulnerability
## CVE-2021-3121 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/envoyproxy/protoc-gen-validate-v0.1.0</b></p></summary> <p>protoc plugin to generate polyglot message validators</p> <p>Library home page: <a href="https://proxy.golang.org/github.com/envoyproxy/protoc-gen-validate/@v/v0.1.0.zip">https://proxy.golang.org/github.com/envoyproxy/protoc-gen-validate/@v/v0.1.0.zip</a></p> <p> Dependency Hierarchy: - github.com/envoyproxy/go-control-plane-v0.9.0 (Root Library) - :x: **github.com/envoyproxy/protoc-gen-validate-v0.1.0** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Pio1006/envoy/commit/d853fd7abd23b213e8ecb1eded4fd77944aa8ed5">d853fd7abd23b213e8ecb1eded4fd77944aa8ed5</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in GoGo Protobuf before 1.3.2. plugin/unmarshal/unmarshal.go lacks certain index validation, aka the "skippy peanut butter" issue. <p>Publish Date: 2021-01-11 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3121>CVE-2021-3121</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3121">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3121</a></p> <p>Release Date: 2021-01-11</p> <p>Fix Resolution: v1.3.2</p> </p> </details> <p></p>
index:        True
CVE-2021-3121 (High) detected in github.com/envoyproxy/protoc-gen-validate-v0.1.0 - autoclosed - ## CVE-2021-3121 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/envoyproxy/protoc-gen-validate-v0.1.0</b></p></summary> <p>protoc plugin to generate polyglot message validators</p> <p>Library home page: <a href="https://proxy.golang.org/github.com/envoyproxy/protoc-gen-validate/@v/v0.1.0.zip">https://proxy.golang.org/github.com/envoyproxy/protoc-gen-validate/@v/v0.1.0.zip</a></p> <p> Dependency Hierarchy: - github.com/envoyproxy/go-control-plane-v0.9.0 (Root Library) - :x: **github.com/envoyproxy/protoc-gen-validate-v0.1.0** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Pio1006/envoy/commit/d853fd7abd23b213e8ecb1eded4fd77944aa8ed5">d853fd7abd23b213e8ecb1eded4fd77944aa8ed5</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in GoGo Protobuf before 1.3.2. plugin/unmarshal/unmarshal.go lacks certain index validation, aka the "skippy peanut butter" issue. 
<p>Publish Date: 2021-01-11 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3121>CVE-2021-3121</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3121">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3121</a></p> <p>Release Date: 2021-01-11</p> <p>Fix Resolution: v1.3.2</p> </p> </details> <p></p>
label:        non_process
cve high detected in github com envoyproxy protoc gen validate autoclosed cve high severity vulnerability vulnerable library github com envoyproxy protoc gen validate protoc plugin to generate polyglot message validators library home page a href dependency hierarchy github com envoyproxy go control plane root library x github com envoyproxy protoc gen validate vulnerable library found in head commit a href found in base branch main vulnerability details an issue was discovered in gogo protobuf before plugin unmarshal unmarshal go lacks certain index validation aka the skippy peanut butter issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
binary_label: 0

Unnamed: 0:   21,451
id:           29,489,266,483
type:         IssuesEvent
created_at:   2023-06-02 12:17:05
repo:         nanoframework/Home
repo_url:     https://api.github.com/repos/nanoframework/Home
action:       closed
title:        Typeof for array types returns invalid data
labels:       Type: Bug Status: In progress Priority: High Area: Metadata Processor
### Target name(s) ESP32_REV0 ### Firmware version 1.8.0.629 ### Was working before? On which version? Yes, No idea about version. ### Device capabilities _No response_ ### Description Not quite sure what's going on. One time I've got "string" as type. It was working previously, because benchmarks for JSON project has int[] and short[] deserialization test. Also, based on nanoFramework.Json.JsonConvert.PopulateObject typeof(int[]).GetElementType() should return "int[]" type. ### How to reproduce var type = typeof(short[]); Console.WriteLine(type.ToString()); Console.WriteLine(type.Name); Console.WriteLine(type.GetElementType().FullName); ### Expected behaviour _No response_ ### Screenshots ![image](https://user-images.githubusercontent.com/19893965/193697263-b9f4b1e6-1c51-42a7-935a-d67c711ee4b3.png) ### Aditional information _No response_
index:        1.0
Typeof for array types returns invalid data - ### Target name(s) ESP32_REV0 ### Firmware version 1.8.0.629 ### Was working before? On which version? Yes, No idea about version. ### Device capabilities _No response_ ### Description Not quite sure what's going on. One time I've got "string" as type. It was working previously, because benchmarks for JSON project has int[] and short[] deserialization test. Also, based on nanoFramework.Json.JsonConvert.PopulateObject typeof(int[]).GetElementType() should return "int[]" type. ### How to reproduce var type = typeof(short[]); Console.WriteLine(type.ToString()); Console.WriteLine(type.Name); Console.WriteLine(type.GetElementType().FullName); ### Expected behaviour _No response_ ### Screenshots ![image](https://user-images.githubusercontent.com/19893965/193697263-b9f4b1e6-1c51-42a7-935a-d67c711ee4b3.png) ### Aditional information _No response_
label:        process
typeof for array types returns invalid data target name s firmware version was working before on which version yes no idea about version device capabilities no response description not quite sure what s going on one time i ve got string as type it was working previously because benchmarks for json project has int and short deserialization test also based on nanoframework json jsonconvert populateobject typeof int getelementtype should return int type how to reproduce var type typeof short console writeline type tostring console writeline type name console writeline type getelementtype fullname expected behaviour no response screenshots aditional information no response
binary_label: 1

Unnamed: 0:   298,184
id:           22,468,210,671
type:         IssuesEvent
created_at:   2022-06-22 05:11:23
repo:         flutter/flutter
repo_url:     https://api.github.com/repos/flutter/flutter
action:       opened
title:        ImageCache.clear should maybe be called when views are detached from the engine
labels:       severe: performance a: existing-apps customer: google documentation perf: memory
Currently, when a view is detached from the FlutterEngine, the rasterizer is torn down, and after https://github.com/flutter/engine/pull/33890, the skia resource cache is cleared as well. However, the image cache isn't currently and it's not as clear what the correct behavior should be. When the engine is retained but the view is no longer attached (such as in hybrid use cases), it's unclear what the user's intentions are. 1- the application may be weaving in and out of Flutter views, in which case it may be more beneficial to keep the image cache 2- the application may not show a Flutter UI for a long time or when it does show a Flutter UI again, it would be for a different screen with minimal re-use of the previous screen's cache For case 2, it would be great to have this as a global configurable (perhaps as an option in the embedding) such that it would require no additional wiring to not leave resident memory in the process when no Flutter screens are shown. For case 1, the solution could be as simple as letting the user decide whether to listen to listen to https://api.flutter.dev/flutter/widgets/WidgetsBindingObserver-class.html and call ImageCache.clear when `detached`. But we should definitely have clear docs for this since it would otherwise be very difficult to discover. @dnfield @zanderso
index:        1.0
ImageCache.clear should maybe be called when views are detached from the engine - Currently, when a view is detached from the FlutterEngine, the rasterizer is torn down, and after https://github.com/flutter/engine/pull/33890, the skia resource cache is cleared as well. However, the image cache isn't currently and it's not as clear what the correct behavior should be. When the engine is retained but the view is no longer attached (such as in hybrid use cases), it's unclear what the user's intentions are. 1- the application may be weaving in and out of Flutter views, in which case it may be more beneficial to keep the image cache 2- the application may not show a Flutter UI for a long time or when it does show a Flutter UI again, it would be for a different screen with minimal re-use of the previous screen's cache For case 2, it would be great to have this as a global configurable (perhaps as an option in the embedding) such that it would require no additional wiring to not leave resident memory in the process when no Flutter screens are shown. For case 1, the solution could be as simple as letting the user decide whether to listen to listen to https://api.flutter.dev/flutter/widgets/WidgetsBindingObserver-class.html and call ImageCache.clear when `detached`. But we should definitely have clear docs for this since it would otherwise be very difficult to discover. @dnfield @zanderso
label:        non_process
imagecache clear should maybe be called when views are detached from the engine currently when a view is detached from the flutterengine the rasterizer is torn down and after the skia resource cache is cleared as well however the image cache isn t currently and it s not as clear what the correct behavior should be when the engine is retained but the view is no longer attached such as in hybrid use cases it s unclear what the user s intentions are the application may be weaving in and out of flutter views in which case it may be more beneficial to keep the image cache the application may not show a flutter ui for a long time or when it does show a flutter ui again it would be for a different screen with minimal re use of the previous screen s cache for case it would be great to have this as a global configurable perhaps as an option in the embedding such that it would require no additional wiring to not leave resident memory in the process when no flutter screens are shown for case the solution could be as simple as letting the user decide whether to listen to listen to and call imagecache clear when detached but we should definitely have clear docs for this since it would otherwise be very difficult to discover dnfield zanderso
binary_label: 0

Unnamed: 0:   18,298
id:           24,411,667,726
type:         IssuesEvent
created_at:   2022-10-05 12:54:11
repo:         bazelbuild/bazel
repo_url:     https://api.github.com/repos/bazelbuild/bazel
action:       opened
title:        Delete flag incompatible_disallow_java_import_empty_jars
labels:       P3 type: process team-Rules-Java
Tracking issue for the removal of _`--incompatible_disallow_java_import_empty_jars`_ after it's flipped. Blocked on the actual flip (https://github.com/bazelbuild/bazel/issues/16385).
index:        1.0
Delete flag incompatible_disallow_java_import_empty_jars - Tracking issue for the removal of _`--incompatible_disallow_java_import_empty_jars`_ after it's flipped. Blocked on the actual flip (https://github.com/bazelbuild/bazel/issues/16385).
label:        process
delete flag incompatible disallow java import empty jars tracking issue for the removal of incompatible disallow java import empty jars after it s flipped blocked on the actual flip
binary_label: 1

Unnamed: 0:   15,955
id:           20,172,876,257
type:         IssuesEvent
created_at:   2022-02-10 12:02:56
repo:         LinasVidziunas/Unsupervised-lesion-detection-with-multi-view-MRI-and-autoencoders
repo_url:     https://api.github.com/repos/LinasVidziunas/Unsupervised-lesion-detection-with-multi-view-MRI-and-autoencoders
action:       opened
title:        Remove abnormal slices from the train set
labels:       bug Data preprocessing
Abnormal slices get saved when data has been processed. This is a big bug.
index:        1.0
Remove abnormal slices from the train set - Abnormal slices get saved when data has been processed. This is a big bug.
label:        process
remove abnormal slices from the train set abnormal slices get saved when data has been processed this is a big bug
binary_label: 1

Unnamed: 0:   240,055
id:           7,800,363,215
type:         IssuesEvent
created_at:   2018-06-09 08:30:06
repo:         tine20/Tine-2.0-Open-Source-Groupware-and-CRM
repo_url:     https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM
action:       closed
title:        0008850: FF: combobox pasting passes validation when combo is invisible
labels:       Bug Mantis Tinebase JavaScript high priority
**Reported by pschuele on 29 Aug 2013 09:56** **Version:** Kristina (2013.03.7) FF: combobox pasting passes validation when combo is invisble for example when user switched to another tab. - maybe we should blur all combos when switching tabs - Conny added something to Ext.fixes that addressed a similar problem (see #8616) **Steps to reproduce:** (customfields must be enabled in Timesheets) 1. switch to Timetracker/Timesheets 2. create new Timesheet 3. make sure you have something in the clipboard 4. enter some description 5. set focus to timeaccount combobox 6. switch to history tab (focus remains in combo) 7. CTRL-V to paste 8. switch to main tab 9. press OK -&gt; some fields timeaccount_id have invalid content
index:        1.0
0008850: FF: combobox pasting passes validation when combo is invisible - **Reported by pschuele on 29 Aug 2013 09:56** **Version:** Kristina (2013.03.7) FF: combobox pasting passes validation when combo is invisble for example when user switched to another tab. - maybe we should blur all combos when switching tabs - Conny added something to Ext.fixes that addressed a similar problem (see #8616) **Steps to reproduce:** (customfields must be enabled in Timesheets) 1. switch to Timetracker/Timesheets 2. create new Timesheet 3. make sure you have something in the clipboard 4. enter some description 5. set focus to timeaccount combobox 6. switch to history tab (focus remains in combo) 7. CTRL-V to paste 8. switch to main tab 9. press OK -&gt; some fields timeaccount_id have invalid content
label:        non_process
ff combobox pasting passes validation when combo is invisible reported by pschuele on aug version kristina ff combobox pasting passes validation when combo is invisble for example when user switched to another tab maybe we should blur all combos when switching tabs conny added something to ext fixes that addressed a similar problem see steps to reproduce customfields must be enabled in timesheets switch to timetracker timesheets create new timesheet make sure you have something in the clipboard enter some description set focus to timeaccount combobox switch to history tab focus remains in combo ctrl v to paste switch to main tab press ok gt some fields timeaccount id have invalid content
binary_label: 0

Unnamed: 0:   238,407
id:           19,717,619,798
type:         IssuesEvent
created_at:   2022-01-13 12:37:12
repo:         itggot-TE4/Yabs
repo_url:     https://api.github.com/repos/itggot-TE4/Yabs
action:       closed
title:        Write unit test for LoanFormComponent
labels:       comp: frontend type: test🚨 priority: low good-first-issue
## ✨ Feature request This is an existing test that requires updating. The bookConditionComponent requires working tests. ### 📇 User story As a user I want to know that everything will work, because it has been tested properly. ### 📜 Acceptance Criteria - [ ] There should be 50% code coverage on this component - [ ] - [ ] (Here there should be more precise part goals for the test, what smaller things the test needs to test) ### 💡 Additional context Handbook: https://lmiller1990.github.io/vue-testing-handbook/ See current test coverage: https://codeclimate.com/github/itggot-TE4/Yabs
index:        1.0
Write unit test for LoanFormComponent - ## ✨ Feature request This is an existing test that requires updating. The bookConditionComponent requires working tests. ### 📇 User story As a user I want to know that everything will work, because it has been tested properly. ### 📜 Acceptance Criteria - [ ] There should be 50% code coverage on this component - [ ] - [ ] (Here there should be more precise part goals for the test, what smaller things the test needs to test) ### 💡 Additional context Handbook: https://lmiller1990.github.io/vue-testing-handbook/ See current test coverage: https://codeclimate.com/github/itggot-TE4/Yabs
label:        non_process
write unit test for loanformcomponent ✨ feature request this is an existing test that requires updating the bookconditioncomponent requires working tests 📇 user story as a user i want to know that everything will work because it has been tested properly 📜 acceptance criteria there should be code coverage on this component here there should be more precise part goals for the test what smaller things the test needs to test 💡 additional context handbook see current test coverage
binary_label: 0

Unnamed: 0:   438,052
id:           30,621,841,238
type:         IssuesEvent
created_at:   2023-07-24 08:48:34
repo:         infinispan/infinispan-operator
repo_url:     https://api.github.com/repos/infinispan/infinispan-operator
action:       closed
title:        Document how to utilise Cryostat CR
labels:       enhancement documentation
We should document how users can leverage the JMX endpoint https://github.com/infinispan/infinispan-operator/issues/1835 in order to obtain JFR flight recordings via [Cryostat](https://cryostat.io/).
index:        1.0
Document how to utilise Cryostat CR - We should document how users can leverage the JMX endpoint https://github.com/infinispan/infinispan-operator/issues/1835 in order to obtain JFR flight recordings via [Cryostat](https://cryostat.io/).
label:        non_process
document how to utilise cryostat cr we should document how users can leverage the jmx endpoint in order to obtain jfr flight recordings via
binary_label: 0

Unnamed: 0:   16,417
id:           21,195,097,133
type:         IssuesEvent
created_at:   2022-04-08 22:55:56
repo:         googleapis/gapic-generator-python
repo_url:     https://api.github.com/repos/googleapis/gapic-generator-python
action:       closed
title:        Turn on snippetgen by default
labels:       type: process
CC @dizcology There's a small chance snippetgen falls apart on a real API, so I plan to take the following steps to avoid blocking other updates: - Merge the open release PR: #1037 - Turn on snippetgen - Release the new release PR
index:        1.0
Turn on snippetgen by default - CC @dizcology There's a small chance snippetgen falls apart on a real API, so I plan to take the following steps to avoid blocking other updates: - Merge the open release PR: #1037 - Turn on snippetgen - Release the new release PR
label:        process
turn on snippetgen by default cc dizcology there s a small chance snippetgen falls apart on a real api so i plan to take the following steps to avoid blocking other updates merge the open release pr turn on snippetgen release the new release pr
binary_label: 1

Unnamed: 0:   15,546
id:           19,703,501,885
type:         IssuesEvent
created_at:   2022-01-12 19:07:54
repo:         googleapis/java-filestore
repo_url:     https://api.github.com/repos/googleapis/java-filestore
action:       opened
title:        Your .repo-metadata.json file has a problem 🤒
labels:       type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'filestore' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
index:        1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'filestore' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
label:        process
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname filestore invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
binary_label: 1

Unnamed: 0:   1,043
id:           3,511,837,515
type:         IssuesEvent
created_at:   2016-01-10 16:05:53
repo:         pwittchen/NetworkEvents
repo_url:     https://api.github.com/repos/pwittchen/NetworkEvents
action:       opened
title:        Release 2.1.3
labels:       release process
**Initial release notes**: TBD. **Things to do**: - [ ] bump library version to 2.1.3 - [ ] upload Archives to Maven Central Repository - [ ] bump library version in `README.md` after Maven Sync - [ ] update `CHANGELOG.md` after Maven Sync - [ ] create new GitHub release
index:        1.0
Release 2.1.3 - **Initial release notes**: TBD. **Things to do**: - [ ] bump library version to 2.1.3 - [ ] upload Archives to Maven Central Repository - [ ] bump library version in `README.md` after Maven Sync - [ ] update `CHANGELOG.md` after Maven Sync - [ ] create new GitHub release
label:        process
release initial release notes tbd things to do bump library version to upload archives to maven central repository bump library version in readme md after maven sync update changelog md after maven sync create new github release
binary_label: 1

Unnamed: 0:   102,960
id:           16,594,784,394
type:         IssuesEvent
created_at:   2021-06-01 12:14:40
repo:         OTTIN-T/Planning
repo_url:     https://api.github.com/repos/OTTIN-T/Planning
action:       opened
title:        CVE-2015-9251 (Medium) detected in jquery-1.7.2.min.js, jquery-1.8.1.min.js
labels:       security vulnerability
## CVE-2015-9251 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.7.2.min.js</b>, <b>jquery-1.8.1.min.js</b></p></summary> <p> <details><summary><b>jquery-1.7.2.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js</a></p> <p>Path to dependency file: Planning/backend/sandbox/node_modules/jmespath/index.html</p> <p>Path to vulnerable library: Planning/backend/sandbox/node_modules/jmespath/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.7.2.min.js** (Vulnerable Library) </details> <details><summary><b>jquery-1.8.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p> <p>Path to dependency file: Planning/backend/sandbox/node_modules/redeyed/examples/browser/index.html</p> <p>Path to vulnerable library: Planning/backend/sandbox/node_modules/redeyed/examples/browser/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.8.1.min.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/OTTIN-T/Planning/commit/3f417d0fcf6f936676371f185d9f23f81b686848">3f417d0fcf6f936676371f185d9f23f81b686848</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed. 
<p>Publish Date: 2018-01-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p> <p>Release Date: 2018-01-18</p> <p>Fix Resolution: jQuery - v3.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2015-9251 (Medium) detected in jquery-1.7.2.min.js, jquery-1.8.1.min.js - ## CVE-2015-9251 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.7.2.min.js</b>, <b>jquery-1.8.1.min.js</b></p></summary> <p> <details><summary><b>jquery-1.7.2.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js</a></p> <p>Path to dependency file: Planning/backend/sandbox/node_modules/jmespath/index.html</p> <p>Path to vulnerable library: Planning/backend/sandbox/node_modules/jmespath/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.7.2.min.js** (Vulnerable Library) </details> <details><summary><b>jquery-1.8.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p> <p>Path to dependency file: Planning/backend/sandbox/node_modules/redeyed/examples/browser/index.html</p> <p>Path to vulnerable library: Planning/backend/sandbox/node_modules/redeyed/examples/browser/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.8.1.min.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/OTTIN-T/Planning/commit/3f417d0fcf6f936676371f185d9f23f81b686848">3f417d0fcf6f936676371f185d9f23f81b686848</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the 
dataType option, causing text/javascript responses to be executed. <p>Publish Date: 2018-01-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p> <p>Release Date: 2018-01-18</p> <p>Fix Resolution: jQuery - v3.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in jquery min js jquery min js cve medium severity vulnerability vulnerable libraries jquery min js jquery min js jquery min js javascript library for dom operations library home page a href path to dependency file planning backend sandbox node modules jmespath index html path to vulnerable library planning backend sandbox node modules jmespath index html dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file planning backend sandbox node modules redeyed examples browser index html path to vulnerable library planning backend sandbox node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch main vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
0
9,598
12,543,480,980
IssuesEvent
2020-06-05 15:37:16
googleapis/google-cloud-cpp
https://api.github.com/repos/googleapis/google-cloud-cpp
closed
Ping TW to update cloudsite docs
api: spanner type: process
Ping Tech Writer to update links on cloudsite to point to the new location for samples. (for non-GA products, this step can be skipped).
1.0
Ping TW to update cloudsite docs - Ping Tech Writer to update links on cloudsite to point to the new location for samples. (for non-GA products, this step can be skipped).
process
ping tw to update cloudsite docs ping tech writer to update links on cloudsite to point to the new location for samples for non ga products this step can be skipped
1
21,285
10,594,490,577
IssuesEvent
2019-10-09 16:53:04
BCDevOps/platform-services
https://api.github.com/repos/BCDevOps/platform-services
opened
remove cluster quota limits from aporeto projects
bug security security/aporeto
during deployment we hit an issue where quota limits were hit as a result of automatic quota application during project creation. We need a task to validate that quota limits are not in place
True
remove cluster quota limits from aporeto projects - during deployment we hit an issue where quota limits were hit as a result of automatic quota application during project creation. We need a task to validate that quota limits are not in place
non_process
remove cluster quota limits from aporeto projects during deployment we hit an issue where quota limits were hit as a result of automatic quota application during project creation we need a task to validate that quota limits are not in place
0
329,614
10,022,177,747
IssuesEvent
2019-07-16 16:05:03
python/mypy
https://api.github.com/repos/python/mypy
opened
Unclear message if named tuple is not compatible with protocol
priority-1-normal topic-named-tuple topic-protocols topic-usability
Mypy just says that class is not compatible with the bound when type checking this example: ```py from typing import Protocol, Generic, TypeVar, NamedTuple, List, Any class Example(Protocol): sentence: List[str] T = TypeVar('T', bound=Example) class Dataset(Generic[T]): ... class ConcreteExample(NamedTuple): sentence: List[str] others: Any class ConcreteDataset(Dataset[ConcreteExample]): ... ``` Here's the error message: ``` t.py:15: error: Type argument "Tuple[builtins.list[builtins.str], Any, fallback=t.ConcreteExample]" of "Dataset" must be a subtype of "t.Example" ``` It would be better to also say that `sentence` in `ConcreteExample` is read-only, and maybe suggest making it a read-only property in the protocol. Originally reported in #7212.
1.0
Unclear message if named tuple is not compatible with protocol - Mypy just says that class is not compatible with the bound when type checking this example: ```py from typing import Protocol, Generic, TypeVar, NamedTuple, List, Any class Example(Protocol): sentence: List[str] T = TypeVar('T', bound=Example) class Dataset(Generic[T]): ... class ConcreteExample(NamedTuple): sentence: List[str] others: Any class ConcreteDataset(Dataset[ConcreteExample]): ... ``` Here's the error message: ``` t.py:15: error: Type argument "Tuple[builtins.list[builtins.str], Any, fallback=t.ConcreteExample]" of "Dataset" must be a subtype of "t.Example" ``` It would be better to also say that `sentence` in `ConcreteExample` is read-only, and maybe suggest making it a read-only property in the protocol. Originally reported in #7212.
non_process
unclear message if named tuple is not compatible with protocol mypy just says that class is not compatible with the bound when type checking this example py from typing import protocol generic typevar namedtuple list any class example protocol sentence list t typevar t bound example class dataset generic class concreteexample namedtuple sentence list others any class concretedataset dataset here s the error message t py error type argument tuple any fallback t concreteexample of dataset must be a subtype of t example it would be better to also say that sentence in concreteexample is read only and maybe suggest making it a read only property in the protocol originally reported in
0
5,360
8,188,414,951
IssuesEvent
2018-08-30 01:46:27
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
make `%h` match anonymized ip
log-processing question
The access log file I would like to parse has anonymized ip adresses: `123.213.122.x`. Goaccess fails to parse those. Is there anything that can be done for such a case?
1.0
make `%h` match anonymized ip - The access log file I would like to parse has anonymized ip adresses: `123.213.122.x`. Goaccess fails to parse those. Is there anything that can be done for such a case?
process
make h match anonymized ip the access log file i would like to parse has anonymized ip adresses x goaccess fails to parse those is there anything that can be done for such a case
1
199,995
6,996,989,389
IssuesEvent
2017-12-16 08:42:01
angelleye/paypal-woocommerce
https://api.github.com/repos/angelleye/paypal-woocommerce
closed
Refunds are not working properly..??
high priority
it's apparently not error handling properly, and it's marking it successfully refunded even though it wasn't.
1.0
Refunds are not working properly..?? - it's apparently not error handling properly, and it's marking it successfully refunded even though it wasn't.
non_process
refunds are not working properly it s apparently not error handling properly and it s marking it successfully refunded even though it wasn t
0
35,279
7,675,450,190
IssuesEvent
2018-05-15 08:43:57
bridgedotnet/Bridge.Newtonsoft.Json
https://api.github.com/repos/bridgedotnet/Bridge.Newtonsoft.Json
closed
SerializeObject ignores TypeNameHandling.Objects in Dictionaries
defect
The output of `SerializeObject()` is the same regardless of the `TypeNameHandling.Objects` setting passed. ### Steps To Reproduce https://deck.net/0b4265a54d07fce5203c36ca08d909e3 ### Expected Result Same as C# ([fiddle](https://dotnetfiddle.net/JjN180)) ``` Simple: {"5":"five"} Detailed: {"$type":"System.Collections.Generic.Dictionary`2[[System.Int32, mscorlib],[System.String, mscorlib]], mscorlib","5":"five"} ``` ### Actual Result ``` Simple: {"5":"five"} Detailed: {"5":"five"} ```
1.0
SerializeObject ignores TypeNameHandling.Objects in Dictionaries - The output of `SerializeObject()` is the same regardless of the `TypeNameHandling.Objects` setting passed. ### Steps To Reproduce https://deck.net/0b4265a54d07fce5203c36ca08d909e3 ### Expected Result Same as C# ([fiddle](https://dotnetfiddle.net/JjN180)) ``` Simple: {"5":"five"} Detailed: {"$type":"System.Collections.Generic.Dictionary`2[[System.Int32, mscorlib],[System.String, mscorlib]], mscorlib","5":"five"} ``` ### Actual Result ``` Simple: {"5":"five"} Detailed: {"5":"five"} ```
non_process
serializeobject ignores typenamehandling objects in dictionaries the output of serializeobject is the same regardless of the typenamehandling objects setting passed steps to reproduce expected result same as c simple five detailed type system collections generic dictionary mscorlib five actual result simple five detailed five
0
25,795
5,199,816,218
IssuesEvent
2017-01-23 21:55:40
LLNL/spack
https://api.github.com/repos/LLNL/spack
closed
[Packaging guide] document recommendations for running tests
documentation
Document that contributors should NOT add a variant `test` but instead use `self.run_tests`. Explain that the reason is that running installation tests does not change the installed package per-se, it's not really a variant in this sense. It's just an optional step during `spack install`.
1.0
[Packaging guide] document recommendations for running tests - Document that contributors should NOT add a variant `test` but instead use `self.run_tests`. Explain that the reason is that running installation tests does not change the installed package per-se, it's not really a variant in this sense. It's just an optional step during `spack install`.
non_process
document recommendations for running tests document that contributors should not add a variant test but instead use self run tests explain that the reason is that running installation tests does not change the installed package per se it s not really a variant in this sense it s just an optional step during spack install
0
22,193
30,749,811,870
IssuesEvent
2023-07-28 18:08:10
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
[MLv2] Enable access to field IDs from column metadata
.Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
The FE needs a field ID to fetch its values (`/api/field/:fieldId/values`) and present them in filtering UI [Related Slack discussion](https://metaboat.slack.com/archives/C04CYTEL9N2/p1688402562625939)
1.0
[MLv2] Enable access to field IDs from column metadata - The FE needs a field ID to fetch its values (`/api/field/:fieldId/values`) and present them in filtering UI [Related Slack discussion](https://metaboat.slack.com/archives/C04CYTEL9N2/p1688402562625939)
process
enable access to field ids from column metadata the fe needs a field id to fetch its values api field fieldid values and present them in filtering ui
1
50,904
13,610,928,827
IssuesEvent
2020-09-23 08:08:14
sourcegraph/sourcegraph
https://api.github.com/repos/sourcegraph/sourcegraph
opened
Analyze and fix a container vulnerability
planned/3.21 team/security
Once #13551 has been completed we need to analyze the list of vulnerabilities, determine their relative priority and fix one vulnerability. As part of this the Security team will document the vulnerability life cycle and management process, producing an artifact in line with asks for ISO documents.
True
Analyze and fix a container vulnerability - Once #13551 has been completed we need to analyze the list of vulnerabilities, determine their relative priority and fix one vulnerability. As part of this the Security team will document the vulnerability life cycle and management process, producing an artifact in line with asks for ISO documents.
non_process
analyze and fix a container vulnerability once has been completed we need to analyze the list of vulnerabilities determine their relative priority and fix one vulnerability as part of this the security team will document the vulnerability life cycle and management process producing an artifact in line with asks for iso documents
0
17,845
23,784,243,777
IssuesEvent
2022-09-02 08:36:02
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Reintrospection error: Could not find relation field exchange_metadata on model Tags.
kind/bug process/candidate topic: re-introspection tech/engines topic: error reporting team/schema
<!-- If required, please update the title to be clear and descriptive --> Command: `prisma db pull` Version: `4.2.1` Binary Version: `2920a97877e12e055c1333079b8d19cee7f33826` Report: https://prisma-errors.netlify.app/report/14269 OS: `x64 win32 10.0.22000` JS Stacktrace: ``` Error: [introspection-engine\connectors\sql-introspection-connector\src\re_introspection.rs:489:14] Could not find relation field exchange_metadata on model Tags. at ChildProcess.<anonymous> (node_modules\prisma\build\index.js:90713:26) at ChildProcess.emit (node:events:527:28) at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12) ```
1.0
Reintrospection error: Could not find relation field exchange_metadata on model Tags. - <!-- If required, please update the title to be clear and descriptive --> Command: `prisma db pull` Version: `4.2.1` Binary Version: `2920a97877e12e055c1333079b8d19cee7f33826` Report: https://prisma-errors.netlify.app/report/14269 OS: `x64 win32 10.0.22000` JS Stacktrace: ``` Error: [introspection-engine\connectors\sql-introspection-connector\src\re_introspection.rs:489:14] Could not find relation field exchange_metadata on model Tags. at ChildProcess.<anonymous> (node_modules\prisma\build\index.js:90713:26) at ChildProcess.emit (node:events:527:28) at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12) ```
process
reintrospection error could not find relation field exchange metadata on model tags command prisma db pull version binary version report os js stacktrace error could not find relation field exchange metadata on model tags at childprocess node modules prisma build index js at childprocess emit node events at process childprocess handle onexit node internal child process
1
47,303
2,974,714,078
IssuesEvent
2015-07-15 03:35:02
cjfields/redmine-test
https://api.github.com/repos/cjfields/redmine-test
opened
Bio::Tools:pSW stop codon bug
Category: bioperl-ext Priority: Normal Status: New Tracker: Migrate
--- Author Name: **Prachi Shah** (Prachi Shah) Original Redmine Issue: 2069, https://redmine.open-bio.org/issues/2069 Original Date: 2006-08-10 Original Assignee: Bioperl Guts --- I am trying to align very similar protein sequences with the Bio::Tools::pSW modules but running into a bug. One of the two sequences is extended with gaps so that an Amino acid residue matches the stop codon (*). The alignment should match the two sequences (because they are the same) up until the stop codon is encountered in the new sequence. Instead, it artificially extends the old sequence and matches the Alanine with the stop codon. Here is an example set of two sequences I am trying to align: >orf19.6264.3 MSNYLNLAQFSGVTDRFNLERIKSDFSSVQSTISKLRPPQEFFDFRRLSKPANFGEIQQRVGYNLGYFSANYITIVLGLSIYALITNFLLLFVTIFVLGGIYGINKLNGEDLVLPVGRFNTSQLYTGLLIVAVPLGFLASPISTMMWLIGSSGVTVGAHAALMEKPIETVFEEEV*V >orf19.6264.3_old MSNYLNLAQFSGVTDRFNLERIKSDFSSVQSTISKLRPPQEFFDFRRLSKPANFGEIQQRVGYNLGYFSANYITIVLGLSIYALITNFLLLFVTIFVLGGIYGINKLNGEDLVLPVGRFNTSQLYTGLLIVAVPLGFLASPISTMMWLIGSSGVTVGAHAALMEKPIETVFEEEV and below is the part of code that generates the alignments -- ################ my $new_translatedSeqObj = Bio::Seq->new(-display_id => $gene, -seq => $new_translatedSeq); my $old_translatedSeqObj = Bio::Seq->new(-display_id => $gene. "_old", -seq => $old_translatedSeq); 1. 
do alignments my $align_factory = new Bio::Tools::pSW( '-matrix' =>'blosum62.bla', '-gap' => 12, '-ext' => 2 ); my $aln = $align_factory->pairwise_alignment( $old_translatedSeqObj, $new_translatedSeqObj ); my $alnout = new Bio::AlignIO(-format => 'clustalw', -fh => \*STDOUT); ################## The alignment -- CLUSTAL W(1.81) multiple sequence alignment orf19.6264.3_old/1-162 MSNYLNLAQFSGVTDRFNLERIKSDFSSVQSTISKLRPPQEFFDFRRLSKPANFGEIQQR orf19.6264.3/1-177 MSNYLNLAQFSGVTDRFNLERIKSDFSSVQSTISKLRPPQEFFDFRRLSKPANFGEIQQR ************************************************************ orf19.6264.3_old/1-162 VGYNLGYFSANYITIVLGLSIYALITNFLLLFVTIFVLGGIYGINKLNGEDLVLPVGRFN orf19.6264.3/1-177 VGYNLGYFSANYITIVLGLSIYALITNFLLLFVTIFVLGGIYGINKLNGEDLVLPVGRFN ************************************************************ orf19.6264.3_old/1-162 TSQLYTGLLIVAVPLGFLASPISTMMWLIGSSGVTVGAHA---------------AL orf19.6264.3/1-177 TSQLYTGLLIVAVPLGFLASPISTMMWLIGSSGVTVGAHAALMEKPIETVFEEEV*V **************************************** :
1.0
Bio::Tools:pSW stop codon bug - --- Author Name: **Prachi Shah** (Prachi Shah) Original Redmine Issue: 2069, https://redmine.open-bio.org/issues/2069 Original Date: 2006-08-10 Original Assignee: Bioperl Guts --- I am trying to align very similar protein sequences with the Bio::Tools::pSW modules but running into a bug. One of the two sequences is extended with gaps so that an Amino acid residue matches the stop codon (*). The alignment should match the two sequences (because they are the same) up until the stop codon is encountered in the new sequence. Instead, it artificially extends the old sequence and matches the Alanine with the stop codon. Here is an example set of two sequences I am trying to align: >orf19.6264.3 MSNYLNLAQFSGVTDRFNLERIKSDFSSVQSTISKLRPPQEFFDFRRLSKPANFGEIQQRVGYNLGYFSANYITIVLGLSIYALITNFLLLFVTIFVLGGIYGINKLNGEDLVLPVGRFNTSQLYTGLLIVAVPLGFLASPISTMMWLIGSSGVTVGAHAALMEKPIETVFEEEV*V >orf19.6264.3_old MSNYLNLAQFSGVTDRFNLERIKSDFSSVQSTISKLRPPQEFFDFRRLSKPANFGEIQQRVGYNLGYFSANYITIVLGLSIYALITNFLLLFVTIFVLGGIYGINKLNGEDLVLPVGRFNTSQLYTGLLIVAVPLGFLASPISTMMWLIGSSGVTVGAHAALMEKPIETVFEEEV and below is the part of code that generates the alignments -- ################ my $new_translatedSeqObj = Bio::Seq->new(-display_id => $gene, -seq => $new_translatedSeq); my $old_translatedSeqObj = Bio::Seq->new(-display_id => $gene. "_old", -seq => $old_translatedSeq); 1. 
do alignments my $align_factory = new Bio::Tools::pSW( '-matrix' =>'blosum62.bla', '-gap' => 12, '-ext' => 2 ); my $aln = $align_factory->pairwise_alignment( $old_translatedSeqObj, $new_translatedSeqObj ); my $alnout = new Bio::AlignIO(-format => 'clustalw', -fh => \*STDOUT); ################## The alignment -- CLUSTAL W(1.81) multiple sequence alignment orf19.6264.3_old/1-162 MSNYLNLAQFSGVTDRFNLERIKSDFSSVQSTISKLRPPQEFFDFRRLSKPANFGEIQQR orf19.6264.3/1-177 MSNYLNLAQFSGVTDRFNLERIKSDFSSVQSTISKLRPPQEFFDFRRLSKPANFGEIQQR ************************************************************ orf19.6264.3_old/1-162 VGYNLGYFSANYITIVLGLSIYALITNFLLLFVTIFVLGGIYGINKLNGEDLVLPVGRFN orf19.6264.3/1-177 VGYNLGYFSANYITIVLGLSIYALITNFLLLFVTIFVLGGIYGINKLNGEDLVLPVGRFN ************************************************************ orf19.6264.3_old/1-162 TSQLYTGLLIVAVPLGFLASPISTMMWLIGSSGVTVGAHA---------------AL orf19.6264.3/1-177 TSQLYTGLLIVAVPLGFLASPISTMMWLIGSSGVTVGAHAALMEKPIETVFEEEV*V **************************************** :
non_process
bio tools psw stop codon bug author name prachi shah prachi shah original redmine issue original date original assignee bioperl guts i am trying to align very similar protein sequences with the bio tools psw modules but running into a bug one of the two sequences is extended with gaps so that an amino acid residue matches the stop codon the alignment should match the two sequences because they are the same up until the stop codon is encountered in the new sequence instead it artificially extends the old sequence and matches the alanine with the stop codon here is an example set of two sequences i am trying to align msnylnlaqfsgvtdrfnleriksdfssvqstisklrppqeffdfrrlskpanfgeiqqrvgynlgyfsanyitivlglsiyalitnflllfvtifvlggiyginklngedlvlpvgrfntsqlytgllivavplgflaspistmmwligssgvtvgahaalmekpietvfeeev v old msnylnlaqfsgvtdrfnleriksdfssvqstisklrppqeffdfrrlskpanfgeiqqrvgynlgyfsanyitivlglsiyalitnflllfvtifvlggiyginklngedlvlpvgrfntsqlytgllivavplgflaspistmmwligssgvtvgahaalmekpietvfeeev and below is the part of code that generates the alignments my new translatedseqobj bio seq new display id gene seq new translatedseq my old translatedseqobj bio seq new display id gene old seq old translatedseq do alignments my align factory new bio tools psw matrix bla gap ext my aln align factory pairwise alignment old translatedseqobj new translatedseqobj my alnout new bio alignio format clustalw fh stdout the alignment clustal w multiple sequence alignment old msnylnlaqfsgvtdrfnleriksdfssvqstisklrppqeffdfrrlskpanfgeiqqr msnylnlaqfsgvtdrfnleriksdfssvqstisklrppqeffdfrrlskpanfgeiqqr old vgynlgyfsanyitivlglsiyalitnflllfvtifvlggiyginklngedlvlpvgrfn vgynlgyfsanyitivlglsiyalitnflllfvtifvlggiyginklngedlvlpvgrfn old tsqlytgllivavplgflaspistmmwligssgvtvgaha al tsqlytgllivavplgflaspistmmwligssgvtvgahaalmekpietvfeeev v
0
11,238
14,014,799,079
IssuesEvent
2020-10-29 12:27:45
pystatgen/sgkit
https://api.github.com/repos/pystatgen/sgkit
closed
Make sgkit pep-561 compliant
enhancement process + tools
Signal to mypy that sgkit is typed (essentially making it PEP-0561 compliant) by including `py.typed` in the top level package. Without this sgkit users who use mypy will get this error: ``` test.py:1: error: Skipping analyzing 'sgkit': found module but no type hints or library stubs [import] test.py:1: note: See https://mypy.readthedocs.io/en/latest/running_mypy.html#missing-imports ```
1.0
Make sgkit pep-561 compliant - Signal to mypy that sgkit is typed (essentially making it PEP-0561 compliant) by including `py.typed` in the top level package. Without this sgkit users who use mypy will get this error: ``` test.py:1: error: Skipping analyzing 'sgkit': found module but no type hints or library stubs [import] test.py:1: note: See https://mypy.readthedocs.io/en/latest/running_mypy.html#missing-imports ```
process
make sgkit pep compliant signal to mypy that sgkit is typed essentially making it pep compliant by including py typed in the top level package without this sgkit users who use mypy will get this error test py error skipping analyzing sgkit found module but no type hints or library stubs test py note see
1
240,069
18,292,061,907
IssuesEvent
2021-10-05 16:14:39
Nautilus-Cyberneering/chinese-ideographs
https://api.github.com/repos/Nautilus-Cyberneering/chinese-ideographs
closed
Define Image Folder Structure
documentation enhancement
Need to define the folder structures for the different image types. - [x] Gold - [x] Base
1.0
Define Image Folder Structure - Need to define the folder structures for the different image types. - [x] Gold - [x] Base
non_process
define image folder structure need to define the folder structures for the different image types gold base
0
394,214
27,024,056,003
IssuesEvent
2023-02-11 11:13:46
Arsenic-ATG/Arsenic-ATG
https://api.github.com/repos/Arsenic-ATG/Arsenic-ATG
closed
use references instead of hyperlinks
documentation good first issue
it's much better to use reference instead of hard-coding the hyperlinks in the profile readme, though it would not make any difference in the actual looks of the readme, but would make the readme more readable and manageable for someone who is willing to read/contribute to the source code
1.0
use references instead of hyperlinks - it's much better to use reference instead of hard-coding the hyperlinks in the profile readme, though it would not make any difference in the actual looks of the readme, but would make the readme more readable and manageable for someone who is willing to read/contribute to the source code
non_process
use references instead of hyperlinks it s much better to use reference instead of hard coding the hyperlinks in the profile readme though it would not make any difference in the actual looks of the readme but would make the readme more readable and manageable for someone who is willing to read contribute to the source code
0
5,218
8,017,039,624
IssuesEvent
2018-07-25 14:55:11
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
closed
Reports warnings if mysql-query_digests is false
ADMIN QUERY PROCESSOR
From [wiki](https://github.com/sysown/proxysql/wiki/Global-variables#mysql-query_digests): > It is also very important to note that query digest is required to determine when multiplexing needs to be disabled, for example in case of TEMPORARY tables, SQL_CALC_FOUND_ROWS , GET_LOCK, etc. > Do not disabled mysql-query_digests unless you are really sure it won't break your application. It is often not trivial to understand that ProxySQL is misbehaving because `mysql-query_digests` was disabled. ProxySQL should report on error log a warning when `mysql-query_digests` is set to `false`, to better troubleshoot when ProxySQL is misbehaving. The warning should highlight that disabling `mysql-query_digests` is not recommended, and risky.
1.0
Reports warnings if mysql-query_digests is false - From [wiki](https://github.com/sysown/proxysql/wiki/Global-variables#mysql-query_digests): > It is also very important to note that query digest is required to determine when multiplexing needs to be disabled, for example in case of TEMPORARY tables, SQL_CALC_FOUND_ROWS , GET_LOCK, etc. > Do not disabled mysql-query_digests unless you are really sure it won't break your application. It is often not trivial to understand that ProxySQL is misbehaving because `mysql-query_digests` was disabled. ProxySQL should report on error log a warning when `mysql-query_digests` is set to `false`, to better troubleshoot when ProxySQL is misbehaving. The warning should highlight that disabling `mysql-query_digests` is not recommended, and risky.
process
reports warnings if mysql query digests is false from it is also very important to note that query digest is required to determine when multiplexing needs to be disabled for example in case of temporary tables sql calc found rows get lock etc do not disabled mysql query digests unless you are really sure it won t break your application it is often not trivial to understand that proxysql is misbehaving because mysql query digests was disabled proxysql should report on error log a warning when mysql query digests is set to false to better troubleshoot when proxysql is misbehaving the warning should highlight that disabling mysql query digests is not recommended and risky
1
133,386
18,297,382,177
IssuesEvent
2021-10-05 21:54:45
vipinsun/blockchain-carbon-accounting
https://api.github.com/repos/vipinsun/blockchain-carbon-accounting
closed
CVE-2021-33194 (High) detected in github.com/hyperledger/fabric-v1.4.1, github.com/golang/net-16171245cfb220d5317888b716d69c1fb4e7992b - autoclosed
security vulnerability
## CVE-2021-33194 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>github.com/hyperledger/fabric-v1.4.1</b>, <b>github.com/golang/net-16171245cfb220d5317888b716d69c1fb4e7992b</b></p></summary> <p> <details><summary><b>github.com/hyperledger/fabric-v1.4.1</b></p></summary> <p>Read-only mirror of https://gerrit.hyperledger.org/r/#/admin/projects/fabric</p> <p> Dependency Hierarchy: - :x: **github.com/hyperledger/fabric-v1.4.1** (Vulnerable Library) </details> <details><summary><b>github.com/golang/net-16171245cfb220d5317888b716d69c1fb4e7992b</b></p></summary> <p>[mirror] Go supplementary network libraries</p> <p> Dependency Hierarchy: - github.com/hyperledger/fabric-protos-go-d7d9b8e1fcde4eb6a4b44ec9003bfb90eee3301c (Root Library) - github.com/grpc/grpc-go-v1.27.0 - :x: **github.com/golang/net-16171245cfb220d5317888b716d69c1fb4e7992b** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/vipinsun/blockchain-carbon-accounting/commit/d388e16464e00b9ce84df0d247029f534a429b90">d388e16464e00b9ce84df0d247029f534a429b90</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> golang.org/x/net before v0.0.0-20210520170846-37e1c6afe023 allows attackers to cause a denial of service (infinite loop) via crafted ParseFragment input. 
<p>Publish Date: 2021-05-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33194>CVE-2021-33194</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33194">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33194</a></p> <p>Release Date: 2021-05-26</p> <p>Fix Resolution: golang.org/x/net - v0.0.0-20210520170846-37e1c6afe023</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-33194 (High) detected in github.com/hyperledger/fabric-v1.4.1, github.com/golang/net-16171245cfb220d5317888b716d69c1fb4e7992b - autoclosed - ## CVE-2021-33194 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>github.com/hyperledger/fabric-v1.4.1</b>, <b>github.com/golang/net-16171245cfb220d5317888b716d69c1fb4e7992b</b></p></summary> <p> <details><summary><b>github.com/hyperledger/fabric-v1.4.1</b></p></summary> <p>Read-only mirror of https://gerrit.hyperledger.org/r/#/admin/projects/fabric</p> <p> Dependency Hierarchy: - :x: **github.com/hyperledger/fabric-v1.4.1** (Vulnerable Library) </details> <details><summary><b>github.com/golang/net-16171245cfb220d5317888b716d69c1fb4e7992b</b></p></summary> <p>[mirror] Go supplementary network libraries</p> <p> Dependency Hierarchy: - github.com/hyperledger/fabric-protos-go-d7d9b8e1fcde4eb6a4b44ec9003bfb90eee3301c (Root Library) - github.com/grpc/grpc-go-v1.27.0 - :x: **github.com/golang/net-16171245cfb220d5317888b716d69c1fb4e7992b** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/vipinsun/blockchain-carbon-accounting/commit/d388e16464e00b9ce84df0d247029f534a429b90">d388e16464e00b9ce84df0d247029f534a429b90</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> golang.org/x/net before v0.0.0-20210520170846-37e1c6afe023 allows attackers to cause a denial of service (infinite loop) via crafted ParseFragment input. 
<p>Publish Date: 2021-05-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33194>CVE-2021-33194</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33194">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33194</a></p> <p>Release Date: 2021-05-26</p> <p>Fix Resolution: golang.org/x/net - v0.0.0-20210520170846-37e1c6afe023</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in github com hyperledger fabric github com golang net autoclosed cve high severity vulnerability vulnerable libraries github com hyperledger fabric github com golang net github com hyperledger fabric read only mirror of dependency hierarchy x github com hyperledger fabric vulnerable library github com golang net go supplementary network libraries dependency hierarchy github com hyperledger fabric protos go root library github com grpc grpc go x github com golang net vulnerable library found in head commit a href found in base branch main vulnerability details golang org x net before allows attackers to cause a denial of service infinite loop via crafted parsefragment input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution golang org x net step up your open source security game with whitesource
0
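The CVE record above reports a CVSS 3 base score of 7.5 from the metric vector AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H. That number can be reproduced from the CVSS v3.0 specification's base-score formula; the sketch below plugs in the spec's published weights for those metric values:

```python
import math

# CVSS v3.0 base score for the vector in the record above:
# AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H (weights from the CVSS v3.0 spec).
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
c, i, a = 0.0, 0.0, 0.56                  # Conf None / Integ None / Avail High
scope_changed = False                     # S:U

iss = 1 - (1 - c) * (1 - i) * (1 - a)
if scope_changed:
    impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
else:
    impact = 6.42 * iss
exploitability = 8.22 * av * ac * pr * ui

def roundup(x: float) -> float:
    """CVSS 'round up to one decimal place'."""
    return math.ceil(x * 10) / 10

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
```

With these inputs the impact term is about 3.60 and the exploitability term about 3.89, so the base score rounds up to 7.5, matching the record.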
679,741
23,243,721,641
IssuesEvent
2022-08-03 17:57:52
lunatixxx/RCVMod
https://api.github.com/repos/lunatixxx/RCVMod
reopened
Witch percentage / boss vote
bug low priority
-Witch banned percentage is not respected. -Witch minimum distance with tank is not respected.
1.0
Witch percentage / boss vote - -Witch banned percentage is not respected. -Witch minimum distance with tank is not respected.
non_process
witch percentage boss vote witch banned percentage is not respected witch minimum distance with tank is not respected
0
123,250
10,258,300,223
IssuesEvent
2019-08-21 22:30:21
RPTools/maptool
https://api.github.com/repos/RPTools/maptool
closed
Hex map movement doesn't display right
bug low tested
**Describe the bug** For tokens with snap-to-grid on hex maps, (1) the distance numbers partially outside of cells, and (2) there is no blue line forming the path. **To Reproduce** Steps to reproduce the behavior: 1. Create hex map with a token with snap-to-grid. 2. Enable Show Movement Distance 3. Move the token around and notice the numbers show on the hex border and the absence of blue path (compare to square map) **Expected behavior** The numbers show up within the hexes, and there is a blue line forming the path. **Screenshots** ![img](https://i.imgur.com/IeKmSfy.png) ![img](https://i.imgur.com/P5O9OJh.png) **MapTool Info** - Version: 1.5.3 - Install: New **Desktop (please complete the following information):** - OS: Windows - Version 10
1.0
Hex map movement doesn't display right - **Describe the bug** For tokens with snap-to-grid on hex maps, (1) the distance numbers partially outside of cells, and (2) there is no blue line forming the path. **To Reproduce** Steps to reproduce the behavior: 1. Create hex map with a token with snap-to-grid. 2. Enable Show Movement Distance 3. Move the token around and notice the numbers show on the hex border and the absence of blue path (compare to square map) **Expected behavior** The numbers show up within the hexes, and there is a blue line forming the path. **Screenshots** ![img](https://i.imgur.com/IeKmSfy.png) ![img](https://i.imgur.com/P5O9OJh.png) **MapTool Info** - Version: 1.5.3 - Install: New **Desktop (please complete the following information):** - OS: Windows - Version 10
non_process
hex map movement doesn t display right describe the bug for tokens with snap to grid on hex maps the distance numbers partially outside of cells and there is no blue line forming the path to reproduce steps to reproduce the behavior create hex map with a token with snap to grid enable show movement distance move the token around and notice the numbers show on the hex border and the absence of blue path compare to square map expected behavior the numbers show up within the hexes and there is a blue line forming the path screenshots maptool info version install new desktop please complete the following information os windows version
0
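The MapTool record above is about movement-distance display on hex grids. For context, the standard way to compute the distance those numbers represent is via cube coordinates; the sketch below is the generic textbook formula, not MapTool's implementation:

```python
def hex_distance(a, b):
    """Distance in hexes between two cells given in axial (q, r) coordinates,
    using the standard cube-coordinate formula. Generic illustration only,
    not MapTool code."""
    aq, ar = a
    bq, br = b
    dq, dr = aq - bq, ar - br
    ds = -dq - dr  # third cube coordinate is implied: s = -q - r
    return (abs(dq) + abs(dr) + abs(ds)) // 2
```

Rendering that number inside the destination hex (rather than on its border) and drawing the path polyline are then purely presentation concerns, which is what the bug report says was broken.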
56,139
11,517,625,044
IssuesEvent
2020-02-14 08:49:32
stan-dev/math
https://api.github.com/repos/stan-dev/math
closed
double version of trace_gen_quad_form is never used
bug code cleanup
## Description There's an unresolved typename declared for `trace_gen_quad_form`, which therefore is never used even when all inputs are double. Here's the definition: ``` template <int RD, int CD, int RA, int CA, typename TB, int RB, int CB> inline double trace_gen_quad_form(const Eigen::Matrix<double, RD, CD> &D, const Eigen::Matrix<double, RA, CA> &A, const Eigen::Matrix<double, RB, CB> &B) ``` Removing `typename TB` allows that double specialization to be used. #### Current Version: v3.1.0
1.0
double version of trace_gen_quad_form is never used - ## Description There's an unresolved typename declared for `trace_gen_quad_form`, which therefore is never used even when all inputs are double. Here's the definition: ``` template <int RD, int CD, int RA, int CA, typename TB, int RB, int CB> inline double trace_gen_quad_form(const Eigen::Matrix<double, RD, CD> &D, const Eigen::Matrix<double, RA, CA> &A, const Eigen::Matrix<double, RB, CB> &B) ``` Removing `typename TB` allows that double specialization to be used. #### Current Version: v3.1.0
non_process
double version of trace gen quad form is never used description there s an unresolved typename declared for trace gen quad form which therefore is never used even when all inputs are double here s the definition template inline double trace gen quad form const eigen matrix d const eigen matrix a const eigen matrix b removing typename tb allows that double specialization to be used current version
0
8,826
11,939,264,919
IssuesEvent
2020-04-02 14:58:11
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Difficult (not possible?) to redirect binary output between Linux processes
api-needs-work area-System.Diagnostics.Process
A common pattern in Linux is to pipe output between commands. And such output is not always text (e.g., imagine a series of openssl commands). Unfortunately, the System.Diagnostic.Process class _assumes_ that output and input is textual. Such an assumption violates Linux conventions.
1.0
Difficult (not possible?) to redirect binary output between Linux processes - A common pattern in Linux is to pipe output between commands. And such output is not always text (e.g., imagine a series of openssl commands). Unfortunately, the System.Diagnostic.Process class _assumes_ that output and input is textual. Such an assumption violates Linux conventions.
process
difficult not possible to redirect binary output between linux processes a common pattern in linux is to pipe output between commands and such output is not always text e g imagine a series of openssl commands unfortunately the system diagnostic process class assumes that output and input is textual such an assumption violates linux conventions
1
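The .NET record above complains that `System.Diagnostics.Process` assumes textual streams, which breaks the Linux convention of piping raw bytes between commands. For contrast, a Python analogue (not .NET code) where subprocess pipes carry bytes by default and nothing is ever decoded:

```python
import subprocess
import sys

# Pipe raw bytes between two processes with no text decoding — the pattern
# the issue above says is difficult with System.Diagnostics.Process.
producer = subprocess.run(
    [sys.executable, "-c",
     "import sys; sys.stdout.buffer.write(bytes([0, 1, 2, 255]))"],
    stdout=subprocess.PIPE,   # captured as bytes, never decoded
    check=True,
)
consumer = subprocess.run(
    [sys.executable, "-c",
     "import sys; data = sys.stdin.buffer.read(); print(len(data))"],
    input=producer.stdout,    # feed the raw bytes straight through
    stdout=subprocess.PIPE,
    check=True,
)
byte_count = int(consumer.stdout.strip())
```

The payload here deliberately includes a NUL byte and 0xFF, values that would corrupt or fail under an encoding-based (text) pipe.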
11,096
13,938,566,911
IssuesEvent
2020-10-22 15:25:02
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Python 3 as a Runbook type?
Pri2 automation/svc awaiting-product-team-response cxp escalated-product-team process-automation/subsvc product-question triaged
Any plans to support Python 3 as a Runbook type? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 8081200f-2bf4-db58-c957-c8ab7af5f90b * Version Independent ID: b135cf1a-c391-03e5-41e7-e13571351e91 * Content: [Azure Automation Runbook Types](https://docs.microsoft.com/en-us/azure/automation/automation-runbook-types#feedback) * Content Source: [articles/automation/automation-runbook-types.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-runbook-types.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @bobbytreed * Microsoft Alias: **robreed**
1.0
Python 3 as a Runbook type? - Any plans to support Python 3 as a Runbook type? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 8081200f-2bf4-db58-c957-c8ab7af5f90b * Version Independent ID: b135cf1a-c391-03e5-41e7-e13571351e91 * Content: [Azure Automation Runbook Types](https://docs.microsoft.com/en-us/azure/automation/automation-runbook-types#feedback) * Content Source: [articles/automation/automation-runbook-types.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-runbook-types.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @bobbytreed * Microsoft Alias: **robreed**
process
python as a runbook type any plans to support python as a runbook type document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login bobbytreed microsoft alias robreed
1
811
3,286,795,473
IssuesEvent
2015-10-29 06:03:54
t3kt/vjzual2
https://api.github.com/repos/t3kt/vjzual2
closed
add multiple levels to the edge effect
enhancement video processing
either use different thresholds, or different color components, or something along those lines
1.0
add multiple levels to the edge effect - either use different thresholds, or different color components, or something along those lines
process
add multiple levels to the edge effect either use different thresholds or different color components or something along those lines
1
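The vjzual2 record above asks for multiple levels in an edge effect, "either use different thresholds, or different color components". One way to realize the different-thresholds idea is to bucket each gradient magnitude against a ladder of thresholds instead of a single binary cutoff. The sketch below shows this on a 1-D signal; the threshold values are illustrative, not from the project:

```python
# Multi-level edge detection sketch: each gradient magnitude is assigned a
# level equal to how many thresholds it clears (0 = no edge, higher = stronger).
# Thresholds and sample signal are made-up illustration values.
def edge_levels(signal, thresholds=(2, 5, 9)):
    grads = [abs(b - a) for a, b in zip(signal, signal[1:])]
    return [sum(g >= t for t in thresholds) for g in grads]
```

The same bucketing could be run per color component to cover the record's other suggestion, producing an independent level map per channel.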
409,159
11,957,860,513
IssuesEvent
2020-04-04 15:49:11
InfiniteFlightAirportEditing/Airports
https://api.github.com/repos/InfiniteFlightAirportEditing/Airports
opened
EDWB-Bremerhaven Airport-BREMEN-GERMANY
Being Redone Low Priority
# Airport Name Bremerhaven Airport # Country? Germany # Improvements that need to be made? rework from scratch # Are you working on this airport? yes # Airport Priority? (IF Event, 10000ft+ Runway, World/US Capital, Low) low
1.0
EDWB-Bremerhaven Airport-BREMEN-GERMANY - # Airport Name Bremerhaven Airport # Country? Germany # Improvements that need to be made? rework from scratch # Are you working on this airport? yes # Airport Priority? (IF Event, 10000ft+ Runway, World/US Capital, Low) low
non_process
edwb bremerhaven airport bremen germany airport name bremerhaven airport country germany improvements that need to be made rework from scratch are you working on this airport yes airport priority if event runway world us capital low low
0
199,207
6,986,843,361
IssuesEvent
2017-12-14 06:18:39
akarshsingh9/Datastructure-and-Algorithms
https://api.github.com/repos/akarshsingh9/Datastructure-and-Algorithms
opened
Work on README.md
Priority: Urgent README Status: In Progress
Give details on what Datastructures and Algorithms will be implemented in Python 3. Also provide details on how others can contribute.
1.0
Work on README.md - Give details on what Datastructures and Algorithms will be implemented in Python 3. Also provide details on how others can contribute.
non_process
work on readme md give details on what datastructures and algorithms will be implemented in python also provide details on how others can contribute
0
21,287
28,482,444,558
IssuesEvent
2023-04-18 04:39:32
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
[Mirror] Boost 1.82.0
P2 type: process team-OSS mirror request
### Please list the URLs of the archives you'd like to mirror: Please mirror `https://boostorg.jfrog.io/artifactory/main/release/1.82.0/source/boost_1_82_0.tar.gz` Expected mirrored URL: `https://mirror.bazel.build/boostorg.jfrog.io/artifactory/main/release/1.82.0/source/boost_1_82_0.tar.gz`
1.0
[Mirror] Boost 1.82.0 - ### Please list the URLs of the archives you'd like to mirror: Please mirror `https://boostorg.jfrog.io/artifactory/main/release/1.82.0/source/boost_1_82_0.tar.gz` Expected mirrored URL: `https://mirror.bazel.build/boostorg.jfrog.io/artifactory/main/release/1.82.0/source/boost_1_82_0.tar.gz`
process
boost please list the urls of the archives you d like to mirror please mirror expected mirrored url
1
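The Bazel mirror request above shows the mapping between an origin archive URL and its `mirror.bazel.build` counterpart: drop the scheme, prefix the mirror host. A small helper capturing that transformation (an illustrative sketch, not a Bazel tool):

```python
from urllib.parse import urlparse

def mirror_url(origin: str) -> str:
    """Build the mirror.bazel.build URL for an origin archive URL, following
    the pattern visible in the mirror request above."""
    p = urlparse(origin)
    return f"https://mirror.bazel.build/{p.netloc}{p.path}"
```

Applied to the Boost 1.82.0 source tarball URL from the record, this reproduces the expected mirrored URL exactly.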
21,478
29,511,736,362
IssuesEvent
2023-06-04 02:00:10
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Fri, 2 Jun 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events ### Revisit Weakly-Supervised Audio-Visual Video Parsing from the Language Perspective - **Authors:** Yingying Fan, Yu Wu, Yutian Lin, Bo Du - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.00595 - **Pdf link:** https://arxiv.org/pdf/2306.00595 - **Abstract** We focus on the weakly-supervised audio-visual video parsing task (AVVP), which aims to identify and locate all the events in audio/visual modalities. Previous works only concentrate on video-level overall label denoising across modalities, but overlook the segment-level label noise, where adjacent video segments (i.e., 1-second video clips) may contain different events. However, recognizing events in the segment is challenging because its label could be any combination of events that occur in the video. To address this issue, we consider tackling AVVP from the language perspective, since language could freely describe how various events appear in each segment beyond fixed labels. Specifically, we design language prompts to describe all cases of event appearance for each video. Then, the similarity between language prompts and segments is calculated, where the event of the most similar prompt is regarded as the segment-level label. In addition, to deal with the mislabeled segments, we propose to perform dynamic re-weighting on the unreliable segments to adjust their labels. Experiments show that our simple yet effective approach outperforms state-of-the-art methods by a large margin. 
### DAM-Net: Global Flood Detection from SAR Imagery Using Differential Attention Metric-Based Vision Transformers - **Authors:** Tamer Saleh, Xingxing Weng, Shimaa Holail, Chen Hao, Gui-Song Xia - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.00704 - **Pdf link:** https://arxiv.org/pdf/2306.00704 - **Abstract** The detection of flooded areas using high-resolution synthetic aperture radar (SAR) imagery is a critical task with applications in crisis and disaster management, as well as environmental resource planning. However, the complex nature of SAR images presents a challenge that often leads to an overestimation of the flood extent. To address this issue, we propose a novel differential attention metric-based network (DAM-Net) in this study. The DAM-Net comprises two key components: a weight-sharing Siamese backbone to obtain multi-scale change features of multi-temporal images and tokens containing high-level semantic information of water-body changes, and a temporal differential fusion (TDF) module that integrates semantic tokens and change features to generate flood maps with reduced speckle noise. Specifically, the backbone is split into multiple stages. In each stage, we design three modules, namely, temporal-wise feature extraction (TWFE), cross-temporal change attention (CTCA), and temporal-aware change enhancement (TACE), to effectively extract the change features. In TACE of the last stage, we introduce a class token to record high-level semantic information of water-body changes via the attention mechanism. Another challenge faced by data-driven deep learning algorithms is the limited availability of flood detection datasets. To overcome this, we have created the S1GFloods open-source dataset, a global-scale high-resolution Sentinel-1 SAR image pairs dataset covering 46 global flood events between 2015 and 2022. 
The experiments on the S1GFloods dataset using the proposed DAM-Net showed top results compared to state-of-the-art methods in terms of overall accuracy, F1-score, and IoU, which reached 97.8%, 96.5%, and 93.2%, respectively. Our dataset and code will be available online at https://github.com/Tamer-Saleh/S1GFlood-Detection. ## Keyword: event camera ### Deformable Convolutions and LSTM-based Flexible Event Frame Fusion Network for Motion Deblurring - **Authors:** Dan Yang, Mehmet Yamac - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2306.00834 - **Pdf link:** https://arxiv.org/pdf/2306.00834 - **Abstract** Event cameras differ from conventional RGB cameras in that they produce asynchronous data sequences. While RGB cameras capture every frame at a fixed rate, event cameras only capture changes in the scene, resulting in sparse and asynchronous data output. Despite the fact that event data carries useful information that can be utilized in motion deblurring of RGB cameras, integrating event and image information remains a challenge. Recent state-of-the-art CNN-based deblurring solutions produce multiple 2-D event frames based on the accumulation of event data over a time period. In most of these techniques, however, the number of event frames is fixed and predefined, which reduces temporal resolution drastically, particularly for scenarios when fast-moving objects are present or when longer exposure times are required. It is also important to note that recent modern cameras (e.g., cameras in mobile phones) dynamically set the exposure time of the image, which presents an additional problem for networks developed for a fixed number of event frames. A Long Short-Term Memory (LSTM)-based event feature extraction module has been developed for addressing these challenges, which enables us to use a dynamically varying number of event frames. 
Using these modules, we constructed a state-of-the-art deblurring network, Deformable Convolutions and LSTM-based Flexible Event Frame Fusion Network (DLEFNet). It is particularly useful for scenarios in which exposure times vary depending on factors such as lighting conditions or the presence of fast-moving objects in the scene. It has been demonstrated through evaluation results that the proposed method can outperform the existing state-of-the-art networks for deblurring task in synthetic and real-world data sets. ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Multi-Modal Deep Learning for Multi-Temporal Urban Mapping With a Partly Missing Optical Modality - **Authors:** Sebastian Hafner, Yifang Ban - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2306.00640 - **Pdf link:** https://arxiv.org/pdf/2306.00640 - **Abstract** This paper proposes a novel multi-temporal urban mapping approach using multi-modal satellite data from the Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 MultiSpectral Instrument (MSI) missions. In particular, it focuses on the problem of a partly missing optical modality due to clouds. The proposed model utilizes two networks to extract features from each modality separately. In addition, a reconstruction network is utilized to approximate the optical features based on the SAR data in case of a missing optical modality. Our experiments on a multi-temporal urban mapping dataset with Sentinel-1 SAR and Sentinel-2 MSI data demonstrate that the proposed method outperforms a multi-modal approach that uses zero values as a replacement for missing optical data, as well as a uni-modal SAR-based approach. 
Therefore, the proposed method is effective in exploiting multi-modal data, if available, but it also retains its effectiveness in case the optical modality is missing. ### Universal Test-time Adaptation through Weight Ensembling, Diversity Weighting, and Prior Correction - **Authors:** Robert A. Marsden, Mario Döbler, Bin Yang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2306.00650 - **Pdf link:** https://arxiv.org/pdf/2306.00650 - **Abstract** Since distribution shifts are likely to occur during test-time and can drastically decrease the model's performance, online test-time adaptation (TTA) continues to update the model after deployment, leveraging the current test data. Clearly, a method proposed for online TTA has to perform well for all kinds of environmental conditions. By introducing the variable factors 'domain non-stationarity' and 'temporal correlation', we first unfold all practically relevant settings and define the entity as universal TTA. To tackle the problem of universal TTA, we identify and highlight several challenges a self-training based method has to deal with, including: 1) model bias and the occurrence of trivial solutions when performing entropy minimization on varying sequence lengths with and without multiple domain shifts, 2) loss of generalization which exacerbates the adaptation to future domain shifts and the occurrence of catastrophic forgetting, and 3) performance degradation due to shifts in label prior. To prevent the model from becoming biased, we leverage a dataset and model-agnostic certainty and diversity weighting. In order to maintain generalization and prevent catastrophic forgetting, we propose to continually weight-average the source and adapted model. To compensate for disparities in the label prior during test-time, we propose an adaptive additive prior correction scheme. 
We evaluate our approach, named ROID, on a wide range of settings, datasets, and models, setting new standards in the field of universal TTA. ### DeepFake-Adapter: Dual-Level Adapter for DeepFake Detection - **Authors:** Rui Shao, Tianxing Wu, Liqiang Nie, Ziwei Liu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.00863 - **Pdf link:** https://arxiv.org/pdf/2306.00863 - **Abstract** Existing deepfake detection methods fail to generalize well to unseen or degraded samples, which can be attributed to the over-fitting of low-level forgery patterns. Here we argue that high-level semantics are also indispensable recipes for generalizable forgery detection. Recently, large pre-trained Vision Transformers (ViTs) have shown promising generalization capability. In this paper, we propose the first parameter-efficient tuning approach for deepfake detection, namely DeepFake-Adapter, to effectively and efficiently adapt the generalizable high-level semantics from large pre-trained ViTs to aid deepfake detection. Given large pre-trained models but limited deepfake data, DeepFake-Adapter introduces lightweight yet dedicated dual-level adapter modules to a ViT while keeping the model backbone frozen. Specifically, to guide the adaptation process to be aware of both global and local forgery cues of deepfake data, 1) we not only insert Globally-aware Bottleneck Adapters in parallel to MLP layers of ViT, 2) but also actively cross-attend Locally-aware Spatial Adapters with features from ViT. Unlike existing deepfake detection methods merely focusing on low-level forgery patterns, the forgery detection process of our model can be regularized by generalizable high-level semantics from a pre-trained ViT and adapted by global and local low-level forgeries of deepfake data. Extensive experiments on several standard deepfake detection benchmarks validate the effectiveness of our approach. 
Notably, DeepFake-Adapter demonstrates a convincing advantage under cross-dataset and cross-manipulation settings. The source code is released at https://github.com/rshaojimmy/DeepFake-Adapter ### MOSAIC: Masked Optimisation with Selective Attention for Image Reconstruction - **Authors:** Pamuditha Somarathne, Tharindu Wickremasinghe, Amashi Niwarthana, A. Thieshanthan, Chamira U.S. Edussooriya, Dushan N. Wadduwage - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2306.00906 - **Pdf link:** https://arxiv.org/pdf/2306.00906 - **Abstract** Compressive sensing (CS) reconstructs images from sub-Nyquist measurements by solving a sparsity-regularized inverse problem. Traditional CS solvers use iterative optimizers with hand crafted sparsifiers, while early data-driven methods directly learn an inverse mapping from the low-dimensional measurement space to the original image space. The latter outperforms the former, but is restrictive to a pre-defined measurement domain. More recent, deep unrolling methods combine traditional proximal gradient methods and data-driven approaches to iteratively refine an image approximation. To achieve higher accuracy, it has also been suggested to learn both the sampling matrix, and the choice of measurement vectors adaptively. Contrary to the current trend, in this work we hypothesize that a general inverse mapping from a random set of compressed measurements to the image domain exists for a given measurement basis, and can be learned. Such a model is single-shot, non-restrictive and does not parametrize the sampling process. To this end, we propose MOSAIC, a novel compressive sensing framework to reconstruct images given any random selection of measurements, sampled using a fixed basis. 
# New submissions for Fri, 2 Jun 23

## Keyword: events

### Revisit Weakly-Supervised Audio-Visual Video Parsing from the Language Perspective

- **Authors:** Yingying Fan, Yu Wu, Yutian Lin, Bo Du
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.00595
- **Pdf link:** https://arxiv.org/pdf/2306.00595
- **Abstract**

We focus on the weakly-supervised audio-visual video parsing task (AVVP), which aims to identify and locate all the events in audio/visual modalities. Previous works only concentrate on video-level overall label denoising across modalities, but overlook the segment-level label noise, where adjacent video segments (i.e., 1-second video clips) may contain different events. However, recognizing events in the segment is challenging because its label could be any combination of events that occur in the video. To address this issue, we consider tackling AVVP from the language perspective, since language could freely describe how various events appear in each segment beyond fixed labels. Specifically, we design language prompts to describe all cases of event appearance for each video. Then, the similarity between language prompts and segments is calculated, where the event of the most similar prompt is regarded as the segment-level label. In addition, to deal with the mislabeled segments, we propose to perform dynamic re-weighting on the unreliable segments to adjust their labels. Experiments show that our simple yet effective approach outperforms state-of-the-art methods by a large margin.
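The prompt-to-segment matching described in this abstract can be sketched roughly as follows. This is a hypothetical illustration only: the paper's actual encoders, prompt templates, and dynamic re-weighting step are not reproduced, and `segment_labels_from_prompts` is an assumed name for the matching step.

```python
import numpy as np

def segment_labels_from_prompts(segment_feats: np.ndarray,
                                prompt_feats: np.ndarray) -> np.ndarray:
    """Assign each video segment the event of its most similar language prompt.

    segment_feats: (num_segments, dim) embeddings of 1-second video segments.
    prompt_feats:  (num_prompts, dim) embeddings of language prompts, one per
                   candidate combination of events.
    Returns the index of the best-matching prompt for every segment.
    """
    # Cosine similarity = dot product of L2-normalised embeddings.
    seg = segment_feats / np.linalg.norm(segment_feats, axis=1, keepdims=True)
    prm = prompt_feats / np.linalg.norm(prompt_feats, axis=1, keepdims=True)
    sim = seg @ prm.T                 # (num_segments, num_prompts)
    return sim.argmax(axis=1)         # segment-level pseudo-labels

# Toy example: 3 segments, 2 prompts in a 4-d embedding space.
segments = np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.9, 0.1, 0.0, 0.0]])
prompts = np.array([[1.0, 0.0, 0.0, 0.0],   # e.g. "a dog is barking"
                    [0.0, 1.0, 0.0, 0.0]])  # e.g. "a man is speaking"
print(segment_labels_from_prompts(segments, prompts))  # [0 1 0]
```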
### DAM-Net: Global Flood Detection from SAR Imagery Using Differential Attention Metric-Based Vision Transformers

- **Authors:** Tamer Saleh, Xingxing Weng, Shimaa Holail, Chen Hao, Gui-Song Xia
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.00704
- **Pdf link:** https://arxiv.org/pdf/2306.00704
- **Abstract**

The detection of flooded areas using high-resolution synthetic aperture radar (SAR) imagery is a critical task with applications in crisis and disaster management, as well as environmental resource planning. However, the complex nature of SAR images presents a challenge that often leads to an overestimation of the flood extent. To address this issue, we propose a novel differential attention metric-based network (DAM-Net) in this study. The DAM-Net comprises two key components: a weight-sharing Siamese backbone to obtain multi-scale change features of multi-temporal images and tokens containing high-level semantic information of water-body changes, and a temporal differential fusion (TDF) module that integrates semantic tokens and change features to generate flood maps with reduced speckle noise. Specifically, the backbone is split into multiple stages. In each stage, we design three modules, namely, temporal-wise feature extraction (TWFE), cross-temporal change attention (CTCA), and temporal-aware change enhancement (TACE), to effectively extract the change features. In TACE of the last stage, we introduce a class token to record high-level semantic information of water-body changes via the attention mechanism. Another challenge faced by data-driven deep learning algorithms is the limited availability of flood detection datasets. To overcome this, we have created the S1GFloods open-source dataset, a global-scale high-resolution Sentinel-1 SAR image pairs dataset covering 46 global flood events between 2015 and 2022.
The experiments on the S1GFloods dataset using the proposed DAM-Net showed top results compared to state-of-the-art methods in terms of overall accuracy, F1-score, and IoU, which reached 97.8%, 96.5%, and 93.2%, respectively. Our dataset and code will be available online at https://github.com/Tamer-Saleh/S1GFlood-Detection.

## Keyword: event camera

### Deformable Convolutions and LSTM-based Flexible Event Frame Fusion Network for Motion Deblurring

- **Authors:** Dan Yang, Mehmet Yamac
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.00834
- **Pdf link:** https://arxiv.org/pdf/2306.00834
- **Abstract**

Event cameras differ from conventional RGB cameras in that they produce asynchronous data sequences. While RGB cameras capture every frame at a fixed rate, event cameras only capture changes in the scene, resulting in sparse and asynchronous data output. Despite the fact that event data carries useful information that can be utilized in motion deblurring of RGB cameras, integrating event and image information remains a challenge. Recent state-of-the-art CNN-based deblurring solutions produce multiple 2-D event frames based on the accumulation of event data over a time period. In most of these techniques, however, the number of event frames is fixed and predefined, which reduces temporal resolution drastically, particularly for scenarios when fast-moving objects are present or when longer exposure times are required. It is also important to note that recent modern cameras (e.g., cameras in mobile phones) dynamically set the exposure time of the image, which presents an additional problem for networks developed for a fixed number of event frames. A Long Short-Term Memory (LSTM)-based event feature extraction module has been developed for addressing these challenges, which enables us to use a dynamically varying number of event frames.
Using these modules, we constructed a state-of-the-art deblurring network, Deformable Convolutions and LSTM-based Flexible Event Frame Fusion Network (DLEFNet). It is particularly useful for scenarios in which exposure times vary depending on factors such as lighting conditions or the presence of fast-moving objects in the scene. It has been demonstrated through evaluation results that the proposed method can outperform the existing state-of-the-art networks for deblurring task in synthetic and real-world data sets.

## Keyword: events camera

There is no result

## Keyword: white balance

There is no result

## Keyword: color contrast

There is no result

## Keyword: AWB

There is no result

## Keyword: ISP

### Multi-Modal Deep Learning for Multi-Temporal Urban Mapping With a Partly Missing Optical Modality

- **Authors:** Sebastian Hafner, Yifang Ban
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2306.00640
- **Pdf link:** https://arxiv.org/pdf/2306.00640
- **Abstract**

This paper proposes a novel multi-temporal urban mapping approach using multi-modal satellite data from the Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 MultiSpectral Instrument (MSI) missions. In particular, it focuses on the problem of a partly missing optical modality due to clouds. The proposed model utilizes two networks to extract features from each modality separately. In addition, a reconstruction network is utilized to approximate the optical features based on the SAR data in case of a missing optical modality. Our experiments on a multi-temporal urban mapping dataset with Sentinel-1 SAR and Sentinel-2 MSI data demonstrate that the proposed method outperforms a multi-modal approach that uses zero values as a replacement for missing optical data, as well as a uni-modal SAR-based approach.
Therefore, the proposed method is effective in exploiting multi-modal data, if available, but it also retains its effectiveness in case the optical modality is missing.

### Universal Test-time Adaptation through Weight Ensembling, Diversity Weighting, and Prior Correction

- **Authors:** Robert A. Marsden, Mario Döbler, Bin Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.00650
- **Pdf link:** https://arxiv.org/pdf/2306.00650
- **Abstract**

Since distribution shifts are likely to occur during test-time and can drastically decrease the model's performance, online test-time adaptation (TTA) continues to update the model after deployment, leveraging the current test data. Clearly, a method proposed for online TTA has to perform well for all kinds of environmental conditions. By introducing the variable factors 'domain non-stationarity' and 'temporal correlation', we first unfold all practically relevant settings and define the entity as universal TTA. To tackle the problem of universal TTA, we identify and highlight several challenges a self-training based method has to deal with, including: 1) model bias and the occurrence of trivial solutions when performing entropy minimization on varying sequence lengths with and without multiple domain shifts, 2) loss of generalization which exacerbates the adaptation to future domain shifts and the occurrence of catastrophic forgetting, and 3) performance degradation due to shifts in label prior. To prevent the model from becoming biased, we leverage a dataset and model-agnostic certainty and diversity weighting. In order to maintain generalization and prevent catastrophic forgetting, we propose to continually weight-average the source and adapted model. To compensate for disparities in the label prior during test-time, we propose an adaptive additive prior correction scheme.
We evaluate our approach, named ROID, on a wide range of settings, datasets, and models, setting new standards in the field of universal TTA.

### DeepFake-Adapter: Dual-Level Adapter for DeepFake Detection

- **Authors:** Rui Shao, Tianxing Wu, Liqiang Nie, Ziwei Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.00863
- **Pdf link:** https://arxiv.org/pdf/2306.00863
- **Abstract**

Existing deepfake detection methods fail to generalize well to unseen or degraded samples, which can be attributed to the over-fitting of low-level forgery patterns. Here we argue that high-level semantics are also indispensable recipes for generalizable forgery detection. Recently, large pre-trained Vision Transformers (ViTs) have shown promising generalization capability. In this paper, we propose the first parameter-efficient tuning approach for deepfake detection, namely DeepFake-Adapter, to effectively and efficiently adapt the generalizable high-level semantics from large pre-trained ViTs to aid deepfake detection. Given large pre-trained models but limited deepfake data, DeepFake-Adapter introduces lightweight yet dedicated dual-level adapter modules to a ViT while keeping the model backbone frozen. Specifically, to guide the adaptation process to be aware of both global and local forgery cues of deepfake data, 1) we not only insert Globally-aware Bottleneck Adapters in parallel to MLP layers of ViT, 2) but also actively cross-attend Locally-aware Spatial Adapters with features from ViT. Unlike existing deepfake detection methods merely focusing on low-level forgery patterns, the forgery detection process of our model can be regularized by generalizable high-level semantics from a pre-trained ViT and adapted by global and local low-level forgeries of deepfake data. Extensive experiments on several standard deepfake detection benchmarks validate the effectiveness of our approach.
Notably, DeepFake-Adapter demonstrates a convincing advantage under cross-dataset and cross-manipulation settings. The source code is released at https://github.com/rshaojimmy/DeepFake-Adapter

### MOSAIC: Masked Optimisation with Selective Attention for Image Reconstruction

- **Authors:** Pamuditha Somarathne, Tharindu Wickremasinghe, Amashi Niwarthana, A. Thieshanthan, Chamira U.S. Edussooriya, Dushan N. Wadduwage
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2306.00906
- **Pdf link:** https://arxiv.org/pdf/2306.00906
- **Abstract**

Compressive sensing (CS) reconstructs images from sub-Nyquist measurements by solving a sparsity-regularized inverse problem. Traditional CS solvers use iterative optimizers with hand crafted sparsifiers, while early data-driven methods directly learn an inverse mapping from the low-dimensional measurement space to the original image space. The latter outperforms the former, but is restrictive to a pre-defined measurement domain. More recent, deep unrolling methods combine traditional proximal gradient methods and data-driven approaches to iteratively refine an image approximation. To achieve higher accuracy, it has also been suggested to learn both the sampling matrix, and the choice of measurement vectors adaptively. Contrary to the current trend, in this work we hypothesize that a general inverse mapping from a random set of compressed measurements to the image domain exists for a given measurement basis, and can be learned. Such a model is single-shot, non-restrictive and does not parametrize the sampling process. To this end, we propose MOSAIC, a novel compressive sensing framework to reconstruct images given any random selection of measurements, sampled using a fixed basis.
Motivated by the uneven distribution of information across measurements, MOSAIC incorporates an embedding technique to efficiently apply attention mechanisms on an encoded sequence of measurements, while dispensing the need to use unrolled deep networks. A range of experiments validate our proposed architecture as a promising alternative for existing CS reconstruction methods, by achieving the state-of-the-art for metrics of reconstruction accuracy on standard datasets.

### Cocktail: Mixing Multi-Modality Controls for Text-Conditional Image Generation

- **Authors:** Minghui Hu, Jianbin Zheng, Daqing Liu, Chuanxia Zheng, Chaoyue Wang, Dacheng Tao, Tat-Jen Cham
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.00964
- **Pdf link:** https://arxiv.org/pdf/2306.00964
- **Abstract**

Text-conditional diffusion models are able to generate high-fidelity images with diverse contents. However, linguistic representations frequently exhibit ambiguous descriptions of the envisioned objective imagery, requiring the incorporation of additional control signals to bolster the efficacy of text-guided diffusion models. In this work, we propose Cocktail, a pipeline to mix various modalities into one embedding, amalgamated with a generalized ControlNet (gControlNet), a controllable normalisation (ControlNorm), and a spatial guidance sampling method, to actualize multi-modal and spatially-refined control for text-conditional diffusion models. Specifically, we introduce a hyper-network gControlNet, dedicated to the alignment and infusion of the control signals from disparate modalities into the pre-trained diffusion model. gControlNet is capable of accepting flexible modality signals, encompassing the simultaneous reception of any combination of modality signals, or the supplementary fusion of multiple modality signals. The control signals are then fused and injected into the backbone model according to our proposed ControlNorm.
Furthermore, our advanced spatial guidance sampling methodology proficiently incorporates the control signal into the designated region, thereby circumventing the manifestation of undesired objects within the generated image. We demonstrate the results of our method in controlling various modalities, proving high-quality synthesis and fidelity to multiple external signals.

## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: compression

### HySpecNet-11k: A Large-Scale Hyperspectral Dataset for Benchmarking Learning-Based Hyperspectral Image Compression Methods

- **Authors:** Martin Hermann Paul Fuchs, Begüm Demir
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2306.00385
- **Pdf link:** https://arxiv.org/pdf/2306.00385
- **Abstract**

The development of learning-based hyperspectral image compression methods has recently attracted great attention in remote sensing. Such methods require a high number of hyperspectral images to be used during training to optimize all parameters and reach a high compression performance. However, existing hyperspectral datasets are not sufficient to train and evaluate learning-based compression methods, which hinders the research in this field. To address this problem, in this paper we present HySpecNet-11k that is a large-scale hyperspectral benchmark dataset made up of 11,483 nonoverlapping image patches. Each patch is a portion of 128 $\times$ 128 pixels with 224 spectral bands and a ground sample distance of 30 m. We exploit HySpecNet-11k to benchmark the current state of the art in learning-based hyperspectral image compression by focussing our attention on various 1D, 2D and 3D convolutional autoencoder architectures. Nevertheless, HySpecNet-11k can be used for any unsupervised learning task in the framework of hyperspectral image analysis.
The dataset, our code and the pre-trained weights are publicly available at https://hyspecnet.rsim.berlin.

### Wuerstchen: Efficient Pretraining of Text-to-Image Models

- **Authors:** Pablo Pernias, Dominic Rampas, Marc Aubreville
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.00637
- **Pdf link:** https://arxiv.org/pdf/2306.00637
- **Abstract**

We introduce Wuerstchen, a novel technique for text-to-image synthesis that unites competitive performance with unprecedented cost-effectiveness and ease of training on constrained hardware. Building on recent advancements in machine learning, our approach, which utilizes latent diffusion strategies at strong latent image compression rates, significantly reduces the computational burden, typically associated with state-of-the-art models, while preserving, if not enhancing, the quality of generated images. Wuerstchen achieves notable speed improvements at inference time, thereby rendering real-time applications more viable. One of the key advantages of our method lies in its modest training requirements of only 9,200 GPU hours, slashing the usual costs significantly without compromising the end performance. In a comparison against the state-of-the-art, we found the approach to yield strong competitiveness. This paper opens the door to a new line of research that prioritizes both performance and computational accessibility, hence democratizing the use of sophisticated AI technologies. Through Wuerstchen, we demonstrate a compelling stride forward in the realm of text-to-image synthesis, offering an innovative path to explore in future research.
## Keyword: RAW

### Controllable Motion Diffusion Model

- **Authors:** Yi Shi, Jingbo Wang, Xuekun Jiang, Bo Dai
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2306.00416
- **Pdf link:** https://arxiv.org/pdf/2306.00416
- **Abstract**

Generating realistic and controllable motions for virtual characters is a challenging task in computer animation, and its implications extend to games, simulations, and virtual reality. Recent studies have drawn inspiration from the success of diffusion models in image generation, demonstrating the potential for addressing this task. However, the majority of these studies have been limited to offline applications that target at sequence-level generation that generates all steps simultaneously. To enable real-time motion synthesis with diffusion models in response to time-varying control signals, we propose the framework of the Controllable Motion Diffusion Model (COMODO). Our framework begins with an auto-regressive motion diffusion model (A-MDM), which generates motion sequences step by step. In this way, simply using the standard DDPM algorithm without any additional complexity, our framework is able to generate high-fidelity motion sequences over extended periods with different types of control signals. Then, we propose our reinforcement learning-based controller and controlling strategies on top of the A-MDM model, so that our framework can steer the motion synthesis process across multiple tasks, including target reaching, joystick-based control, goal-oriented control, and trajectory following. The proposed framework enables the real-time generation of diverse motions that react adaptively to user commands on-the-fly, thereby enhancing the overall user experience.
Besides, it is compatible with the inpainting-based editing methods and can predict much more diverse motions without additional fine-tuning of the basic motion generation models. We conduct comprehensive experiments to evaluate the effectiveness of our framework in performing various tasks and compare its performance against state-of-the-art methods.

## Keyword: raw image

There is no result
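The auto-regressive, step-by-step generation described in the Controllable Motion Diffusion Model entry above can be sketched as a loop of per-frame diffusion chains. This is a deliberately toy sketch: the real A-MDM network, DDPM noise schedule, and RL controller are not reproduced, and `denoise_step` below is a hypothetical stand-in for the learned reverse-diffusion update.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x_t, t, prev_frames, control):
    """Stand-in for one reverse-diffusion (DDPM) update. A real A-MDM would
    call a learned network conditioned on past frames and the control signal."""
    return 0.5 * x_t + 0.1 * control  # toy dynamics, not the actual model

def generate_motion(num_frames, pose_dim, diffusion_steps, control):
    """Auto-regressive motion generation: each frame is produced by running a
    full diffusion chain, conditioned on the frames generated so far."""
    frames = []
    for _ in range(num_frames):
        x = rng.standard_normal(pose_dim)        # start each frame from noise
        for t in reversed(range(diffusion_steps)):
            x = denoise_step(x, t, frames, control)
        frames.append(x)  # commit the frame; it conditions the next chain
    return np.stack(frames)

motion = generate_motion(num_frames=4, pose_dim=3, diffusion_steps=10,
                         control=np.ones(3))    # e.g. a joystick direction
print(motion.shape)  # (4, 3)
```

Because each frame is committed before the next chain starts, a time-varying `control` can be swapped in between frames, which is what makes this formulation suitable for real-time, on-the-fly steering.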
of our model can be regularized by generalizable high level semantics from a pre trained vit and adapted by global and local low level forgeries of deepfake data extensive experiments on several standard deepfake detection benchmarks validate the effectiveness of our approach notably deepfake adapter demonstrates a convincing advantage under cross dataset and cross manipulation settings the source code is released at mosaic masked optimisation with selective attention for image reconstruction authors pamuditha somarathne tharindu wickremasinghe amashi niwarthana a thieshanthan chamira u s edussooriya dushan n wadduwage subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract compressive sensing cs reconstructs images from sub nyquist measurements by solving a sparsity regularized inverse problem traditional cs solvers use iterative optimizers with hand crafted sparsifiers while early data driven methods directly learn an inverse mapping from the low dimensional measurement space to the original image space the latter outperforms the former but is restrictive to a pre defined measurement domain more recent deep unrolling methods combine traditional proximal gradient methods and data driven approaches to iteratively refine an image approximation to achieve higher accuracy it has also been suggested to learn both the sampling matrix and the choice of measurement vectors adaptively contrary to the current trend in this work we hypothesize that a general inverse mapping from a random set of compressed measurements to the image domain exists for a given measurement basis and can be learned such a model is single shot non restrictive and does not parametrize the sampling process to this end we propose mosaic a novel compressive sensing framework to reconstruct images given any random selection of measurements sampled using a fixed basis motivated by the uneven distribution of information across measurements 
mosaic incorporates an embedding technique to efficiently apply attention mechanisms on an encoded sequence of measurements while dispensing the need to use unrolled deep networks a range of experiments validate our proposed architecture as a promising alternative for existing cs reconstruction methods by achieving the state of the art for metrics of reconstruction accuracy on standard datasets cocktail mixing multi modality controls for text conditional image generation authors minghui hu jianbin zheng daqing liu chuanxia zheng chaoyue wang dacheng tao tat jen cham subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract text conditional diffusion models are able to generate high fidelity images with diverse contents however linguistic representations frequently exhibit ambiguous descriptions of the envisioned objective imagery requiring the incorporation of additional control signals to bolster the efficacy of text guided diffusion models in this work we propose cocktail a pipeline to mix various modalities into one embedding amalgamated with a generalized controlnet gcontrolnet a controllable normalisation controlnorm and a spatial guidance sampling method to actualize multi modal and spatially refined control for text conditional diffusion models specifically we introduce a hyper network gcontrolnet dedicated to the alignment and infusion of the control signals from disparate modalities into the pre trained diffusion model gcontrolnet is capable of accepting flexible modality signals encompassing the simultaneous reception of any combination of modality signals or the supplementary fusion of multiple modality signals the control signals are then fused and injected into the backbone model according to our proposed controlnorm furthermore our advanced spatial guidance sampling methodology proficiently incorporates the control signal into the designated region thereby circumventing the manifestation of undesired objects within the 
generated image we demonstrate the results of our method in controlling various modalities proving high quality synthesis and fidelity to multiple external signals keyword image signal processing there is no result keyword image signal process there is no result keyword compression hyspecnet a large scale hyperspectral dataset for benchmarking learning based hyperspectral image compression methods authors martin hermann paul fuchs begüm demir subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract the development of learning based hyperspectral image compression methods has recently attracted great attention in remote sensing such methods require a high number of hyperspectral images to be used during training to optimize all parameters and reach a high compression performance however existing hyperspectral datasets are not sufficient to train and evaluate learning based compression methods which hinders the research in this field to address this problem in this paper we present hyspecnet that is a large scale hyperspectral benchmark dataset made up of nonoverlapping image patches each patch is a portion of times pixels with spectral bands and a ground sample distance of m we exploit hyspecnet to benchmark the current state of the art in learning based hyperspectral image compression by focussing our attention on various and convolutional autoencoder architectures nevertheless hyspecnet can be used for any unsupervised learning task in the framework of hyperspectral image analysis the dataset our code and the pre trained weights are publicly available at wuerstchen efficient pretraining of text to image models authors pablo pernias dominic rampas marc aubreville subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract we introduce wuerstchen a novel technique for text to image synthesis that unites competitive performance with unprecedented cost effectiveness and ease of 
training on constrained hardware building on recent advancements in machine learning our approach which utilizes latent diffusion strategies at strong latent image compression rates significantly reduces the computational burden typically associated with state of the art models while preserving if not enhancing the quality of generated images wuerstchen achieves notable speed improvements at inference time thereby rendering real time applications more viable one of the key advantages of our method lies in its modest training requirements of only gpu hours slashing the usual costs significantly without compromising the end performance in a comparison against the state of the art we found the approach to yield strong competitiveness this paper opens the door to a new line of research that prioritizes both performance and computational accessibility hence democratizing the use of sophisticated ai technologies through wuerstchen we demonstrate a compelling stride forward in the realm of text to image synthesis offering an innovative path to explore in future research keyword raw controllable motion diffusion model authors yi shi jingbo wang xuekun jiang bo dai subjects computer vision and pattern recognition cs cv artificial intelligence cs ai graphics cs gr arxiv link pdf link abstract generating realistic and controllable motions for virtual characters is a challenging task in computer animation and its implications extend to games simulations and virtual reality recent studies have drawn inspiration from the success of diffusion models in image generation demonstrating the potential for addressing this task however the majority of these studies have been limited to offline applications that target at sequence level generation that generates all steps simultaneously to enable real time motion synthesis with diffusion models in response to time varying control signals we propose the framework of the controllable motion diffusion model comodo our framework begins with 
an auto regressive motion diffusion model a mdm which generates motion sequences step by step in this way simply using the standard ddpm algorithm without any additional complexity our framework is able to generate high fidelity motion sequences over extended periods with different types of control signals then we propose our reinforcement learning based controller and controlling strategies on top of the a mdm model so that our framework can steer the motion synthesis process across multiple tasks including target reaching joystick based control goal oriented control and trajectory following the proposed framework enables the real time generation of diverse motions that react adaptively to user commands on the fly thereby enhancing the overall user experience besides it is compatible with the inpainting based editing methods and can predict much more diverse motions without additional fine tuning of the basic motion generation models we conduct comprehensive experiments to evaluate the effectiveness of our framework in performing various tasks and compare its performance against state of the art methods keyword raw image there is no result
1
95,138
3,934,640,636
IssuesEvent
2016-04-25 23:43:04
neuropoly/spinalcordtoolbox
https://api.github.com/repos/neuropoly/spinalcordtoolbox
closed
sct_register_to_template: bug
bug priority: high sct_register_to_template
Data: /Volumes/folder_shared/sct_issues/20160421_issue830 Command line: ~~~ pass:/Volumes/slevy-5/data/criugm/d_sp_pain_30_retest/t2 $ sct_register_to_template.py -i t2crop.nii.gz -l t2crop_landmarks.nii.gz -s t2crop_seg.nii.gz ~~~ Output: ~~~ Traceback (most recent call last): File "/Users/slevy_local/spinalcordtoolbox/scripts/sct_register_to_template.py", line 547, in <module> main() File "/Users/slevy_local/spinalcordtoolbox/scripts/sct_register_to_template.py", line 192, in main fname_template = fname_template_list[0] IndexError: list index out of range ~~~
1.0
sct_register_to_template: bug - Data: /Volumes/folder_shared/sct_issues/20160421_issue830 Command line: ~~~ pass:/Volumes/slevy-5/data/criugm/d_sp_pain_30_retest/t2 $ sct_register_to_template.py -i t2crop.nii.gz -l t2crop_landmarks.nii.gz -s t2crop_seg.nii.gz ~~~ Output: ~~~ Traceback (most recent call last): File "/Users/slevy_local/spinalcordtoolbox/scripts/sct_register_to_template.py", line 547, in <module> main() File "/Users/slevy_local/spinalcordtoolbox/scripts/sct_register_to_template.py", line 192, in main fname_template = fname_template_list[0] IndexError: list index out of range ~~~
non_process
sct register to template bug data volumes folder shared sct issues command line pass volumes slevy data criugm d sp pain retest sct register to template py i nii gz l landmarks nii gz s seg nii gz output traceback most recent call last file users slevy local spinalcordtoolbox scripts sct register to template py line in main file users slevy local spinalcordtoolbox scripts sct register to template py line in main fname template fname template list indexerror list index out of range
0
13,402
4,704,518,517
IssuesEvent
2016-10-13 11:47:27
mozilla/addons-frontend
https://api.github.com/repos/mozilla/addons-frontend
reopened
Prevent the test suite from ever calling a real API
code quality
### Describe the problem and steps to reproduce it: * write a test for an action that calls the API * forget to mock out the `callApi` method ### What happened? This will attempt to make a real API request (depending on what server is configured). The result will probably be a harmless 401 Unauthorized since no real tokens are used in the test suite but this can cause slowness. ### What did you expect to happen? It should not make any real API request at all even though it was a mistake not to mock the component. ### Anything else we should know? (Please include a link to the page, screenshots and any relevant files.)
1.0
Prevent the test suite from ever calling a real API - ### Describe the problem and steps to reproduce it: * write a test for an action that calls the API * forget to mock out the `callApi` method ### What happened? This will attempt to make a real API request (depending on what server is configured). The result will probably be a harmless 401 Unauthorized since no real tokens are used in the test suite but this can cause slowness. ### What did you expect to happen? It should not make any real API request at all even though it was a mistake not to mock the component. ### Anything else we should know? (Please include a link to the page, screenshots and any relevant files.)
non_process
prevent the test suite from ever calling a real api describe the problem and steps to reproduce it write a test for an action that calls the api forget to mock out the callapi method what happened this will attempt to make a real api request depending on what server is configured the result will probably be a harmless unauthorized since no real tokens are used in the test suite but this can cause slowness what did you expect to happen it should not make any real api request at all even though it was a mistake not to mock the component anything else we should know please include a link to the page screenshots and any relevant files
0
9,324
12,338,929,020
IssuesEvent
2020-05-14 17:14:55
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Bootstrap test with --all_incompatible_changes
P1 team-EngProd type: process
We need a test to ensure that Bazel can bootstrap itself using `--all_incompatible_changes`.
1.0
Bootstrap test with --all_incompatible_changes - We need a test to ensure that Bazel can bootstrap itself using `--all_incompatible_changes`.
process
bootstrap test with all incompatible changes we need a test to ensure that bazel can bootstrap itself using all incompatible changes
1
22,688
31,992,396,280
IssuesEvent
2023-09-21 06:59:33
X-Sharp/XSharpPublic
https://api.github.com/repos/X-Sharp/XSharpPublic
closed
Obscure warning about unreachable code
bug Compiler Preprocessor
For the following code, the compiler generates an "Unreachable code detected" warning: ``` FUNCTION Start() AS VOID STRICT FIELD FLD1 SET RELATION ADDITIVE TO FLD1 INTO SOMEFILE RETURN ``` **Example project:** [XSharpBetaTest.zip](https://github.com/X-Sharp/XSharpPublic/files/12299775/XSharpBetaTest.zip) **Environment:** IDE VS 2022 17.7 Compiler X# 2.17.0.3
1.0
Obscure warning about unreachable code - For the following code, the compiler generates an "Unreachable code detected" warning: ``` FUNCTION Start() AS VOID STRICT FIELD FLD1 SET RELATION ADDITIVE TO FLD1 INTO SOMEFILE RETURN ``` **Example project:** [XSharpBetaTest.zip](https://github.com/X-Sharp/XSharpPublic/files/12299775/XSharpBetaTest.zip) **Environment:** IDE VS 2022 17.7 Compiler X# 2.17.0.3
process
obscure warning about unreachable code for the following code the compiler generates an unreachable code detected warning function start as void strict field set relation additive to into somefile return example project environment ide vs compiler x
1
21,621
30,022,543,419
IssuesEvent
2023-06-27 01:36:30
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Crash when targeting platform with invalid crosstool
P3 type: support / not a bug (process) team-Rules-CPP stale
### Description of the problem / feature request: Bazel crashes with a cast error when targeting a platform with an invalid crosstool. ### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. #### .bazelrc ```bash build --platforms=@io_bazel_rules_go//go/toolchain:linux_amd64_cgo build --crosstool_top=@zig_sdk//:x86_64-linux-gnu.2.28_toolchain_cc ``` #### WORKSPACE ```py3 load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") http_archive( name = "bazel-zig-cc", sha256 = "f28327aaf5d66cb7ca42977af6284faf9b8306df8e152fcfc82ae33ad92c443b", strip_prefix = "bazel-zig-cc-v0.3.2", urls = ["https://git.sr.ht/~motiejus/bazel-zig-cc/archive/v0.3.2.tar.gz"], ) http_archive( name = "io_bazel_rules_go", sha256 = "2b1641428dff9018f9e85c0384f03ec6c10660d935b750e3fa1492a281a53b0f", urls = [ "https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.29.0/rules_go-v0.29.0.zip", "https://github.com/bazelbuild/rules_go/releases/download/v0.29.0/rules_go-v0.29.0.zip", ], ) ``` Attempt to build any cgo binary and it should crash. ### What operating system are you running Bazel on? MacOS ### What's the output of `bazel info release`? `release 4.2.1` ### Have you found anything relevant by searching the web? Nope. ### Any other information, logs, or outputs that you want to share? ``` (15:30:47) FATAL: bazel crashed due to an internal error. 
Printing stack trace: java.lang.RuntimeException: Unrecoverable error while evaluating node 'ConfiguredTargetKey{label=<REDACTED>, config=BuildConfigurationValue.Key[a049b56e6e736ccde967916a39f8d51cd0bab31e483eecffba0efd38aee91aab]}' (requested by nodes 'ConfiguredTargetKey{label=<REDACTED> config=BuildConfigurationValue.Key[a049b56e6e736ccde967916a39f8d51cd0bab31e483eecffba0efd38aee91aab]}') at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:563) at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:398) at java.base/java.util.concurrent.ForkJoinTask$AdaptedRunnableAction.exec(Unknown Source) at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source) at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown Source) at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source) at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source) at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source) Caused by: java.lang.ClassCastException: class com.google.devtools.build.lib.rules.cpp.CcToolchainAttributesProvider cannot be cast to class com.google.devtools.build.lib.rules.cpp.CcToolchainProvider (com.google.devtools.build.lib.rules.cpp.CcToolchainAttributesProvider and com.google.devtools.build.lib.rules.cpp.CcToolchainProvider are in unnamed module of loader 'app') at com.google.devtools.build.lib.rules.cpp.CppHelper.getToolchainFromCrosstoolTop(CppHelper.java:363) at com.google.devtools.build.lib.rules.cpp.CppHelper.getToolchain(CppHelper.java:335) at com.google.devtools.build.lib.rules.cpp.CppHelper.getToolchain(CppHelper.java:327) at com.google.devtools.build.lib.rules.cpp.CppHelper.getToolchain(CppHelper.java:316) at com.google.devtools.build.lib.rules.cpp.CppHelper.getToolchainUsingDefaultCcToolchainAttribute(CppHelper.java:240) at 
com.google.devtools.build.lib.rules.cpp.CcImport.create(CcImport.java:76) at com.google.devtools.build.lib.rules.cpp.CcImport.create(CcImport.java:38) at com.google.devtools.build.lib.analysis.ConfiguredTargetFactory.createRule(ConfiguredTargetFactory.java:385) at com.google.devtools.build.lib.analysis.ConfiguredTargetFactory.createConfiguredTarget(ConfiguredTargetFactory.java:195) at com.google.devtools.build.lib.skyframe.SkyframeBuildView.createConfiguredTarget(SkyframeBuildView.java:940) at com.google.devtools.build.lib.skyframe.ConfiguredTargetFunction.createConfiguredTarget(ConfiguredTargetFunction.java:1031) at com.google.devtools.build.lib.skyframe.ConfiguredTargetFunction.compute(ConfiguredTargetFunction.java:371) at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:477) ... 7 more ```
1.0
Crash when targeting platform with invalid crosstool - ### Description of the problem / feature request: Bazel crashes with a cast error when targeting a platform with an invalid crosstool. ### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. #### .bazelrc ```bash build --platforms=@io_bazel_rules_go//go/toolchain:linux_amd64_cgo build --crosstool_top=@zig_sdk//:x86_64-linux-gnu.2.28_toolchain_cc ``` #### WORKSPACE ```py3 load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") http_archive( name = "bazel-zig-cc", sha256 = "f28327aaf5d66cb7ca42977af6284faf9b8306df8e152fcfc82ae33ad92c443b", strip_prefix = "bazel-zig-cc-v0.3.2", urls = ["https://git.sr.ht/~motiejus/bazel-zig-cc/archive/v0.3.2.tar.gz"], ) http_archive( name = "io_bazel_rules_go", sha256 = "2b1641428dff9018f9e85c0384f03ec6c10660d935b750e3fa1492a281a53b0f", urls = [ "https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.29.0/rules_go-v0.29.0.zip", "https://github.com/bazelbuild/rules_go/releases/download/v0.29.0/rules_go-v0.29.0.zip", ], ) ``` Attempt to build any cgo binary and it should crash. ### What operating system are you running Bazel on? MacOS ### What's the output of `bazel info release`? `release 4.2.1` ### Have you found anything relevant by searching the web? Nope. ### Any other information, logs, or outputs that you want to share? ``` (15:30:47) FATAL: bazel crashed due to an internal error. 
Printing stack trace: java.lang.RuntimeException: Unrecoverable error while evaluating node 'ConfiguredTargetKey{label=<REDACTED>, config=BuildConfigurationValue.Key[a049b56e6e736ccde967916a39f8d51cd0bab31e483eecffba0efd38aee91aab]}' (requested by nodes 'ConfiguredTargetKey{label=<REDACTED> config=BuildConfigurationValue.Key[a049b56e6e736ccde967916a39f8d51cd0bab31e483eecffba0efd38aee91aab]}') at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:563) at com.google.devtools.build.lib.concurrent.AbstractQueueVisitor$WrappedRunnable.run(AbstractQueueVisitor.java:398) at java.base/java.util.concurrent.ForkJoinTask$AdaptedRunnableAction.exec(Unknown Source) at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source) at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown Source) at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source) at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source) at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source) Caused by: java.lang.ClassCastException: class com.google.devtools.build.lib.rules.cpp.CcToolchainAttributesProvider cannot be cast to class com.google.devtools.build.lib.rules.cpp.CcToolchainProvider (com.google.devtools.build.lib.rules.cpp.CcToolchainAttributesProvider and com.google.devtools.build.lib.rules.cpp.CcToolchainProvider are in unnamed module of loader 'app') at com.google.devtools.build.lib.rules.cpp.CppHelper.getToolchainFromCrosstoolTop(CppHelper.java:363) at com.google.devtools.build.lib.rules.cpp.CppHelper.getToolchain(CppHelper.java:335) at com.google.devtools.build.lib.rules.cpp.CppHelper.getToolchain(CppHelper.java:327) at com.google.devtools.build.lib.rules.cpp.CppHelper.getToolchain(CppHelper.java:316) at com.google.devtools.build.lib.rules.cpp.CppHelper.getToolchainUsingDefaultCcToolchainAttribute(CppHelper.java:240) at 
com.google.devtools.build.lib.rules.cpp.CcImport.create(CcImport.java:76) at com.google.devtools.build.lib.rules.cpp.CcImport.create(CcImport.java:38) at com.google.devtools.build.lib.analysis.ConfiguredTargetFactory.createRule(ConfiguredTargetFactory.java:385) at com.google.devtools.build.lib.analysis.ConfiguredTargetFactory.createConfiguredTarget(ConfiguredTargetFactory.java:195) at com.google.devtools.build.lib.skyframe.SkyframeBuildView.createConfiguredTarget(SkyframeBuildView.java:940) at com.google.devtools.build.lib.skyframe.ConfiguredTargetFunction.createConfiguredTarget(ConfiguredTargetFunction.java:1031) at com.google.devtools.build.lib.skyframe.ConfiguredTargetFunction.compute(ConfiguredTargetFunction.java:371) at com.google.devtools.build.skyframe.AbstractParallelEvaluator$Evaluate.run(AbstractParallelEvaluator.java:477) ... 7 more ```
process
crash when targeting platform with invalid crosstool description of the problem feature request bazel crashes with a cast error when targeting a platform with an invalid crosstool bugs what s the simplest easiest way to reproduce this bug please provide a minimal example if possible bazelrc bash build platforms io bazel rules go go toolchain linux cgo build crosstool top zig sdk linux gnu toolchain cc workspace load bazel tools tools build defs repo http bzl http archive http archive name bazel zig cc strip prefix bazel zig cc urls http archive name io bazel rules go urls attempt to build any cgo binary and it should crash what operating system are you running bazel on macos what s the output of bazel info release release have you found anything relevant by searching the web nope any other information logs or outputs that you want to share fatal bazel crashed due to an internal error printing stack trace java lang runtimeexception unrecoverable error while evaluating node configuredtargetkey label config buildconfigurationvalue key requested by nodes configuredtargetkey label config buildconfigurationvalue key at com google devtools build skyframe abstractparallelevaluator evaluate run abstractparallelevaluator java at com google devtools build lib concurrent abstractqueuevisitor wrappedrunnable run abstractqueuevisitor java at java base java util concurrent forkjointask adaptedrunnableaction exec unknown source at java base java util concurrent forkjointask doexec unknown source at java base java util concurrent forkjoinpool workqueue toplevelexec unknown source at java base java util concurrent forkjoinpool scan unknown source at java base java util concurrent forkjoinpool runworker unknown source at java base java util concurrent forkjoinworkerthread run unknown source caused by java lang classcastexception class com google devtools build lib rules cpp cctoolchainattributesprovider cannot be cast to class com google devtools build lib rules cpp 
cctoolchainprovider com google devtools build lib rules cpp cctoolchainattributesprovider and com google devtools build lib rules cpp cctoolchainprovider are in unnamed module of loader app at com google devtools build lib rules cpp cpphelper gettoolchainfromcrosstooltop cpphelper java at com google devtools build lib rules cpp cpphelper gettoolchain cpphelper java at com google devtools build lib rules cpp cpphelper gettoolchain cpphelper java at com google devtools build lib rules cpp cpphelper gettoolchain cpphelper java at com google devtools build lib rules cpp cpphelper gettoolchainusingdefaultcctoolchainattribute cpphelper java at com google devtools build lib rules cpp ccimport create ccimport java at com google devtools build lib rules cpp ccimport create ccimport java at com google devtools build lib analysis configuredtargetfactory createrule configuredtargetfactory java at com google devtools build lib analysis configuredtargetfactory createconfiguredtarget configuredtargetfactory java at com google devtools build lib skyframe skyframebuildview createconfiguredtarget skyframebuildview java at com google devtools build lib skyframe configuredtargetfunction createconfiguredtarget configuredtargetfunction java at com google devtools build lib skyframe configuredtargetfunction compute configuredtargetfunction java at com google devtools build skyframe abstractparallelevaluator evaluate run abstractparallelevaluator java more
1
254,732
27,413,793,390
IssuesEvent
2023-03-01 12:25:27
scm-automation-project/npm-6-with-lock-file-project
https://api.github.com/repos/scm-automation-project/npm-6-with-lock-file-project
closed
minimist-1.2.0.tgz: 1 vulnerabilities (highest severity is: 5.6) - autoclosed
Mend: dependency security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-1.2.0.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/minimist/package.json</p> <p> <p>Found in HEAD commit: <a href="https://github.com/scm-automation-project/npm-6-with-lock-file-project/commit/608438533a463d2e344219ddaca4e9dd6b71e6f9">608438533a463d2e344219ddaca4e9dd6b71e6f9</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (minimist version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2020-7598](https://www.mend.io/vulnerability-database/CVE-2020-7598) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.6 | minimist-1.2.0.tgz | Direct | 1.2.2 | &#9989; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-7598</summary> ### Vulnerable Library - <b>minimist-1.2.0.tgz</b></p> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - :x: **minimist-1.2.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a 
href="https://github.com/scm-automation-project/npm-6-with-lock-file-project/commit/608438533a463d2e344219ddaca4e9dd6b71e6f9">608438533a463d2e344219ddaca4e9dd6b71e6f9</a></p> </p> <p></p> ### Vulnerability Details <p> minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload. <p>Publish Date: 2020-03-11 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-7598>CVE-2020-7598</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.6</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-7598">https://nvd.nist.gov/vuln/detail/CVE-2020-7598</a></p> <p>Release Date: 2020-03-11</p> <p>Fix Resolution: 1.2.2</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
True
minimist-1.2.0.tgz: 1 vulnerabilities (highest severity is: 5.6) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-1.2.0.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/minimist/package.json</p> <p> <p>Found in HEAD commit: <a href="https://github.com/scm-automation-project/npm-6-with-lock-file-project/commit/608438533a463d2e344219ddaca4e9dd6b71e6f9">608438533a463d2e344219ddaca4e9dd6b71e6f9</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (minimist version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2020-7598](https://www.mend.io/vulnerability-database/CVE-2020-7598) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.6 | minimist-1.2.0.tgz | Direct | 1.2.2 | &#9989; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-7598</summary> ### Vulnerable Library - <b>minimist-1.2.0.tgz</b></p> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - :x: **minimist-1.2.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a 
href="https://github.com/scm-automation-project/npm-6-with-lock-file-project/commit/608438533a463d2e344219ddaca4e9dd6b71e6f9">608438533a463d2e344219ddaca4e9dd6b71e6f9</a></p> </p> <p></p> ### Vulnerability Details <p> minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload. <p>Publish Date: 2020-03-11 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-7598>CVE-2020-7598</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.6</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-7598">https://nvd.nist.gov/vuln/detail/CVE-2020-7598</a></p> <p>Release Date: 2020-03-11</p> <p>Fix Resolution: 1.2.2</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
non_process
minimist tgz vulnerabilities highest severity is autoclosed vulnerable library minimist tgz parse argument options library home page a href path to dependency file package json path to vulnerable library node modules minimist package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in minimist version remediation available medium minimist tgz direct details cve vulnerable library minimist tgz parse argument options library home page a href path to dependency file package json path to vulnerable library node modules minimist package json dependency hierarchy x minimist tgz vulnerable library found in head commit a href vulnerability details minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue
0
20,049
26,537,986,936
IssuesEvent
2023-01-19 16:58:45
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
[Mirror] rules_cc-0.0.2.tar.gz, rules_java-5.4.0.tar.gz, (rules_proto) 5.3.0-21.7.tar.gz
P2 type: process team-OSS mirror request
### Please list the URLs of the archives you'd like to mirror: https://github.com/bazelbuild/rules_cc/releases/download/0.0.2/rules_cc-0.0.2.tar.gz https://github.com/bazelbuild/rules_java/releases/download/5.4.0/rules_java-5.4.0.tar.gz https://github.com/bazelbuild/rules_proto/archive/refs/tags/5.3.0-21.7.tar.gz Currently, as a part of building bazel itself, archives are downloaded only from github as mentioned distdir_deps.bzl: https://github.com/bazelbuild/bazel/blob/bfa72889ec6cb9bfb359226124c43ecffa44b078/distdir_deps.bzl#L56-L97 They should also be made available on the mirror and perhaps downloaded from there when building bazel. Let me know if I should make a separate task for this, thanks!
1.0
[Mirror] rules_cc-0.0.2.tar.gz, rules_java-5.4.0.tar.gz, (rules_proto) 5.3.0-21.7.tar.gz - ### Please list the URLs of the archives you'd like to mirror: https://github.com/bazelbuild/rules_cc/releases/download/0.0.2/rules_cc-0.0.2.tar.gz https://github.com/bazelbuild/rules_java/releases/download/5.4.0/rules_java-5.4.0.tar.gz https://github.com/bazelbuild/rules_proto/archive/refs/tags/5.3.0-21.7.tar.gz Currently, as a part of building bazel itself, archives are downloaded only from github as mentioned distdir_deps.bzl: https://github.com/bazelbuild/bazel/blob/bfa72889ec6cb9bfb359226124c43ecffa44b078/distdir_deps.bzl#L56-L97 They should also be made available on the mirror and perhaps downloaded from there when building bazel. Let me know if I should make a separate task for this, thanks!
process
rules cc tar gz rules java tar gz rules proto tar gz please list the urls of the archives you d like to mirror currently as a part of building bazel itself archives are downloaded only from github as mentioned distdir deps bzl they should also be made available on the mirror and perhaps downloaded from there when building bazel let me know if i should make a separate task for this thanks
1
135,434
5,252,445,980
IssuesEvent
2017-02-02 04:37:37
wevote/WebApp
https://api.github.com/repos/wevote/WebApp
closed
Keywords needed for SEO
Priority 3
We need to integrate keywords into webapp, to help boost our Search Engine Optimization. Might be worth having specific keywords for specific parts of the site, like including "ballot" on ballot page, etc quick brainstorm of some good keywords: "election", "ballot", "voting", "opinion", "proposition", "initiative", "measure", "candidate", "senate", "governor" etc etc
1.0
Keywords needed for SEO - We need to integrate keywords into webapp, to help boost our Search Engine Optimization. Might be worth having specific keywords for specific parts of the site, like including "ballot" on ballot page, etc quick brainstorm of some good keywords: "election", "ballot", "voting", "opinion", "proposition", "initiative", "measure", "candidate", "senate", "governor" etc etc
non_process
keywords needed for seo we need to integrate keywords into webapp to help boost our search engine optimization might be worth having specific keywords for specific parts of the site like including ballot on ballot page etc quick brainstorm of some good keywords election ballot voting opinion proposition initiative measure candidate senate governor etc etc
0
21,147
28,126,460,759
IssuesEvent
2023-03-31 18:10:51
dotnet/fabricbot-config
https://api.github.com/repos/dotnet/fabricbot-config
closed
Issues tagged with certain OS or arch labels should be excluded from area boards
enhancement process in-pr
Is it possible to have automation that makes it so that issues labeled with the following labels aren't added to boards, and once one of these labels is applied it's no longer tracked by the board (similar to when a different area is assigned)? https://github.com/dotnet/runtime/blob/f2364ade5b637e516de9cfd4a0fffa92ae634235/docs/area-owners.md#operating-systems https://github.com/dotnet/runtime/blob/f2364ade5b637e516de9cfd4a0fffa92ae634235/docs/area-owners.md#architectures Feedback from @eerhardt @maryamariyan @tarekgh
1.0
Issues tagged with certain OS or arch labels should be excluded from area boards - Is it possible to have automation that makes it so that issues labeled with the following labels aren't added to boards, and once one of these labels is applied it's no longer tracked by the board (similar to when a different area is assigned)? https://github.com/dotnet/runtime/blob/f2364ade5b637e516de9cfd4a0fffa92ae634235/docs/area-owners.md#operating-systems https://github.com/dotnet/runtime/blob/f2364ade5b637e516de9cfd4a0fffa92ae634235/docs/area-owners.md#architectures Feedback from @eerhardt @maryamariyan @tarekgh
process
issues tagged with certain os or arch labels should be excluded from area boards is it possible to have automation that makes it so that issues labeled with the following labels aren t added to boards and once one of these labels is applied it s no longer tracked by the board similar to when a different area is assigned feedback from eerhardt maryamariyan tarekgh
1
127,694
10,478,200,770
IssuesEvent
2019-09-23 23:07:36
longhorn/longhorn
https://api.github.com/repos/longhorn/longhorn
closed
engine test: Add websocket test case in the integration test
area/engine area/test
Now engine supports WebSocket on reporting the volume status.
1.0
engine test: Add websocket test case in the integration test - Now engine supports WebSocket on reporting the volume status.
non_process
engine test add websocket test case in the integration test now engine supports websocket on reporting the volume status
0
48,110
7,374,771,723
IssuesEvent
2018-03-13 21:27:07
probcomp/notebook
https://api.github.com/repos/probcomp/notebook
opened
"Search" notebook supports
documentation
The notebook described in #7 should include looking at a dependence heatmap as a diagnostic of whether the models are totally crazy or not.
1.0
"Search" notebook supports - The notebook described in #7 should include looking at a dependence heatmap as a diagnostic of whether the models are totally crazy or not.
non_process
search notebook supports the notebook described in should include looking at a dependence heatmap as a diagnostic of whether the models are totally crazy or not
0
1,943
4,769,524,305
IssuesEvent
2016-10-26 12:52:30
Lever-age/leverage
https://api.github.com/repos/Lever-age/leverage
closed
Design document for data analyses
data analysis process/administration ready for review
It'll be useful to have guiding principles for how we approach data, especially as we move forward with new team members.
1.0
Design document for data analyses - It'll be useful to have guiding principles for how we approach data, especially as we move forward with new team members.
process
design document for data analyses it ll be useful to have guiding principles for how we approach data especially as we move forward with new team members
1
118,405
11,968,208,882
IssuesEvent
2020-04-06 08:14:54
SOFTENG701G1/Flatmate-Management-System
https://api.github.com/repos/SOFTENG701G1/Flatmate-Management-System
closed
Low-fi prototype for redesigned chores interface
documentation
**Describe the change** * Add redesigned low-fidelity prototype of chores screen to wiki. This can be added to the [low-fi prototyping wiki page](https://github.com/SOFTENG701G1/Flatmate-Management-System/wiki/Low-fi-Prototype) or a new page. **Why is this happening** * Record the design of the chores screen for future work. * Establish common understanding of the chores feature and its design in the group. Note: there are multiple assignees on this issue as we had a team of people working on the design.
1.0
Low-fi prototype for redesigned chores interface - **Describe the change** * Add redesigned low-fidelity prototype of chores screen to wiki. This can be added to the [low-fi prototyping wiki page](https://github.com/SOFTENG701G1/Flatmate-Management-System/wiki/Low-fi-Prototype) or a new page. **Why is this happening** * Record the design of the chores screen for future work. * Establish common understanding of the chores feature and its design in the group. Note: there are multiple assignees on this issue as we had a team of people working on the design.
non_process
low fi prototype for redesigned chores interface describe the change add redesigned low fidelity prototype of chores screen to wiki this can be added to the or a new page why is this happening record the design of the chores screen for future work establish common understanding of the chores feature and its design in the group note there are multiple assignees on this issue as we had a team of people working on the design
0
3,125
6,156,498,228
IssuesEvent
2017-06-28 16:50:25
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Requests are reducing over time
log-processing
Question: I am using goAccess for a production site I have live for some months now. Furthermore, I have a script that parses the log files and uploads the html report. Initially everything was working great. On my last report update, I noticed that the requests are actually less for a longer amount of time. Given that: 1. logs are generated by the apache in an excellent production server from a big enterprise 2. I actually parsed them one by one and the sum is the same small number 3. nothing changed in my script Could you think of a reason why this happens? My script for parsing logs: ```bash ssh <username>@server zcat $APP_PATH'access.http.log.*.gz' >> file.log ssh <username>@server zcat $APP_PATH'access.https.log.*.gz' >> file.log ssh <username>@server 'cat '$APP_PATH'access.http.log' >> file.log ssh <username>@server 'cat '$APP_PATH'access.http.log.1' >> file.log ssh <username>@server 'cat '$APP_PATH'access.https.log' >> file.log ssh <username>@server 'cat '$APP_PATH'access.https.log.1' >> file.log goaccess file.log -o /var/www/ngcc_report.html ``` Logs: ```bash $ls -1 access.http.log access.http.log.1 access.http.log.2.gz access.http.log.3.gz access.http.log.4.gz access.http.log.5.gz access.https.log access.https.log.1 access.https.log.2.gz access.https.log.3.gz access.https.log.4.gz access.https.log.5.gz access.https.log.6.gz # + error logs ``` Stats: First 3 months: ~45000 total requests Last update (first 3 months + 2 more): ~23000 total requests
1.0
Requests are reducing over time - Question: I am using goAccess for a production site I have live for some months now. Furthermore, I have a script that parses the log files and uploads the html report. Initially everything was working great. On my last report update, I noticed that the requests are actually less for a longer amount of time. Given that: 1. logs are generated by the apache in an excellent production server from a big enterprise 2. I actually parsed them one by one and the sum is the same small number 3. nothing changed in my script Could you think of a reason why this happens? My script for parsing logs: ```bash ssh <username>@server zcat $APP_PATH'access.http.log.*.gz' >> file.log ssh <username>@server zcat $APP_PATH'access.https.log.*.gz' >> file.log ssh <username>@server 'cat '$APP_PATH'access.http.log' >> file.log ssh <username>@server 'cat '$APP_PATH'access.http.log.1' >> file.log ssh <username>@server 'cat '$APP_PATH'access.https.log' >> file.log ssh <username>@server 'cat '$APP_PATH'access.https.log.1' >> file.log goaccess file.log -o /var/www/ngcc_report.html ``` Logs: ```bash $ls -1 access.http.log access.http.log.1 access.http.log.2.gz access.http.log.3.gz access.http.log.4.gz access.http.log.5.gz access.https.log access.https.log.1 access.https.log.2.gz access.https.log.3.gz access.https.log.4.gz access.https.log.5.gz access.https.log.6.gz # + error logs ``` Stats: First 3 months: ~45000 total requests Last update (first 3 months + 2 more): ~23000 total requests
process
requests are reducing over time question i am using goaccess for a production site i have live for some months now furthermore i have a script that parses the log files and uploads the html report initially everything was working great on my last report update i noticed that the requests are actually less for a longer amount of time given that logs are generated by the apache in an excellent production server from a big enterprise i actually parsed them one by one and the sum is the same small number nothing changed in my script could you think of a reason why this happens my script for parsing logs bash ssh server zcat app path access http log gz file log ssh server zcat app path access https log gz file log ssh server cat app path access http log file log ssh server cat app path access http log file log ssh server cat app path access https log file log ssh server cat app path access https log file log goaccess file log o var www ngcc report html logs bash ls access http log access http log access http log gz access http log gz access http log gz access http log gz access https log access https log access https log gz access https log gz access https log gz access https log gz access https log gz error logs stats first months total requests last update first months more total requests
1
370
2,813,943,626
IssuesEvent
2015-05-18 17:16:47
joyent/node
https://api.github.com/repos/joyent/node
closed
Node.js child process on windows fails for node installed cli apps
child_process windows
On Windows if you use `child_process` to exec another npm installed cli app it fails Here's an example script: https://gist.github.com/3638530 Running this on OSX/Linux it works as expected, you get the `npm show` output printed. Running this on Windows (with 0.8.8) you get: ``` CreateProcessW: The system cannot find the file specified. child exited with code 127 ``` Seems like maybe the ENV or path is not being passed to the sub process so it can't find it.
1.0
Node.js child process on windows fails for node installed cli apps - On Windows if you use `child_process` to exec another npm installed cli app it fails Here's an example script: https://gist.github.com/3638530 Running this on OSX/Linux it works as expected, you get the `npm show` output printed. Running this on Windows (with 0.8.8) you get: ``` CreateProcessW: The system cannot find the file specified. child exited with code 127 ``` Seems like maybe the ENV or path is not being passed to the sub process so it can't find it.
process
node js child process on windows fails for node installed cli apps on windows if you use child process to exec another npm installed cli app it fails here s an example script running this on osx linux it works as expected you get the npm show output printed running this on windows with you get createprocessw the system cannot find the file specified child exited with code seems like maybe the env or path is not being passed to the sub process so it can t find it
1
22,157
30,699,950,848
IssuesEvent
2023-07-26 22:05:26
sandsquaretech/AdvancedLayoutCalculator.jl
https://api.github.com/repos/sandsquaretech/AdvancedLayoutCalculator.jl
closed
Filtering out Keycodes vs Ngrams
text processing
Separate: - filtering out keycodes should (eg char2keycodes) be for keys that we aren't concerned with typing, like accented stuff - filtering out Ngrams should remove any Ngrams that contain the things we want to filter out - this mainly has to do with, eg, shifted 1 -> `!`. We don't want to place `!` in the keymap on its own, which means it won't be viewed as typeable and cause not found errors. Maybe we should turn these into the shifted versions instead. - The main thing is some key are going to be preplaced, like numbers, so we don't want to include them a second time when randomly generating a keymap
1.0
Filtering out Keycodes vs Ngrams - Separate: - filtering out keycodes should (eg char2keycodes) be for keys that we aren't concerned with typing, like accented stuff - filtering out Ngrams should remove any Ngrams that contain the things we want to filter out - this mainly has to do with, eg, shifted 1 -> `!`. We don't want to place `!` in the keymap on its own, which means it won't be viewed as typeable and cause not found errors. Maybe we should turn these into the shifted versions instead. - The main thing is some key are going to be preplaced, like numbers, so we don't want to include them a second time when randomly generating a keymap
process
filtering out keycodes vs ngrams separate filtering out keycodes should eg be for keys that we aren t concerned with typing like accented stuff filtering out ngrams should remove any ngrams that contain the things we want to filter out this mainly has to do with eg shifted we don t want to place in the keymap on its own which means it won t be viewed as typeable and cause not found errors maybe we should turn these into the shifted versions instead the main thing is some key are going to be preplaced like numbers so we don t want to include them a second time when randomly generating a keymap
1
22,209
30,761,443,482
IssuesEvent
2023-07-29 19:02:24
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
preprocess2 is not compatible with chunk=to-content
bug preprocess/conref preprocess2
## Expected Behavior If you extend the normalize plugin with a plugin that uses map-first preprocessing, then the normalized output will still support chunking. ## Actual Behavior Topicrefs with the chunked topics appear as broken links and look like this: ```xml <topicref href="e9a94ae80eb9ac5e81b9c6b56facbc52bbced403.dita#reference_xlm_4lf_qn"> ``` ## Possible Solution Could chunking add that root element ID to the href and this messes up one of the preprocess steps that resolve the hashkeys? ## Steps to Reproduce <!-- Test case, Gist, set of files or steps required to reproduce the issue. --> 1. Create an extension to the normalize plugin. 2. In that new plugin use the preprocess2 target instead of preprocess. 3. Create a ditamap with a chunked topic. 4. Publish the new normalized output. * DITA-OT version: 3.5 * Operating system and version: macOS * How did you run DITA-OT? dita command, oxygen * Transformation type: dita normalize <!-- Before submitting, check the Preview tab above to verify the XML markup appears correctly and remember you can edit the description later to add information. -->
2.0
preprocess2 is not compatible with chunk=to-content - ## Expected Behavior If you extend the normalize plugin with a plugin that uses map-first preprocessing, then the normalized output will still support chunking. ## Actual Behavior Topicrefs with the chunked topics appear as broken links and look like this: ```xml <topicref href="e9a94ae80eb9ac5e81b9c6b56facbc52bbced403.dita#reference_xlm_4lf_qn"> ``` ## Possible Solution Could chunking add that root element ID to the href and this messes up one of the preprocess steps that resolve the hashkeys? ## Steps to Reproduce <!-- Test case, Gist, set of files or steps required to reproduce the issue. --> 1. Create an extension to the normalize plugin. 2. In that new plugin use the preprocess2 target instead of preprocess. 3. Create a ditamap with a chunked topic. 4. Publish the new normalized output. * DITA-OT version: 3.5 * Operating system and version: macOS * How did you run DITA-OT? dita command, oxygen * Transformation type: dita normalize <!-- Before submitting, check the Preview tab above to verify the XML markup appears correctly and remember you can edit the description later to add information. -->
process
is not compatible with chunk to content expected behavior if you extend the normalize plugin with a plugin that uses map first preprocessing then the normalized output will still support chunking actual behavior topicrefs with the chunked topics appear as broken links and look like this xml possible solution could chunking add that root element id to the href and this messes up one of the preprocess steps that resolve the hashkeys steps to reproduce create an extension to the normalize plugin in that new plugin use the target instead of preprocess create a ditamap with a chunked topic publish the new normalized output dita ot version operating system and version macos how did you run dita ot dita command oxygen transformation type dita normalize before submitting check the preview tab above to verify the xml markup appears correctly and remember you can edit the description later to add information
1
20,665
27,334,852,220
IssuesEvent
2023-02-26 03:50:56
cse442-at-ub/project_s23-team-infinity
https://api.github.com/repos/cse442-at-ub/project_s23-team-infinity
closed
Check version of PHP downloaded onto computer
Processing Task Sprint 1
**Task tests** *Test 1* 1. After you've successfully downloaded PHP onto the computer, check which version is running on your computer. 2. Do this by going to the command prompt and typing **php -v** 3. Do this to ensure the features you may try to implement with React.js are compatible with the version downloaded.
1.0
Check version of PHP downloaded onto computer - **Task tests** *Test 1* 1. After you've successfully downloaded PHP onto the computer, check which version is running on your computer. 2. Do this by going to the command prompt and typing **php -v** 3. Do this to ensure the features you may try to implement with React.js are compatible with the version downloaded.
process
check version of php downloaded onto computer task tests test after you ve successfully downloaded php onto the computer check which version is running on your computer do this by going to the command prompt and typing php v do this to ensure the features you may try to implement with react js are compatible with the version downloaded
1
9,530
12,500,801,947
IssuesEvent
2020-06-01 23:17:20
googleapis/gapic-showcase
https://api.github.com/repos/googleapis/gapic-showcase
closed
chore: remove all instances of suffix _id in resource name template variables
good first issue process
Resource name patterns containing path template variables with the suffix `_id` violate aip.dev/123. We should remove them all.
1.0
chore: remove all instances of suffix _id in resource name template variables - Resource name patterns containing path template variables with the suffix `_id` violate aip.dev/123. We should remove them all.
process
chore remove all instances of suffix id in resource name template variables resource name patterns containing path template variables with the suffix id violate aip dev we should remove them all
1
9,999
13,042,340,572
IssuesEvent
2020-07-28 22:15:56
googleapis/google-resumable-media-python
https://api.github.com/repos/googleapis/google-resumable-media-python
opened
Testing: harden systest teardowns against 429 / 409 responses.
testing type: process
From [this Kokoro run](https://source.cloud.google.com/results/invocations/9c4b8536-e957-4b3e-896e-98b9850e7754/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fgoogle-resumable-media-python%2Fpresubmit%2Fpresubmit/log): ```python _____ ERROR at teardown of test_resumable_upload_with_bad_checksum[crc32c] _____ Traceback (most recent call last): File "/tmpfs/src/github/google-resumable-media-python/tests/system/requests/test_upload.py", line 60, in cleanup assert response.status_code == http_client.NO_CONTENT AssertionError: assert 429 == 204 + where 429 = <Response [429]>.status_code + and 204 = http_client.NO_CONTENT _ ERROR at teardown of TestResumableUploadUnknownSize.test_interleave_writes[None] _ Traceback (most recent call last): File "/tmpfs/src/github/google-resumable-media-python/tests/system/requests/conftest.py", line 58, in bucket cleanup_bucket(authorized_transport) File "/tmpfs/src/github/google-resumable-media-python/tests/system/requests/conftest.py", line 43, in cleanup_bucket raise ValueError("{}: {}".format(del_response.status_code, del_response.reason)) ValueError: 409: Conflict ```
1.0
Testing: harden systest teardowns against 429 / 409 responses. - From [this Kokoro run](https://source.cloud.google.com/results/invocations/9c4b8536-e957-4b3e-896e-98b9850e7754/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fgoogle-resumable-media-python%2Fpresubmit%2Fpresubmit/log): ```python _____ ERROR at teardown of test_resumable_upload_with_bad_checksum[crc32c] _____ Traceback (most recent call last): File "/tmpfs/src/github/google-resumable-media-python/tests/system/requests/test_upload.py", line 60, in cleanup assert response.status_code == http_client.NO_CONTENT AssertionError: assert 429 == 204 + where 429 = <Response [429]>.status_code + and 204 = http_client.NO_CONTENT _ ERROR at teardown of TestResumableUploadUnknownSize.test_interleave_writes[None] _ Traceback (most recent call last): File "/tmpfs/src/github/google-resumable-media-python/tests/system/requests/conftest.py", line 58, in bucket cleanup_bucket(authorized_transport) File "/tmpfs/src/github/google-resumable-media-python/tests/system/requests/conftest.py", line 43, in cleanup_bucket raise ValueError("{}: {}".format(del_response.status_code, del_response.reason)) ValueError: 409: Conflict ```
process
testing harden systest teardowns against responses from python error at teardown of test resumable upload with bad checksum traceback most recent call last file tmpfs src github google resumable media python tests system requests test upload py line in cleanup assert response status code http client no content assertionerror assert where status code and http client no content error at teardown of testresumableuploadunknownsize test interleave writes traceback most recent call last file tmpfs src github google resumable media python tests system requests conftest py line in bucket cleanup bucket authorized transport file tmpfs src github google resumable media python tests system requests conftest py line in cleanup bucket raise valueerror format del response status code del response reason valueerror conflict
1
67,354
20,961,607,269
IssuesEvent
2022-03-27 21:48:28
abedmaatalla/sipdroid
https://api.github.com/repos/abedmaatalla/sipdroid
closed
Screen Goes Blank On Incoming Calls
Priority-Medium Type-Defect auto-migrated
``` This may be same as in part 2 of issue 1028 I stopped using SIPDROID v3 because of this problem about 6 months ago but thought it might be worth re-installing to see if the problem remains, it does. The problem is that as soon as an incoming VOIP call the screen goes blank and it is impossible to accept the call. I have noted this does occur occasionally when I get a GSM call. This problem started at some point in time after I first started using SIPDROID so I assume it has been caused by some update, new app. install or a change in settings. What steps will reproduce the problem? 1. Ring the PBXES extension from an outside line What is the expected output? What do you see instead? Expected Out Come - options to accept call WHat I see - A blank screen What version of the product are you using? On what device/operating system? V3 Galaxy Nexus, Android 4.2.2, Kernel 3,0.31-g9f818de Which SIP server are you using? What happens with PBXes? PBXES Which type of network are you using? WiFi Please provide any additional information below. ``` Original issue reported on code.google.com by `neuros...@gmail.com` on 3 Jul 2013 at 8:07
1.0
Screen Goes Blank On Incoming Calls - ``` This may be same as in part 2 of issue 1028 I stopped using SIPDROID v3 because of this problem about 6 months ago but thought it might be worth re-installing to see if the problem remains, it does. The problem is that as soon as an incoming VOIP call the screen goes blank and it is impossible to accept the call. I have noted this does occur occasionally when I get a GSM call. This problem started at some point in time after I first started using SIPDROID so I assume it has been caused by some update, new app. install or a change in settings. What steps will reproduce the problem? 1. Ring the PBXES extension from an outside line What is the expected output? What do you see instead? Expected Out Come - options to accept call WHat I see - A blank screen What version of the product are you using? On what device/operating system? V3 Galaxy Nexus, Android 4.2.2, Kernel 3,0.31-g9f818de Which SIP server are you using? What happens with PBXes? PBXES Which type of network are you using? WiFi Please provide any additional information below. ``` Original issue reported on code.google.com by `neuros...@gmail.com` on 3 Jul 2013 at 8:07
non_process
screen goes blank on incoming calls this may be same as in part of issue i stopped using sipdroid because of this problem about months ago but thought it might be worth re installing to see if the problem remains it does the problem is that as soon as an incoming voip call the screen goes blank and it is impossible to accept the call i have noted this does occur occasionally when i get a gsm call this problem started at some point in time after i first started using sipdroid so i assume it has been caused by some update new app install or a change in settings what steps will reproduce the problem ring the pbxes extension from an outside line what is the expected output what do you see instead expected out come options to accept call what i see a blank screen what version of the product are you using on what device operating system galaxy nexus android kernel which sip server are you using what happens with pbxes pbxes which type of network are you using wifi please provide any additional information below original issue reported on code google com by neuros gmail com on jul at
0
21,319
28,761,295,172
IssuesEvent
2023-05-01 01:09:27
googleapis/java-container
https://api.github.com/repos/googleapis/java-container
reopened
Dependency Dashboard
type: process api: container
This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more. ## Edited/Blocked These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox. - [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-shared-config-1.x -->[build(deps): update dependency com.google.cloud:google-cloud-shared-config to v1.5.5](../pull/839) - [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->[build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.4.3](../pull/861) - [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-deploy-plugin-3.x -->[build(deps): update dependency org.apache.maven.plugins:maven-deploy-plugin to v3.1.1](../pull/870) - [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-javadoc-plugin-3.x -->[build(deps): update dependency org.apache.maven.plugins:maven-javadoc-plugin to v3.5.0](../pull/872) - [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-container-2.x -->[chore(deps): update dependency com.google.cloud:google-cloud-container to v2.19.0](../pull/831) - [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-container-parent-2.x -->[chore(deps): update dependency com.google.cloud:google-cloud-container-parent to v2.19.0](../pull/832) - [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-26.x -->[chore(deps): update dependency com.google.cloud:libraries-bom to v26.13.0](../pull/855) - [ ] <!-- rebase-branch=renovate/com.google.api.grpc-grpc-google-cloud-container-v1-2.x -->[deps: update dependency com.google.api.grpc:grpc-google-cloud-container-v1 to v2.19.0](../pull/833) - [ ] <!-- rebase-branch=renovate/com.google.api.grpc-grpc-google-cloud-container-v1beta1-2.x -->[deps: update dependency 
com.google.api.grpc:grpc-google-cloud-container-v1beta1 to v2.19.0](../pull/834) - [ ] <!-- rebase-branch=renovate/com.google.api.grpc-proto-google-cloud-container-v1-2.x -->[deps: update dependency com.google.api.grpc:proto-google-cloud-container-v1 to v2.19.0](../pull/835) - [ ] <!-- rebase-branch=renovate/com.google.api.grpc-proto-google-cloud-container-v1beta1-2.x -->[deps: update dependency com.google.api.grpc:proto-google-cloud-container-v1beta1 to v2.19.0](../pull/836) - [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-shared-dependencies-3.x -->[deps: update dependency com.google.cloud:google-cloud-shared-dependencies to v3.8.0](../pull/840) ## Detected dependencies <details><summary>github-actions</summary> <blockquote> <details><summary>.github/workflows/approve-readme.yaml</summary> - `actions/github-script v6` </details> <details><summary>.github/workflows/auto-release.yaml</summary> - `actions/github-script v6` </details> <details><summary>.github/workflows/ci.yaml</summary> - `actions/checkout v3` - `actions/setup-java v3` - `actions/checkout v3` - `actions/setup-java v3` - `actions/checkout v3` - `actions/setup-java v3` - `actions/checkout v3` - `actions/setup-java v3` - `actions/checkout v3` - `actions/setup-java v3` </details> <details><summary>.github/workflows/samples.yaml</summary> - `actions/checkout v3` - `actions/setup-java v3` </details> </blockquote> </details> <details><summary>maven</summary> <blockquote> <details><summary>google-cloud-container-bom/pom.xml</summary> - `com.google.cloud:google-cloud-shared-config 1.5.3` - `com.google.cloud:google-cloud-container 2.6.1-SNAPSHOT` - `com.google.api.grpc:grpc-google-cloud-container-v1 2.6.1-SNAPSHOT` - `com.google.api.grpc:grpc-google-cloud-container-v1beta1 2.6.1-SNAPSHOT` - `com.google.api.grpc:proto-google-cloud-container-v1 2.6.1-SNAPSHOT` - `com.google.api.grpc:proto-google-cloud-container-v1beta1 2.6.1-SNAPSHOT` </details> 
<details><summary>google-cloud-container/pom.xml</summary> - `com.google.cloud:google-cloud-container-parent 2.6.1-SNAPSHOT` </details> <details><summary>grpc-google-cloud-container-v1/pom.xml</summary> - `com.google.cloud:google-cloud-container-parent 2.6.1-SNAPSHOT` </details> <details><summary>grpc-google-cloud-container-v1beta1/pom.xml</summary> - `com.google.cloud:google-cloud-container-parent 2.6.1-SNAPSHOT` </details> <details><summary>pom.xml</summary> - `com.google.cloud:google-cloud-shared-config 1.5.3` - `com.google.api.grpc:proto-google-cloud-container-v1 2.6.1-SNAPSHOT` - `com.google.api.grpc:proto-google-cloud-container-v1beta1 2.6.1-SNAPSHOT` - `com.google.api.grpc:grpc-google-cloud-container-v1beta1 2.6.1-SNAPSHOT` - `com.google.api.grpc:grpc-google-cloud-container-v1 2.6.1-SNAPSHOT` - `com.google.cloud:google-cloud-container 2.6.1-SNAPSHOT` - `com.google.cloud:google-cloud-shared-dependencies 3.0.4` - `junit:junit 4.13.2` - `org.apache.maven.plugins:maven-project-info-reports-plugin 3.4.1` - `org.apache.maven.plugins:maven-javadoc-plugin 3.4.1` </details> <details><summary>proto-google-cloud-container-v1/pom.xml</summary> - `com.google.cloud:google-cloud-container-parent 2.6.1-SNAPSHOT` </details> <details><summary>proto-google-cloud-container-v1beta1/pom.xml</summary> - `com.google.cloud:google-cloud-container-parent 2.6.1-SNAPSHOT` </details> <details><summary>samples/install-without-bom/pom.xml</summary> - `com.google.cloud.samples:shared-configuration 1.2.0` - `com.google.cloud:google-cloud-container 2.6.0` - `junit:junit 4.13.2` - `com.google.truth:truth 1.1.3` - `org.codehaus.mojo:build-helper-maven-plugin 3.3.0` </details> <details><summary>samples/pom.xml</summary> - `com.google.cloud.samples:shared-configuration 1.2.0` - `org.apache.maven.plugins:maven-deploy-plugin 3.0.0` - `org.sonatype.plugins:nexus-staging-maven-plugin 1.6.13` </details> <details><summary>samples/snapshot/pom.xml</summary> - 
`com.google.cloud.samples:shared-configuration 1.2.0` - `com.google.cloud:google-cloud-container 2.6.0` - `junit:junit 4.13.2` - `com.google.truth:truth 1.1.3` - `org.codehaus.mojo:build-helper-maven-plugin 3.3.0` </details> <details><summary>samples/snippets/pom.xml</summary> - `com.google.cloud.samples:shared-configuration 1.2.0` - `com.google.cloud:libraries-bom 26.1.4` - `junit:junit 4.13.2` - `com.google.truth:truth 1.1.3` </details> </blockquote> </details> --- - [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
1.0
Dependency Dashboard - This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more. ## Edited/Blocked These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox. - [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-shared-config-1.x -->[build(deps): update dependency com.google.cloud:google-cloud-shared-config to v1.5.5](../pull/839) - [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->[build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.4.3](../pull/861) - [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-deploy-plugin-3.x -->[build(deps): update dependency org.apache.maven.plugins:maven-deploy-plugin to v3.1.1](../pull/870) - [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-javadoc-plugin-3.x -->[build(deps): update dependency org.apache.maven.plugins:maven-javadoc-plugin to v3.5.0](../pull/872) - [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-container-2.x -->[chore(deps): update dependency com.google.cloud:google-cloud-container to v2.19.0](../pull/831) - [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-container-parent-2.x -->[chore(deps): update dependency com.google.cloud:google-cloud-container-parent to v2.19.0](../pull/832) - [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-26.x -->[chore(deps): update dependency com.google.cloud:libraries-bom to v26.13.0](../pull/855) - [ ] <!-- rebase-branch=renovate/com.google.api.grpc-grpc-google-cloud-container-v1-2.x -->[deps: update dependency com.google.api.grpc:grpc-google-cloud-container-v1 to v2.19.0](../pull/833) - [ ] <!-- rebase-branch=renovate/com.google.api.grpc-grpc-google-cloud-container-v1beta1-2.x -->[deps: update dependency 
com.google.api.grpc:grpc-google-cloud-container-v1beta1 to v2.19.0](../pull/834) - [ ] <!-- rebase-branch=renovate/com.google.api.grpc-proto-google-cloud-container-v1-2.x -->[deps: update dependency com.google.api.grpc:proto-google-cloud-container-v1 to v2.19.0](../pull/835) - [ ] <!-- rebase-branch=renovate/com.google.api.grpc-proto-google-cloud-container-v1beta1-2.x -->[deps: update dependency com.google.api.grpc:proto-google-cloud-container-v1beta1 to v2.19.0](../pull/836) - [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-shared-dependencies-3.x -->[deps: update dependency com.google.cloud:google-cloud-shared-dependencies to v3.8.0](../pull/840) ## Detected dependencies <details><summary>github-actions</summary> <blockquote> <details><summary>.github/workflows/approve-readme.yaml</summary> - `actions/github-script v6` </details> <details><summary>.github/workflows/auto-release.yaml</summary> - `actions/github-script v6` </details> <details><summary>.github/workflows/ci.yaml</summary> - `actions/checkout v3` - `actions/setup-java v3` - `actions/checkout v3` - `actions/setup-java v3` - `actions/checkout v3` - `actions/setup-java v3` - `actions/checkout v3` - `actions/setup-java v3` - `actions/checkout v3` - `actions/setup-java v3` </details> <details><summary>.github/workflows/samples.yaml</summary> - `actions/checkout v3` - `actions/setup-java v3` </details> </blockquote> </details> <details><summary>maven</summary> <blockquote> <details><summary>google-cloud-container-bom/pom.xml</summary> - `com.google.cloud:google-cloud-shared-config 1.5.3` - `com.google.cloud:google-cloud-container 2.6.1-SNAPSHOT` - `com.google.api.grpc:grpc-google-cloud-container-v1 2.6.1-SNAPSHOT` - `com.google.api.grpc:grpc-google-cloud-container-v1beta1 2.6.1-SNAPSHOT` - `com.google.api.grpc:proto-google-cloud-container-v1 2.6.1-SNAPSHOT` - `com.google.api.grpc:proto-google-cloud-container-v1beta1 2.6.1-SNAPSHOT` </details> 
<details><summary>google-cloud-container/pom.xml</summary> - `com.google.cloud:google-cloud-container-parent 2.6.1-SNAPSHOT` </details> <details><summary>grpc-google-cloud-container-v1/pom.xml</summary> - `com.google.cloud:google-cloud-container-parent 2.6.1-SNAPSHOT` </details> <details><summary>grpc-google-cloud-container-v1beta1/pom.xml</summary> - `com.google.cloud:google-cloud-container-parent 2.6.1-SNAPSHOT` </details> <details><summary>pom.xml</summary> - `com.google.cloud:google-cloud-shared-config 1.5.3` - `com.google.api.grpc:proto-google-cloud-container-v1 2.6.1-SNAPSHOT` - `com.google.api.grpc:proto-google-cloud-container-v1beta1 2.6.1-SNAPSHOT` - `com.google.api.grpc:grpc-google-cloud-container-v1beta1 2.6.1-SNAPSHOT` - `com.google.api.grpc:grpc-google-cloud-container-v1 2.6.1-SNAPSHOT` - `com.google.cloud:google-cloud-container 2.6.1-SNAPSHOT` - `com.google.cloud:google-cloud-shared-dependencies 3.0.4` - `junit:junit 4.13.2` - `org.apache.maven.plugins:maven-project-info-reports-plugin 3.4.1` - `org.apache.maven.plugins:maven-javadoc-plugin 3.4.1` </details> <details><summary>proto-google-cloud-container-v1/pom.xml</summary> - `com.google.cloud:google-cloud-container-parent 2.6.1-SNAPSHOT` </details> <details><summary>proto-google-cloud-container-v1beta1/pom.xml</summary> - `com.google.cloud:google-cloud-container-parent 2.6.1-SNAPSHOT` </details> <details><summary>samples/install-without-bom/pom.xml</summary> - `com.google.cloud.samples:shared-configuration 1.2.0` - `com.google.cloud:google-cloud-container 2.6.0` - `junit:junit 4.13.2` - `com.google.truth:truth 1.1.3` - `org.codehaus.mojo:build-helper-maven-plugin 3.3.0` </details> <details><summary>samples/pom.xml</summary> - `com.google.cloud.samples:shared-configuration 1.2.0` - `org.apache.maven.plugins:maven-deploy-plugin 3.0.0` - `org.sonatype.plugins:nexus-staging-maven-plugin 1.6.13` </details> <details><summary>samples/snapshot/pom.xml</summary> - 
`com.google.cloud.samples:shared-configuration 1.2.0` - `com.google.cloud:google-cloud-container 2.6.0` - `junit:junit 4.13.2` - `com.google.truth:truth 1.1.3` - `org.codehaus.mojo:build-helper-maven-plugin 3.3.0` </details> <details><summary>samples/snippets/pom.xml</summary> - `com.google.cloud.samples:shared-configuration 1.2.0` - `com.google.cloud:libraries-bom 26.1.4` - `junit:junit 4.13.2` - `com.google.truth:truth 1.1.3` </details> </blockquote> </details> --- - [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
process
dependency dashboard this issue lists renovate updates and detected dependencies read the docs to learn more edited blocked these updates have been manually edited so renovate will no longer make changes to discard all commits and start over click on a checkbox pull pull pull pull pull pull pull pull pull pull pull pull detected dependencies github actions github workflows approve readme yaml actions github script github workflows auto release yaml actions github script github workflows ci yaml actions checkout actions setup java actions checkout actions setup java actions checkout actions setup java actions checkout actions setup java actions checkout actions setup java github workflows samples yaml actions checkout actions setup java maven google cloud container bom pom xml com google cloud google cloud shared config com google cloud google cloud container snapshot com google api grpc grpc google cloud container snapshot com google api grpc grpc google cloud container snapshot com google api grpc proto google cloud container snapshot com google api grpc proto google cloud container snapshot google cloud container pom xml com google cloud google cloud container parent snapshot grpc google cloud container pom xml com google cloud google cloud container parent snapshot grpc google cloud container pom xml com google cloud google cloud container parent snapshot pom xml com google cloud google cloud shared config com google api grpc proto google cloud container snapshot com google api grpc proto google cloud container snapshot com google api grpc grpc google cloud container snapshot com google api grpc grpc google cloud container snapshot com google cloud google cloud container snapshot com google cloud google cloud shared dependencies junit junit org apache maven plugins maven project info reports plugin org apache maven plugins maven javadoc plugin proto google cloud container pom xml com google cloud google cloud container parent snapshot proto google cloud 
container pom xml com google cloud google cloud container parent snapshot samples install without bom pom xml com google cloud samples shared configuration com google cloud google cloud container junit junit com google truth truth org codehaus mojo build helper maven plugin samples pom xml com google cloud samples shared configuration org apache maven plugins maven deploy plugin org sonatype plugins nexus staging maven plugin samples snapshot pom xml com google cloud samples shared configuration com google cloud google cloud container junit junit com google truth truth org codehaus mojo build helper maven plugin samples snippets pom xml com google cloud samples shared configuration com google cloud libraries bom junit junit com google truth truth check this box to trigger a request for renovate to run again on this repository
1
10,283
4,734,433,711
IssuesEvent
2016-10-19 14:13:45
Krzmbrzl/SQDev
https://api.github.com/repos/Krzmbrzl/SQDev
closed
NullPointerException while creating new Project
completed in Dev-Build crash
Using Mars.2 on Linux there is a Bug which causes a NullPointerException in Plugin Version 0.6.1 I've no experience with developing eclipse Plugins so i have no glue if the information in the [error.txt](https://github.com/Krzmbrzl/SQDev/files/527618/error.txt) do help. I've several other plugins installed like egit, but the problems seems to start at raven.sqdev.util.SQDevPreferenceUtil.getProfilesDocumentDirectory(SQDevPreferenceUtil.java:113) Similar things happen if you try to change the preferences of SQDev.
1.0
NullPointerException while creating new Project - Using Mars.2 on Linux there is a Bug which causes a NullPointerException in Plugin Version 0.6.1 I've no experience with developing eclipse Plugins so i have no glue if the information in the [error.txt](https://github.com/Krzmbrzl/SQDev/files/527618/error.txt) do help. I've several other plugins installed like egit, but the problems seems to start at raven.sqdev.util.SQDevPreferenceUtil.getProfilesDocumentDirectory(SQDevPreferenceUtil.java:113) Similar things happen if you try to change the preferences of SQDev.
non_process
nullpointerexception while creating new project using mars on linux there is a bug which causes a nullpointerexception in plugin version i ve no experience with developing eclipse plugins so i have no glue if the information in the do help i ve several other plugins installed like egit but the problems seems to start at raven sqdev util sqdevpreferenceutil getprofilesdocumentdirectory sqdevpreferenceutil java similar things happen if you try to change the preferences of sqdev
0
7,472
10,567,545,064
IssuesEvent
2019-10-06 05:24:49
KerasKorea/KerasObjectDetector
https://api.github.com/repos/KerasKorea/KerasObjectDetector
opened
[Preprocessing] Dataset - PASCAL VOC2007
dataset preprocessing
- [ ] Explanation of the dataset - [ ] Download dataset - [ ] Generate Generator - [ ] Process label
1.0
[Preprocessing] Dataset - PASCAL VOC2007 - - [ ] Explanation of the dataset - [ ] Download dataset - [ ] Generate Generator - [ ] Process label
process
dataset pascal explanation of the dataset download dataset generate generator process label
1
4,214
7,177,065,235
IssuesEvent
2018-01-31 12:18:44
twsswt/bug_buddy_jira_plugin
https://api.github.com/repos/twsswt/bug_buddy_jira_plugin
closed
Identify a packaging and distribution strategy for the data-gatherer component
priority:med processs enhancement
I'm thinking docker
1.0
Identify a packaging and distribution strategy for the data-gatherer component - I'm thinking docker
process
identify a packaging and distribution strategy for the data gatherer component i m thinking docker
1
72,427
8,736,509,525
IssuesEvent
2018-12-11 19:44:04
GSA/dotgov-home
https://api.github.com/repos/GSA/dotgov-home
closed
.gov-specific favicon?
design
Right now it's the GSA agency logo. But I don't think that really represents the program very effectively, and it wouldn't distinguish it from gsa.gov in people's browser tabs. Other ideas? cc @h-m-f-t
1.0
.gov-specific favicon? - Right now it's the GSA agency logo. But I don't think that really represents the program very effectively, and it wouldn't distinguish it from gsa.gov in people's browser tabs. Other ideas? cc @h-m-f-t
non_process
gov specific favicon right now it s the gsa agency logo but i don t think that really represents the program very effectively and it wouldn t distinguish it from gsa gov in people s browser tabs other ideas cc h m f t
0
30,703
7,244,798,268
IssuesEvent
2018-02-14 16:08:09
Fake-Api/FakeApi
https://api.github.com/repos/Fake-Api/FakeApi
opened
Remove duplicated validation
code duplicate
Similar blocks of code found in 2 locations. Consider refactoring. https://codeclimate.com/github/wnascimento/FakeApi/server/app/requests/Request.js#issue_5a787a3224cbe00001000051
1.0
Remove duplicated validation - Similar blocks of code found in 2 locations. Consider refactoring. https://codeclimate.com/github/wnascimento/FakeApi/server/app/requests/Request.js#issue_5a787a3224cbe00001000051
non_process
remove duplicated validation similar blocks of code found in locations consider refactoring
0
585,990
17,552,290,436
IssuesEvent
2021-08-13 00:21:46
PlanktonTeam/IMOS_Toolbox
https://api.github.com/repos/PlanktonTeam/IMOS_Toolbox
closed
Changelogs
bug high priority
BGC_* changelogs have `START_DATE` CPR_* changelogs have `STARTDATE` `STARTDATE` would be consistent with other variables such as `STARTLATITUDE`, `STARTLONGITUDE` etc
1.0
Changelogs - BGC_* changelogs have `START_DATE` CPR_* changelogs have `STARTDATE` `STARTDATE` would be consistent with other variables such as `STARTLATITUDE`, `STARTLONGITUDE` etc
non_process
changelogs bgc changelogs have start date cpr changelogs have startdate startdate would be consistent with other variables such as startlatitude startlongitude etc
0
171,964
27,211,769,488
IssuesEvent
2023-02-20 17:08:10
carbon-design-system/carbon-design-kit
https://api.github.com/repos/carbon-design-system/carbon-design-kit
closed
[Figma] Image guidelines: Create components for "Mini components"
kit: figma role: design :pencil2:
#### Acceptance criteria [Things that need to get done to make this a finished piece of work] As a designer, I can pull in and use the "Mini components" component from the library without having to recreate them.
1.0
[Figma] Image guidelines: Create components for "Mini components" - #### Acceptance criteria [Things that need to get done to make this a finished piece of work] As a designer, I can pull in and use the "Mini components" component from the library without having to recreate them.
non_process
image guidelines create components for mini components acceptance criteria as a designer i can pull in and use the mini components component from the library without having to recreate them
0
8,376
11,524,972,450
IssuesEvent
2020-02-15 04:29:34
SE-Garden/tms-webserver
https://api.github.com/repos/SE-Garden/tms-webserver
opened
CIを組み込む
kind:アーキ process:CQ
## 概要 テストケースが現状存在しないのだが、CIを組み込む。 内容としては、以下を考えている。 - GitHub Actions によるビルド(ビルド冗長化) - TravisCI によるビルド(ビルド冗長化) - CircleCI によるビルド(ビルド冗長化) - CodeSmell系のサービス追加 ## ゴール - masterへのPush毎に自動的にビルドを確認すること - PullRequestはそのビルドが正常に通っていないとマージ不可能にすること ## 成果物 - build.gradle - 各種CI系YAML - SaaS設定 ## 関連Issue
1.0
CIを組み込む - ## 概要 テストケースが現状存在しないのだが、CIを組み込む。 内容としては、以下を考えている。 - GitHub Actions によるビルド(ビルド冗長化) - TravisCI によるビルド(ビルド冗長化) - CircleCI によるビルド(ビルド冗長化) - CodeSmell系のサービス追加 ## ゴール - masterへのPush毎に自動的にビルドを確認すること - PullRequestはそのビルドが正常に通っていないとマージ不可能にすること ## 成果物 - build.gradle - 各種CI系YAML - SaaS設定 ## 関連Issue
process
ciを組み込む 概要 テストケースが現状存在しないのだが、ciを組み込む。 内容としては、以下を考えている。 github actions によるビルド(ビルド冗長化) travisci によるビルド(ビルド冗長化) circleci によるビルド(ビルド冗長化) codesmell系のサービス追加 ゴール masterへのpush毎に自動的にビルドを確認すること pullrequestはそのビルドが正常に通っていないとマージ不可能にすること 成果物 build gradle 各種ci系yaml saas設定 関連issue
1
52,146
7,750,441,737
IssuesEvent
2018-05-30 14:22:53
typelevel/cats
https://api.github.com/repos/typelevel/cats
opened
Document why monad transformers don't have Applicative instances.
documentation
see https://github.com/typelevel/cats/pull/2181#issuecomment-370098157 Probably in the FAQ section, maybe also mention in the guideline.
1.0
Document why monad transformers don't have Applicative instances. - see https://github.com/typelevel/cats/pull/2181#issuecomment-370098157 Probably in the FAQ section, maybe also mention in the guideline.
non_process
document why monad transformers don t have applicative instances see probably in the faq section maybe also mention in the guideline
0
21,890
4,757,139,355
IssuesEvent
2016-10-24 15:47:13
coala/coala-bears
https://api.github.com/repos/coala/coala-bears
closed
Incorrect indent in RELEASE_NOTES.rst
area/documentation difficulty/newcomer importance/low size/XS
```bash $ rstcheck RELEASE_NOTES.rst RELEASE_NOTES.rst:179: (ERROR/3) Unexpected indentation. ``` The problem is that there is an extra space (` `) in the second line of ``` - ``ClangComplexityBear`` (Calculates cyclomatic complexity of each function for C, C++ and other Clang supported languages.) ``` The `for` should be aligned under the first character of the first line of the list item. This problem is part of the patch at #904 , which may be abandoned in an unmergable state.
1.0
Incorrect indent in RELEASE_NOTES.rst - ```bash $ rstcheck RELEASE_NOTES.rst RELEASE_NOTES.rst:179: (ERROR/3) Unexpected indentation. ``` The problem is that there is an extra space (` `) in the second line of ``` - ``ClangComplexityBear`` (Calculates cyclomatic complexity of each function for C, C++ and other Clang supported languages.) ``` The `for` should be aligned under the first character of the first line of the list item. This problem is part of the patch at #904 , which may be abandoned in an unmergable state.
non_process
incorrect indent in release notes rst bash rstcheck release notes rst release notes rst error unexpected indentation the problem is that there is an extra space in the second line of clangcomplexitybear calculates cyclomatic complexity of each function for c c and other clang supported languages the for should be aligned under the first character of the first line of the list item this problem is part of the patch at which may be abandoned in an unmergable state
0
136,222
5,276,890,026
IssuesEvent
2017-02-07 00:54:54
bantuist/capstone
https://api.github.com/repos/bantuist/capstone
opened
Add new cards to Anki
priority
As a **user**, I want new cards **to be added** to Anki when I create new prompt/response nodes in Dynalist **so that I don't have to create the cards in Anki separately.**
1.0
Add new cards to Anki - As a **user**, I want new cards **to be added** to Anki when I create new prompt/response nodes in Dynalist **so that I don't have to create the cards in Anki separately.**
non_process
add new cards to anki as a user i want new cards to be added to anki when i create new prompt response nodes in dynalist so that i don t have to create the cards in anki separately
0
234,690
18,013,640,029
IssuesEvent
2021-09-16 11:33:36
ehuckriede/Toronto_renting_regulations
https://api.github.com/repos/ehuckriede/Toronto_renting_regulations
opened
Update readme file
documentation
- [ ] motivation (Julie) - [ ] method and results (later) - [ ] repository overview - [x] running instructions (Eveline) - [ ] about
1.0
Update readme file - - [ ] motivation (Julie) - [ ] method and results (later) - [ ] repository overview - [x] running instructions (Eveline) - [ ] about
non_process
update readme file motivation julie method and results later repository overview running instructions eveline about
0
15,392
19,575,894,781
IssuesEvent
2022-01-04 15:24:32
opensafely-core/job-server
https://api.github.com/repos/opensafely-core/job-server
opened
Addition of PPIE links to application form
application-process
Suggestion by OpenSAFELY September oversight board _consider as part of the application process form adding, “Please provide your PPIE links as evidence”._
1.0
Addition of PPIE links to application form - Suggestion by OpenSAFELY September oversight board _consider as part of the application process form adding, “Please provide your PPIE links as evidence”._
process
addition of ppie links to application form suggestion by opensafely september oversight board consider as part of the application process form adding “please provide your ppie links as evidence”
1
23,359
11,941,446,889
IssuesEvent
2020-04-02 18:27:44
sharkdp/fd
https://api.github.com/repos/sharkdp/fd
closed
Return first file found and terminate
help wanted performance question
Does `fd` support the equivalent of `find PATH -name NAME -print -quit`, which finds the first match, prints the result, and terminates?
1.0
Return first file found and terminate - Does `fd` support the equivalent of `find PATH -name NAME -print -quit`, which finds the first match, prints the result, and terminates?
non_process
return first file found and terminate does fd support the equivalent of find path name name print quit which finds the first match prints the result and terminates
0
12,180
14,742,016,613
IssuesEvent
2021-01-07 11:33:13
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Memphis - Account Issue
anc-process anp-1 anp-emergency release ant-bug ant-support has attachment
In GitLab by @kdjstudios on Mar 5, 2019, 14:51 **Submitted by:** "Laura Duckworth" <laura.duckworth@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-03-05-99417/conversation **Server:** Internal **Client/Site:** Memphis **Account:** 5386 **Issue:** The Bioventus account is no longer billing for patches or dispatched calls. The last 2 bill cycles did not calculate and bill for the overage. Can you please check into he system to see if there is an issue. There is no usage history for the last2 bill cycles ![image](/uploads/c96fd4951d6fdef40b27af17793fdfaf/image.png)
1.0
Memphis - Account Issue - In GitLab by @kdjstudios on Mar 5, 2019, 14:51 **Submitted by:** "Laura Duckworth" <laura.duckworth@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-03-05-99417/conversation **Server:** Internal **Client/Site:** Memphis **Account:** 5386 **Issue:** The Bioventus account is no longer billing for patches or dispatched calls. The last 2 bill cycles did not calculate and bill for the overage. Can you please check into he system to see if there is an issue. There is no usage history for the last2 bill cycles ![image](/uploads/c96fd4951d6fdef40b27af17793fdfaf/image.png)
process
memphis account issue in gitlab by kdjstudios on mar submitted by laura duckworth helpdesk server internal client site memphis account issue the bioventus account is no longer billing for patches or dispatched calls the last bill cycles did not calculate and bill for the overage can you please check into he system to see if there is an issue there is no usage history for the bill cycles uploads image png
1
129,645
18,107,460,909
IssuesEvent
2021-09-22 20:54:04
Tim-Demo/JS-Demo
https://api.github.com/repos/Tim-Demo/JS-Demo
opened
CVE-2018-20834 (High) detected in tar-2.2.1.tgz
security vulnerability
## CVE-2018-20834 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-2.2.1.tgz</b></p></summary> <p>tar for node</p> <p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.1.tgz">https://registry.npmjs.org/tar/-/tar-2.2.1.tgz</a></p> <p>Path to dependency file: JS-Demo/package.json</p> <p>Path to vulnerable library: JS-Demo/node_modules/npm/node_modules/tar/package.json</p> <p> Dependency Hierarchy: - grunt-npm-install-0.3.1.tgz (Root Library) - npm-3.10.10.tgz - :x: **tar-2.2.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Tim-Demo/JS-Demo/commit/6867d3cdd385f17346bb7b3f8b5ce830dac87398">6867d3cdd385f17346bb7b3f8b5ce830dac87398</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A vulnerability was found in node-tar before version 4.4.2 (excluding version 2.2.2). An Arbitrary File Overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system, in conjunction with a later plain file with the same name as the hardlink. This plain file content replaces the existing file content. A patch has been applied to node-tar v2.2.2). 
<p>Publish Date: 2019-04-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20834>CVE-2018-20834</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20834">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20834</a></p> <p>Release Date: 2019-04-30</p> <p>Fix Resolution: tar - 2.2.2,4.4.2</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tar","packageVersion":"2.2.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-npm-install:0.3.1;npm:3.10.10;tar:2.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tar - 2.2.2,4.4.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-20834","vulnerabilityDetails":"A vulnerability was found in node-tar before version 4.4.2 (excluding version 2.2.2). An Arbitrary File Overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system, in conjunction with a later plain file with the same name as the hardlink. This plain file content replaces the existing file content. 
A patch has been applied to node-tar v2.2.2).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20834","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2018-20834 (High) detected in tar-2.2.1.tgz - ## CVE-2018-20834 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-2.2.1.tgz</b></p></summary> <p>tar for node</p> <p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.1.tgz">https://registry.npmjs.org/tar/-/tar-2.2.1.tgz</a></p> <p>Path to dependency file: JS-Demo/package.json</p> <p>Path to vulnerable library: JS-Demo/node_modules/npm/node_modules/tar/package.json</p> <p> Dependency Hierarchy: - grunt-npm-install-0.3.1.tgz (Root Library) - npm-3.10.10.tgz - :x: **tar-2.2.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Tim-Demo/JS-Demo/commit/6867d3cdd385f17346bb7b3f8b5ce830dac87398">6867d3cdd385f17346bb7b3f8b5ce830dac87398</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A vulnerability was found in node-tar before version 4.4.2 (excluding version 2.2.2). An Arbitrary File Overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system, in conjunction with a later plain file with the same name as the hardlink. This plain file content replaces the existing file content. A patch has been applied to node-tar v2.2.2). 
<p>Publish Date: 2019-04-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20834>CVE-2018-20834</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20834">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20834</a></p> <p>Release Date: 2019-04-30</p> <p>Fix Resolution: tar - 2.2.2,4.4.2</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tar","packageVersion":"2.2.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-npm-install:0.3.1;npm:3.10.10;tar:2.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tar - 2.2.2,4.4.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-20834","vulnerabilityDetails":"A vulnerability was found in node-tar before version 4.4.2 (excluding version 2.2.2). An Arbitrary File Overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system, in conjunction with a later plain file with the same name as the hardlink. This plain file content replaces the existing file content. 
A patch has been applied to node-tar v2.2.2).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20834","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file js demo package json path to vulnerable library js demo node modules npm node modules tar package json dependency hierarchy grunt npm install tgz root library npm tgz x tar tgz vulnerable library found in head commit a href found in base branch master vulnerability details a vulnerability was found in node tar before version excluding version an arbitrary file overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system in conjunction with a later plain file with the same name as the hardlink this plain file content replaces the existing file content a patch has been applied to node tar publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt npm install npm tar isminimumfixversionavailable true minimumfixversion tar basebranches vulnerabilityidentifier cve vulnerabilitydetails a vulnerability was found in node tar before version excluding version an arbitrary file overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system in conjunction with a later plain file with the same name as the hardlink this plain file content replaces the existing file content a patch has been applied to node tar vulnerabilityurl
0
23,860
12,137,095,544
IssuesEvent
2020-04-23 15:15:26
SpineEventEngine/core-java
https://api.github.com/repos/SpineEventEngine/core-java
opened
Make subsequent unpacking of `Any` faster
performance
There are cases when we repeatedly invoke `AnyPacker.unpack()` on an `Any` instance. This happens, for example, when we obtain a type of a `Signal` that we're processing. Even though, `Any` handles the caching for us, the `AnyPacker` still performs these operations: ```java TypeUrl typeUrl = TypeUrl.ofEnclosed(any); Class<? extends Message> messageClass = typeUrl.getMessageClass(); ``` We can improve it by one of the following scenarios: Option 1. Look into the `Any`'s field `cachedUnpackValue` (which is `private`, BTW) and it's non-null, return its value. Since we don't pass the class when unpacking, having a value would mean the previous call for unpacking was successful. Since it's going to use Reflection, it may be a bit slow too. Option 2: Have a limited size cache which would map an instance of `Any` to extracted value.
True
Make subsequent unpacking of `Any` faster - There are cases when we repeatedly invoke `AnyPacker.unpack()` on an `Any` instance. This happens, for example, when we obtain a type of a `Signal` that we're processing. Even though, `Any` handles the caching for us, the `AnyPacker` still performs these operations: ```java TypeUrl typeUrl = TypeUrl.ofEnclosed(any); Class<? extends Message> messageClass = typeUrl.getMessageClass(); ``` We can improve it by one of the following scenarios: Option 1. Look into the `Any`'s field `cachedUnpackValue` (which is `private`, BTW) and it's non-null, return its value. Since we don't pass the class when unpacking, having a value would mean the previous call for unpacking was successful. Since it's going to use Reflection, it may be a bit slow too. Option 2: Have a limited size cache which would map an instance of `Any` to extracted value.
non_process
make subsequent unpacking of any faster there are cases when we repeatedly invoke anypacker unpack on an any instance this happens for example when we obtain a type of a signal that we re processing even though any handles the caching for us the anypacker still performs these operations java typeurl typeurl typeurl ofenclosed any class messageclass typeurl getmessageclass we can improve it by one of the following scenarios option look into the any s field cachedunpackvalue which is private btw and it s non null return its value since we don t pass the class when unpacking having a value would mean the previous call for unpacking was successful since it s going to use reflection it may be a bit slow too option have a limited size cache which would map an instance of any to extracted value
0
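The AnyPacker record above proposes two speed-ups; its Option 2 (a limited-size cache mapping an instance to its unpacked value) can be sketched with a plain LRU cache. This is a hypothetical, self-contained illustration only: the class and method names are invented here, and the real Spine/protobuf types (`Any`, `Message`) are replaced by generic parameters.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of "Option 2": a bounded cache from a packed value
// to its unpacked form, evicting the least-recently-used entry when full.
public final class BoundedUnpackCache<K, V> {
    private final Map<K, V> cache;

    public BoundedUnpackCache(int maxSize) {
        // An access-ordered LinkedHashMap gives LRU eviction for free:
        // removeEldestEntry is consulted after every put.
        this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public V get(K key) {
        return cache.get(key);
    }

    public void put(K key, V value) {
        cache.put(key, value);
    }

    public int size() {
        return cache.size();
    }
}
```

In the real scenario the key would be the `Any` instance and the value the unpacked `Message`, so repeated `unpack()` calls on the same signal skip the `TypeUrl` lookup entirely.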
20,884
27,708,182,624
IssuesEvent
2023-03-14 12:39:47
toggl/track-windows-feedback
https://api.github.com/repos/toggl/track-windows-feedback
closed
Switching from `Calendar` to `List` view and back again loses place
bug processed
**Describe the bug** Switching from `Calendar` to `List` view and back again loses the user's place. **Steps to reproduce** 1. Go to `Calendar`. 2. Scroll down a bunch. 3. Go to `List` view. 4. Go back to `Calendar` view. **Expected behavior** The `Calendar` view should have saved the place the user was in, as opposed to scrolling to the top. **Environment (please complete the following information):** **I'm not able to copy and paste information in the `About` window, so I posted a screenshot below. Could the Toggl team make text in the `About` window copyable?** ![image](https://user-images.githubusercontent.com/43425812/189253114-12bda171-035e-4a69-b369-c9380fc1417f.png)
1.0
Switching from `Calendar` to `List` view and back again loses place - **Describe the bug** Switching from `Calendar` to `List` view and back again loses the user's place. **Steps to reproduce** 1. Go to `Calendar`. 2. Scroll down a bunch. 3. Go to `List` view. 4. Go back to `Calendar` view. **Expected behavior** The `Calendar` view should have saved the place the user was in, as opposed to scrolling to the top. **Environment (please complete the following information):** **I'm not able to copy and paste information in the `About` window, so I posted a screenshot below. Could the Toggl team make text in the `About` window copyable?** ![image](https://user-images.githubusercontent.com/43425812/189253114-12bda171-035e-4a69-b369-c9380fc1417f.png)
process
switching from calendar to list view and back again loses place describe the bug switching from calendar to list view and back again loses the user s place steps to reproduce go to calendar scroll down a bunch go to list view go back to calendar view expected behavior the calendar view should have saved the place the user was in as opposed to scrolling to the top environment please complete the following information i m not able to copy and paste information in the about window so i posted a screenshot below could the toggl team make text in the about window copyable
1
17,000
22,364,177,926
IssuesEvent
2022-06-16 00:57:38
hashgraph/hedera-json-rpc-relay
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
opened
Add additional eth_ getTransactionByHash acceptance tests
enhancement P2 process
### Problem The initial coverage was scarce, additonal coverage can be added. ### Solution - [ ] Add negative test for non-existing block hash with `hydrated transactions set to **false**`. You can use `0x52db1930f5ca0a47423a62637cf9632823fdd24d61dc97c3263417b38c31eee6` f.e. The response of the call must be asserted to be: ``` { "jsonrpc": "2.0", "id": 0, "result": null } ``` - [ ] Add negative test for non-existing block hash with `hydrated transactions set to **true**`. You can use `0x52db1930f5ca0a47423a62637cf9632823fdd24d61dc97c3263417b38c31eee6` f.e. The response of the call must be asserted to be: ``` { "jsonrpc": "2.0", "id": 0, "result": null } ``` - [ ] Add a test that queries **existing block** by hash but sets `hydrated transactions to **false**. Assert that the response contains `transactions` array but the values are strings of the **transaction hash** - [ ] Enhance the already existing `getBlockByHash` test by asserting that the returned object has `transactions` array **containing the full transaction** and not only the string of their hash ### Alternatives Where it makes sense some of these tests may be integration tests
1.0
Add additional eth_ getTransactionByHash acceptance tests - ### Problem The initial coverage was scarce, additonal coverage can be added. ### Solution - [ ] Add negative test for non-existing block hash with `hydrated transactions set to **false**`. You can use `0x52db1930f5ca0a47423a62637cf9632823fdd24d61dc97c3263417b38c31eee6` f.e. The response of the call must be asserted to be: ``` { "jsonrpc": "2.0", "id": 0, "result": null } ``` - [ ] Add negative test for non-existing block hash with `hydrated transactions set to **true**`. You can use `0x52db1930f5ca0a47423a62637cf9632823fdd24d61dc97c3263417b38c31eee6` f.e. The response of the call must be asserted to be: ``` { "jsonrpc": "2.0", "id": 0, "result": null } ``` - [ ] Add a test that queries **existing block** by hash but sets `hydrated transactions to **false**. Assert that the response contains `transactions` array but the values are strings of the **transaction hash** - [ ] Enhance the already existing `getBlockByHash` test by asserting that the returned object has `transactions` array **containing the full transaction** and not only the string of their hash ### Alternatives Where it makes sense some of these tests may be integration tests
process
add additional eth gettransactionbyhash acceptance tests problem the initial coverage was scarce additonal coverage can be added solution add negative test for non existing block hash with hydrated transactions set to false you can use f e the response of the call must be asserted to be jsonrpc id result null add negative test for non existing block hash with hydrated transactions set to true you can use f e the response of the call must be asserted to be jsonrpc id result null add a test that queries existing block by hash but sets hydrated transactions to false assert that the response contains transactions array but the values are strings of the transaction hash enhance the already existing getblockbyhash test by asserting that the returned object has transactions array containing the full transaction and not only the string of their hash alternatives where it makes sense some of these tests may be integration tests
1
12,099
3,252,631,546
IssuesEvent
2015-10-19 15:37:21
cyoung/stratux
https://api.github.com/repos/cyoung/stratux
closed
iOS9 testing
help wanted testing
1. What determines when the Wi-Fi icon is displayed (top left)? My home network shows it when connected, but stratux doesn't. 2. Any difference with newer hostapd versions? 3. Any difference with another adapter? 4. Any difference with Wi-Fi Assist OFF (Settings -> Cellular Data -> Wi-Fi Assist)?
1.0
iOS9 testing - 1. What determines when the Wi-Fi icon is displayed (top left)? My home network shows it when connected, but stratux doesn't. 2. Any difference with newer hostapd versions? 3. Any difference with another adapter? 4. Any difference with Wi-Fi Assist OFF (Settings -> Cellular Data -> Wi-Fi Assist)?
non_process
testing what determines when the wi fi icon is displayed top left my home network shows it when connected but stratux doesn t any difference with newer hostapd versions any difference with another adapter any difference with wi fi assist off settings cellular data wi fi assist
0
10,479
13,252,878,712
IssuesEvent
2020-08-20 06:28:39
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
Switch to use TiDB's Duration memory layout for Copr Duration
PCP-S1 difficulty/medium sig/coprocessor status/help-wanted
## Description Currently TiKV's Duration is implemented using bit flags. It can be changed to stick to TiDB's Duration memory layout (which uses `gotime.Duration` directly) so that Coprocessor can support executing over TiDB's Chunk format directly in future. Goals: 1. Removes the fsp in current Duration implementation. 2. Change to use the same memory format for the Duration. 3. Your new implementation should be compatible with the Chunk format. ## Difficulty - Medium ## Score - 2700 ## Mentor(s) - @breeswish - @iosmanthus ## Recommended Skills - Rust and Golang programming
1.0
Switch to use TiDB's Duration memory layout for Copr Duration - ## Description Currently TiKV's Duration is implemented using bit flags. It can be changed to stick to TiDB's Duration memory layout (which uses `gotime.Duration` directly) so that Coprocessor can support executing over TiDB's Chunk format directly in future. Goals: 1. Removes the fsp in current Duration implementation. 2. Change to use the same memory format for the Duration. 3. Your new implementation should be compatible with the Chunk format. ## Difficulty - Medium ## Score - 2700 ## Mentor(s) - @breeswish - @iosmanthus ## Recommended Skills - Rust and Golang programming
process
switch to use tidb s duration memory layout for copr duration description currently tikv s duration is implemented using bit flags it can be changed to stick to tidb s duration memory layout which uses gotime duration directly so that coprocessor can support executing over tidb s chunk format directly in future goals removes the fsp in current duration implementation change to use the same memory format for the duration your new implementation should be compatible with the chunk format difficulty medium score mentor s breeswish iosmanthus recommended skills rust and golang programming
1
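The TiKV record above asks for a Duration stored the way Go's `time.Duration` is (a single signed 64-bit count of nanoseconds) rather than as bit flags. A minimal sketch of that representation, in Java for illustration only (the class and method names are invented; the actual TiKV change would be in Rust and must match TiDB's Chunk layout):

```java
// Hypothetical sketch: a SQL-style duration held as signed nanoseconds in
// one 64-bit integer, mirroring Go's time.Duration memory layout.
public final class NanoDuration {
    public final long nanos;  // signed: negative durations are allowed

    public NanoDuration(long nanos) {
        this.nanos = nanos;
    }

    // Build from sign (+1 or -1) and hours/minutes/seconds components.
    public static NanoDuration fromHMS(int sign, long h, long m, long s) {
        long totalSeconds = (h * 60 + m) * 60 + s;
        return new NanoDuration(sign * totalSeconds * 1_000_000_000L);
    }

    // Component accessors recompute from the single nanosecond field,
    // so no separate hour/minute/second/fsp flags need to be stored.
    public long hours()   { return Math.abs(nanos) / 3_600_000_000_000L; }
    public long minutes() { return (Math.abs(nanos) / 60_000_000_000L) % 60; }
    public long seconds() { return (Math.abs(nanos) / 1_000_000_000L) % 60; }
}
```

The point of the layout is that a row's duration column becomes a plain `i64`, which is what lets a coprocessor read TiDB's Chunk format without converting between encodings.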
16,948
2,615,127,635
IssuesEvent
2015-03-01 05:56:44
chrsmith/google-api-java-client
https://api.github.com/repos/chrsmith/google-api-java-client
closed
Remove shared-sample-appengine
auto-migrated Milestone-Version1.11.0 Priority-Medium Type-Sample
``` Remove shared-sample-appengine in the samples repository and update the bigquery-appengine-sample. ``` Original issue reported on code.google.com by `rmis...@google.com` on 19 Jun 2012 at 6:30
1.0
Remove shared-sample-appengine - ``` Remove shared-sample-appengine in the samples repository and update the bigquery-appengine-sample. ``` Original issue reported on code.google.com by `rmis...@google.com` on 19 Jun 2012 at 6:30
non_process
remove shared sample appengine remove shared sample appengine in the samples repository and update the bigquery appengine sample original issue reported on code google com by rmis google com on jun at
0
20,579
27,241,038,140
IssuesEvent
2023-02-21 20:28:20
dtcenter/MET
https://api.github.com/repos/dtcenter/MET
closed
Bugfix: Fix inconsistent ASCII2NC AIRNOW location lookup logic
type: bug requestor: METplus Team MET: PreProcessing Tools (Point) priority: high
## Describe the Problem ## Development for issue #2402 flagged unexpected differences in the ascii2nc output for AIRNOW observations: ``` file1: /data/output/met_test_truth/ascii2nc/airnow/HourlyData_20220312.nc ERROR: NetCDF headers differ: ``` Investigating this difference further, I was able to narrow it down and locate a bug in the `AirnowLocations::lookupLatLonElev()` member function. Running the same command multiple times yielded inconsistent results. ### Expected Behavior ### All AIRNOW data for which the station location is defined should appear in the output. Running the same command multiple times should produce consistent results. ### Environment ### Difference flagged via GHA. Reproduced on my Mac Laptop. ### To Reproduce ### Describe the steps to reproduce the behavior. Replicate the behavior by running a very stripped down run of ascii2nc. 1. Use data for a single station: ``` cat data_for_124000050314.dat 03/12/22|00:00|124000050314|Québec - Charlesbour|-5|OZONE|PPB|39|Canada-Quebec1 03/12/22|00:00|124000050314|Québec - Charlesbour|-5|PM2.5|UG/M3|5.2|Canada-Quebec1 ``` 2. Define the airnow station file for only that station: ``` cat airnow_monitoring_site_locations_v2_124000050314.txt StationID|AQSID|FullAQSID|Parameter|MonitorType|SiteCode|SiteName|Status|AgencyID|AgencyName|EPARegion|Latitude|Longitude|Elevation|GMTOffset|CountryFIPS|CBSA_ID|CBSA_Name|StateAQSCode|StateAbbreviation|CountyAQSCode|CountyName 124000050314|000050314|124000050314|O3|Permanent|0314|Québec - Charlesbourg|Active|QC1|Canada-Quebec1|CA|46.861500|-71.257200||-5.00|CA|||||| 124000050314|000050314|124000050314|PM2.5|Permanent|0314|Québec - Charlesbourg|Active|QC1|Canada-Quebec1|CA|46.861500|-71.257200||-5.00|CA|||||| ``` 3. Set $MET_AIRNOW_STATIONS env var: ``` MET_AIRNOW_STATIONS=./airnow_monitoring_site_locations_v2_124000050314.txt ``` 4. Run ascii2nc for MET vesrion 11.0.0: ``` ./ascii2nc data_for_124000050314.dat data_for_124000050314.nc ``` 5. 
Observe inconsistent results when run several times: ``` DEBUG 2: Finished processing 2 observations for 1 headers. ``` versus ``` DEBUG 2: Finished processing 0 observations for 0 headers. ``` ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [x] Select **engineer(s)** or **no engineer** required - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [x] Select **Organization** level **Project** for support of the current coordinated release - [x] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label - [x] Select **Milestone** as the next bugfix version ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) No impacts. ## Bugfix Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **main_\<Version>**. Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>` - [ ] Fix the bug and test your changes. - [ ] Add/update log messages for easier debugging. 
- [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **main_\<Version>**. Pull request: `bugfix <Issue Number> main_<Version> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Development** issue Select: **Organization** level software support **Project** for the current coordinated release Select: **Milestone** as the next bugfix version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Complete the steps above to fix the bug on the **develop** branch. Branch name: `bugfix_<Issue Number>_develop_<Description>` Pull request: `bugfix <Issue Number> develop <Description>` Select: **Reviewer(s)** and **Development** issue Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Close this issue.
1.0
Bugfix: Fix inconsistent ASCII2NC AIRNOW location lookup logic - ## Describe the Problem ## Development for issue #2402 flagged unexpected differences in the ascii2nc output for AIRNOW observations: ``` file1: /data/output/met_test_truth/ascii2nc/airnow/HourlyData_20220312.nc ERROR: NetCDF headers differ: ``` Investigating this difference further, I was able to narrow it down and locate a bug in the `AirnowLocations::lookupLatLonElev()` member function. Running the same command multiple times yielded inconsistent results. ### Expected Behavior ### All AIRNOW data for which the station location is defined should appear in the output. Running the same command multiple times should produce consistent results. ### Environment ### Difference flagged via GHA. Reproduced on my Mac Laptop. ### To Reproduce ### Describe the steps to reproduce the behavior. Replicate the behavior by running a very stripped down run of ascii2nc. 1. Use data for a single station: ``` cat data_for_124000050314.dat 03/12/22|00:00|124000050314|Québec - Charlesbour|-5|OZONE|PPB|39|Canada-Quebec1 03/12/22|00:00|124000050314|Québec - Charlesbour|-5|PM2.5|UG/M3|5.2|Canada-Quebec1 ``` 2. Define the airnow station file for only that station: ``` cat airnow_monitoring_site_locations_v2_124000050314.txt StationID|AQSID|FullAQSID|Parameter|MonitorType|SiteCode|SiteName|Status|AgencyID|AgencyName|EPARegion|Latitude|Longitude|Elevation|GMTOffset|CountryFIPS|CBSA_ID|CBSA_Name|StateAQSCode|StateAbbreviation|CountyAQSCode|CountyName 124000050314|000050314|124000050314|O3|Permanent|0314|Québec - Charlesbourg|Active|QC1|Canada-Quebec1|CA|46.861500|-71.257200||-5.00|CA|||||| 124000050314|000050314|124000050314|PM2.5|Permanent|0314|Québec - Charlesbourg|Active|QC1|Canada-Quebec1|CA|46.861500|-71.257200||-5.00|CA|||||| ``` 3. Set $MET_AIRNOW_STATIONS env var: ``` MET_AIRNOW_STATIONS=./airnow_monitoring_site_locations_v2_124000050314.txt ``` 4. 
Run ascii2nc for MET vesrion 11.0.0: ``` ./ascii2nc data_for_124000050314.dat data_for_124000050314.nc ``` 5. Observe inconsistent results when run several times: ``` DEBUG 2: Finished processing 2 observations for 1 headers. ``` versus ``` DEBUG 2: Finished processing 0 observations for 0 headers. ``` ### Relevant Deadlines ### *List relevant project deadlines here or state NONE.* ### Funding Source ### *Define the source of funding and account keys here or state NONE.* ## Define the Metadata ## ### Assignee ### - [x] Select **engineer(s)** or **no engineer** required - [ ] Select **scientist(s)** or **no scientist** required ### Labels ### - [x] Select **component(s)** - [x] Select **priority** - [x] Select **requestor(s)** ### Projects and Milestone ### - [x] Select **Organization** level **Project** for support of the current coordinated release - [x] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label - [x] Select **Milestone** as the next bugfix version ## Define Related Issue(s) ## Consider the impact to the other METplus components. - [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose) No impacts. ## Bugfix Checklist ## See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details. - [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**. - [ ] Fork this repository or create a branch of **main_\<Version>**. 
Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>` - [ ] Fix the bug and test your changes. - [ ] Add/update log messages for easier debugging. - [ ] Add/update unit tests. - [ ] Add/update documentation. - [ ] Push local changes to GitHub. - [ ] Submit a pull request to merge into **main_\<Version>**. Pull request: `bugfix <Issue Number> main_<Version> <Description>` - [ ] Define the pull request metadata, as permissions allow. Select: **Reviewer(s)** and **Development** issue Select: **Organization** level software support **Project** for the current coordinated release Select: **Milestone** as the next bugfix version - [ ] Iterate until the reviewer(s) accept and merge your changes. - [ ] Delete your fork or branch. - [ ] Complete the steps above to fix the bug on the **develop** branch. Branch name: `bugfix_<Issue Number>_develop_<Description>` Pull request: `bugfix <Issue Number> develop <Description>` Select: **Reviewer(s)** and **Development** issue Select: **Repository** level development cycle **Project** for the next official release Select: **Milestone** as the next official version - [ ] Close this issue.
process
bugfix fix inconsistent airnow location lookup logic describe the problem development for issue flagged unexpected differences in the output for airnow observations data output met test truth airnow hourlydata nc error netcdf headers differ investigating this difference further i was able to narrow it down and locate a bug in the airnowlocations lookuplatlonelev member function running the same command multiple times yielded inconsistent results expected behavior all airnow data for which the station location is defined should appear in the output running the same command multiple times should produce consistent results environment difference flagged via gha reproduced on my mac laptop to reproduce describe the steps to reproduce the behavior replicate the behavior by running a very stripped down run of use data for a single station cat data for dat québec charlesbour ozone ppb canada québec charlesbour ug canada define the airnow station file for only that station cat airnow monitoring site locations txt stationid aqsid fullaqsid parameter monitortype sitecode sitename status agencyid agencyname eparegion latitude longitude elevation gmtoffset countryfips cbsa id cbsa name stateaqscode stateabbreviation countyaqscode countyname permanent québec charlesbourg active canada ca ca permanent québec charlesbourg active canada ca ca set met airnow stations env var met airnow stations airnow monitoring site locations txt run for met vesrion data for dat data for nc observe inconsistent results when run several times debug finished processing observations for headers versus debug finished processing observations for headers relevant deadlines list relevant project deadlines here or state none funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select 
organization level project for support of the current coordinated release select repository level project for development toward the next official release or add alert need project assignment label select milestone as the next bugfix version define related issue s consider the impact to the other metplus components no impacts bugfix checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of main branch name bugfix main fix the bug and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into main pull request bugfix main define the pull request metadata as permissions allow select reviewer s and development issue select organization level software support project for the current coordinated release select milestone as the next bugfix version iterate until the reviewer s accept and merge your changes delete your fork or branch complete the steps above to fix the bug on the develop branch branch name bugfix develop pull request bugfix develop select reviewer s and development issue select repository level development cycle project for the next official release select milestone as the next official version close this issue
1
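The record above reports that the same `ascii2nc` command prints "2 observations for 1 headers" on some runs and "0 observations for 0 headers" on others. A small harness can confirm such nondeterminism by running a command repeatedly and counting distinct outputs. This is an illustrative sketch, not part of MET; the `ascii2nc` invocation in the comment is the one from the reproduction steps, and the `echo` call is only a deterministic stand-in.

```python
import subprocess

def distinct_outputs(cmd, runs=10):
    """Run `cmd` several times and collect the distinct (stdout, stderr) pairs.

    A deterministic program should produce exactly one distinct output;
    more than one indicates run-to-run nondeterminism like the bug above.
    """
    seen = set()
    for _ in range(runs):
        result = subprocess.run(cmd, capture_output=True, text=True)
        seen.add((result.stdout, result.stderr))
    return seen

# For the bug report above one would call (with MET_AIRNOW_STATIONS set):
#   distinct_outputs(["./ascii2nc", "data_for_124000050314.dat",
#                     "data_for_124000050314.nc"])
# A deterministic stand-in yields a single distinct output:
print(len(distinct_outputs(["echo", "hello"])))  # 1
```

Any result greater than 1 for the real invocation would reproduce the inconsistency described in step 5.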
200,528
22,795,494,169
IssuesEvent
2022-07-10 16:55:17
TIBCOSoftware/spotfire-wrapper
https://api.github.com/repos/TIBCOSoftware/spotfire-wrapper
closed
CVE-2022-0155 (Medium) detected in follow-redirects-1.14.4.tgz - autoclosed
security vulnerability
## CVE-2022-0155 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>follow-redirects-1.14.4.tgz</b></p></summary> <p>HTTP and HTTPS modules that follow redirects.</p> <p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.4.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.4.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p> <p> Dependency Hierarchy: - karma-6.3.6.tgz (Root Library) - http-proxy-1.18.1.tgz - :x: **follow-redirects-1.14.4.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/TIBCOSoftware/spotfire-wrapper/commit/a0099e3904c0895ee751d5c8b127080f759d5345">a0099e3904c0895ee751d5c8b127080f759d5345</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor <p>Publish Date: 2022-01-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155>CVE-2022-0155</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/">https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/</a></p> <p>Release Date: 2022-01-10</p> <p>Fix Resolution (follow-redirects): 1.14.7</p> <p>Direct dependency fix Resolution (karma): 6.3.7</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
True
CVE-2022-0155 (Medium) detected in follow-redirects-1.14.4.tgz - autoclosed - ## CVE-2022-0155 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>follow-redirects-1.14.4.tgz</b></p></summary> <p>HTTP and HTTPS modules that follow redirects.</p> <p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.4.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.4.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p> <p> Dependency Hierarchy: - karma-6.3.6.tgz (Root Library) - http-proxy-1.18.1.tgz - :x: **follow-redirects-1.14.4.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/TIBCOSoftware/spotfire-wrapper/commit/a0099e3904c0895ee751d5c8b127080f759d5345">a0099e3904c0895ee751d5c8b127080f759d5345</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor <p>Publish Date: 2022-01-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155>CVE-2022-0155</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a 
href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/">https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/</a></p> <p>Release Date: 2022-01-10</p> <p>Fix Resolution (follow-redirects): 1.14.7</p> <p>Direct dependency fix Resolution (karma): 6.3.7</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
non_process
cve medium detected in follow redirects tgz autoclosed cve medium severity vulnerability vulnerable library follow redirects tgz http and https modules that follow redirects library home page a href path to dependency file package json path to vulnerable library node modules follow redirects package json dependency hierarchy karma tgz root library http proxy tgz x follow redirects tgz vulnerable library found in head commit a href found in base branch master vulnerability details follow redirects is vulnerable to exposure of private personal information to an unauthorized actor publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution follow redirects direct dependency fix resolution karma check this box to open an automated fix pr
0
419,228
12,219,319,011
IssuesEvent
2020-05-01 21:24:57
Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth
https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth
closed
Empress is being used instead of High Queen Dowager
:beetle: bug - localisation :scroll: :grey_exclamation: priority low
**DO NOT REMOVE PRE-EXISTING LINES** ------------------------------------------------------------------------------------------------------------ --> **Mod Version** master branch **Are you using any submods/mods? If so, which?** No **Please explain your issue in as much detail as possible:** Empress for (Mother of said Queen/King of a human empire title), it should be High Queen as the human title for Empires is High King/Queen. Note: Should probably check this out for males as well. **Upload screenshots of the problem localization:** ![ck2_477](https://user-images.githubusercontent.com/31706375/70931802-8c295d80-2038-11ea-879a-097f5d377aaf.png) ![ck2_476](https://user-images.githubusercontent.com/31706375/70931503-e8d84880-2037-11ea-95d6-a59f81291e1c.png)
1.0
Empress is being used instead of High Queen Dowager - **DO NOT REMOVE PRE-EXISTING LINES** ------------------------------------------------------------------------------------------------------------ --> **Mod Version** master branch **Are you using any submods/mods? If so, which?** No **Please explain your issue in as much detail as possible:** Empress for (Mother of said Queen/King of a human empire title), it should be High Queen as the human title for Empires is High King/Queen. Note: Should probably check this out for males as well. **Upload screenshots of the problem localization:** ![ck2_477](https://user-images.githubusercontent.com/31706375/70931802-8c295d80-2038-11ea-879a-097f5d377aaf.png) ![ck2_476](https://user-images.githubusercontent.com/31706375/70931503-e8d84880-2037-11ea-95d6-a59f81291e1c.png)
non_process
empress is being used instead of high queen dowager do not remove pre existing lines mod version master branch are you using any submods mods if so which no please explain your issue in as much detail as possible empress for mother of said queen king of a human empire title it should be high queen as the human title for empires is high king queen note should probably check this out for males as well upload screenshots of the problem localization
0
5,497
8,362,928,518
IssuesEvent
2018-10-03 18:15:35
cityofaustin/techstack
https://api.github.com/repos/cityofaustin/techstack
closed
Assessment of Residential Site Content and content structure
Content type: Process Page Department: Development Services Size: XL Team: Content
Will be moving forward with DSD instead of EMS. Needs to be broken up. @courtneyjacinic. List of questions that each discipline needs to answer.
1.0
Assessment of Residential Site Content and content structure - Will be moving forward with DSD instead of EMS Needs to be broken up. @courtneyjacinic. List of questions that each discipline needs to answer.
process
assessment of residential site content and content structure will be moving forward with dsd instead of ems needs to be broken up courtneyjacinic list of questions that each discipline needs to answer
1
702,818
24,137,260,855
IssuesEvent
2022-09-21 12:22:03
AxonIQ/axon-server-se
https://api.github.com/repos/AxonIQ/axon-server-se
closed
Rename the "Active Threads" column in event processor section to "Active Segments"
Priority 3: Could Type: Enhancement
Currently, in the event processor section, the column that is showing the number of claimed segments is labeled "Active Threads" This column should be renamed to "Active Segments" or "Claimed Segments".
1.0
Rename the "Active Threads" column in event processor section to "Active Segments" - Currently, in the event processor section, the column that is showing the number of claimed segments is labeled "Active Threads" This column should be renamed to "Active Segments" or "Claimed Segments".
non_process
rename the active threads column in event processor section to active segments currently in the event processor section the column that is showing the number of claimed segments is labeled active threads this column should be renamed to active segments or claimed segments
0
5,694
8,561,097,454
IssuesEvent
2018-11-09 04:55:34
dklinges9/Myanmar-forest-loss
https://api.github.com/repos/dklinges9/Myanmar-forest-loss
opened
Calculate forest loss, 2000-2017, for all 286 townships
data-processing
Modified version of Audrey's forest loss script calculated to the state level. Audrey, assigning you as you said you could take a shot at this.
1.0
Calculate forest loss, 2000-2017, for all 286 townships - Modified version of Audrey's forest loss script calculated to the state level. Audrey, assigning you as you said you could take a shot at this.
process
calculate forest loss for all townships modified version of audrey s forest loss script calculated to the state level audrey assigning you as you said you could take a shot at this
1
123,957
16,551,534,889
IssuesEvent
2021-05-28 09:08:31
google/web-stories-wp
https://api.github.com/repos/google/web-stories-wp
closed
Redesign: Update link panel
Group: Design Panel Group: Links Group: Workspace P1 Pod: Prometheus Type: Enhancement
<!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ --> ## Feature Description See [Figma](https://www.figma.com/file/bMhG3KyrJF8vIAODgmbeqT/Design-System?node-id=3404%3A198519) <!-- A clear and concise description of what the problem is and what you want to happen. --> - [ ] URL - [ ] Input - [ ] Media picker ## Additional Context ![Screenshot 2021-02-22 at 14 34 09](https://user-images.githubusercontent.com/3294597/108709216-b1560d80-7512-11eb-89b9-51236097e2f0.png) <!-- Add any other context or screenshots about the feature request. --> --- _Do not alter or remove anything below. The following sections will be managed by moderators only._ ## Acceptance Criteria <!-- One or more bullet points for acceptance criteria. --> ## Implementation Brief <!-- One or more bullet points for how to technically implement the feature. -->
1.0
Redesign: Update link panel - <!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ --> ## Feature Description See [Figma](https://www.figma.com/file/bMhG3KyrJF8vIAODgmbeqT/Design-System?node-id=3404%3A198519) <!-- A clear and concise description of what the problem is and what you want to happen. --> - [ ] URL - [ ] Input - [ ] Media picker ## Additional Context ![Screenshot 2021-02-22 at 14 34 09](https://user-images.githubusercontent.com/3294597/108709216-b1560d80-7512-11eb-89b9-51236097e2f0.png) <!-- Add any other context or screenshots about the feature request. --> --- _Do not alter or remove anything below. The following sections will be managed by moderators only._ ## Acceptance Criteria <!-- One or more bullet points for acceptance criteria. --> ## Implementation Brief <!-- One or more bullet points for how to technically implement the feature. -->
non_process
redesign update link panel feature description see url input media picker additional context do not alter or remove anything below the following sections will be managed by moderators only acceptance criteria implementation brief
0
10,895
13,673,745,154
IssuesEvent
2020-09-29 10:15:21
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Implement env var `FORCE_PANIC_INTROSPECTION_ENGINE`
kind/improvement process/candidate team/typescript topic: internal topic: tests
In this PR https://github.com/prisma/prisma/pull/3694 @williamluke4 implemented `FORCE_PANIC_MIGRATION_ENGINE` We could have one for introspection engine. It will be helpful for this issue https://github.com/prisma/prisma/issues/3779
1.0
Implement env var `FORCE_PANIC_INTROSPECTION_ENGINE` - In this PR https://github.com/prisma/prisma/pull/3694 @williamluke4 implemented `FORCE_PANIC_MIGRATION_ENGINE` We could have one for introspection engine. It will be helpful for this issue https://github.com/prisma/prisma/issues/3779
process
implement env var force panic introspection engine in this pr implemented force panic migration engine we could have one for introspection engine it will be helpful for this issue
1
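The pattern the Prisma record describes — an environment variable such as `FORCE_PANIC_INTROSPECTION_ENGINE` that forces an engine to crash so client error handling can be exercised — can be sketched generically. The variable names come from the issue; the mechanics below are illustrative Python, not Prisma's actual Rust implementation.

```python
import os

def maybe_force_panic(engine_name: str) -> None:
    """Mimic the FORCE_PANIC_* debug switch described in the issue: if the
    matching env var is set, fail immediately so that the caller's
    error-handling path can be tested."""
    var = f"FORCE_PANIC_{engine_name.upper()}_ENGINE"
    if os.environ.get(var):
        raise RuntimeError(f"panic forced by {var}")

os.environ["FORCE_PANIC_INTROSPECTION_ENGINE"] = "1"
try:
    maybe_force_panic("introspection")
except RuntimeError as e:
    print(e)  # panic forced by FORCE_PANIC_INTROSPECTION_ENGINE
```

The same switch function covers both the existing migration-engine variable and the proposed introspection-engine one, which is why a shared helper like this is a common way to implement the family of flags.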
20,031
26,517,077,207
IssuesEvent
2023-01-18 21:48:11
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Demands operators
doc-enhancement devops/prod Pri2 devops-cicd-process/tech
There is an `-equals` operator used as an example for a demand: > - agent.os -equals Darwin # check for specific string in capability Are there any other operators listed somewhere? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e7541ee6-d2bb-84c0-fead-1aa8ee7d2372 * Version Independent ID: 5cf7c51e-37e1-6c67-e6c6-80262c4eb662 * Content: [Demands - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/demands.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/demands.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @steved0x * Microsoft Alias: **sdanie**
1.0
Demands operators - There is -equals operator used as an example for demand: > - agent.os -equals Darwin # check for specific string in capability are there any other operators listed somewhere? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e7541ee6-d2bb-84c0-fead-1aa8ee7d2372 * Version Independent ID: 5cf7c51e-37e1-6c67-e6c6-80262c4eb662 * Content: [Demands - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/demands.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/demands.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @steved0x * Microsoft Alias: **sdanie**
process
demands operators there is equals operator used as an example for demand agent os equals darwin check for specific string in capability are there any other operators listed somewhere document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id fead version independent id content content source product devops technology devops cicd process github login microsoft alias sdanie
1
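The two demand forms the record above alludes to — a bare capability name (an existence check) and `name -equals value` (a string comparison) — can be modeled as simple predicates over an agent's capability map. This sketch is illustrative only; the authoritative operator list is in the linked Azure Pipelines documentation, and the parsing here is an assumption, not the real agent's logic.

```python
def demand_satisfied(demand: str, capabilities: dict) -> bool:
    """Evaluate a demand string like 'agent.os -equals Darwin' or a bare
    capability name (existence check) against an agent's capability map."""
    if " -equals " in demand:
        name, _, expected = demand.partition(" -equals ")
        return capabilities.get(name.strip()) == expected.strip()
    return demand.strip() in capabilities  # bare name => existence check

caps = {"agent.os": "Darwin", "node.js": ""}
print(demand_satisfied("agent.os -equals Darwin", caps))  # True
print(demand_satisfied("java", caps))                     # False
```

An agent matches a pool only if every demand in the list evaluates to true against its capabilities.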
9,626
12,565,989,727
IssuesEvent
2020-06-08 10:24:22
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
reopened
Dataloader._shutdown_workers hangs
module: dataloader module: multiprocessing triaged
## 🐛 Bug Dataloader._shutdown_workers hangs unexpectedly. The program has to be killed with Ctrl-C. ``` File "iterative_clustering.py", line 80, in calculate_features for batch in tqdm(dataloader, desc=f"Features (pass {i})"): File "/data1/mschroeder/miniconda3/envs/20-ssdc/lib/python3.7/site-packages/tqdm/std.py", line 1119, in __iter__ for obj in iterable: File "/data1/mschroeder/miniconda3/envs/20-ssdc/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 571, in __next__ self._shutdown_workers() File "/data1/mschroeder/miniconda3/envs/20-ssdc/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 659, in _shutdown_workers w.join() File "/data1/mschroeder/miniconda3/envs/20-ssdc/lib/python3.7/multiprocessing/process.py", line 140, in join res = self._popen.wait(timeout) File "/data1/mschroeder/miniconda3/envs/20-ssdc/lib/python3.7/multiprocessing/popen_fork.py", line 48, in wait return self.poll(os.WNOHANG if timeout == 0.0 else 0) File "/data1/mschroeder/miniconda3/envs/20-ssdc/lib/python3.7/multiprocessing/popen_fork.py", line 28, in poll pid, sts = os.waitpid(self.pid, flag) KeyboardInterrupt ``` ## To Reproduce ```python features = np.zeros((n_samples, n_features)) model.eval() with torch.no_grad(): for i in range(n_passes): offset = 0 for batch in tqdm(dataloader, desc=f"Features (pass {i})"): batch_images = torch.rot90(batch["image"].cuda(), i, (-2, -1)) assert ( batch_images.size() == batch["image"].size() ), f"{batch_images.size()} vs. {batch['image'].size()}" batch_features = model.forward(batch_images) batch_features = torch.flatten(batch_features, 1).cpu().numpy() batch_size = batch_features.shape[0] features[offset : offset + batch_size] += batch_features offset += batch_size ``` The error happens in the fourth (last) iteration of `for i in range(n_passes)`. `num_workers` is 8. ## Expected behavior I would expect the loop to terminate without problems. 
## Environment PyTorch version: 1.1.0 Is debug build: No CUDA used to build PyTorch: 9.0.176 OS: Linux Mint 18.1 Serena GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609 CMake version: version 3.5.1 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: 8.0.61 GPU models and configuration: GPU 0: GeForce GTX TITAN X Nvidia driver version: 384.98 cuDNN version: Could not collect Versions of relevant libraries: [pip] numpy==1.18.1 [pip] pytorch-lightning==0.7.2.dev0 [pip] torch==1.1.0 [pip] torchvision==0.3.0 [conda] blas 1.0 mkl [conda] cudatoolkit 9.0 h13b8566_0 [conda] mkl 2020.1 217 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.15 py37ha843d7b_0 [conda] mkl_random 1.1.1 py37h0573a6f_0 [conda] numpy 1.18.1 py37h4f9e942_0 [conda] numpy-base 1.18.1 py37hde5b4d6_1 [conda] pytorch 1.1.0 py3.7_cuda9.0.176_cudnn7.5.1_0 pytorch [conda] pytorch-lightning 0.7.2.dev0 pypi_0 pypi [conda] torchvision 0.3.0 py37_cu9.0.176_1 pytorch ## Additional info When examining the code, I see no reason why this wouldn't work. A work-around may be passing a timeout to `w.join()` at the expense of leaving zombie processes around. cc @SsnL
1.0
Dataloader._shutdown_workers hangs - ## 🐛 Bug Dataloader._shutdown_workers hangs unexpectedly. The program has to be killed with Ctrl-C. ``` File "iterative_clustering.py", line 80, in calculate_features for batch in tqdm(dataloader, desc=f"Features (pass {i})"): File "/data1/mschroeder/miniconda3/envs/20-ssdc/lib/python3.7/site-packages/tqdm/std.py", line 1119, in __iter__ for obj in iterable: File "/data1/mschroeder/miniconda3/envs/20-ssdc/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 571, in __next__ self._shutdown_workers() File "/data1/mschroeder/miniconda3/envs/20-ssdc/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 659, in _shutdown_workers w.join() File "/data1/mschroeder/miniconda3/envs/20-ssdc/lib/python3.7/multiprocessing/process.py", line 140, in join res = self._popen.wait(timeout) File "/data1/mschroeder/miniconda3/envs/20-ssdc/lib/python3.7/multiprocessing/popen_fork.py", line 48, in wait return self.poll(os.WNOHANG if timeout == 0.0 else 0) File "/data1/mschroeder/miniconda3/envs/20-ssdc/lib/python3.7/multiprocessing/popen_fork.py", line 28, in poll pid, sts = os.waitpid(self.pid, flag) KeyboardInterrupt ``` ## To Reproduce ```python features = np.zeros((n_samples, n_features)) model.eval() with torch.no_grad(): for i in range(n_passes): offset = 0 for batch in tqdm(dataloader, desc=f"Features (pass {i})"): batch_images = torch.rot90(batch["image"].cuda(), i, (-2, -1)) assert ( batch_images.size() == batch["image"].size() ), f"{batch_images.size()} vs. {batch['image'].size()}" batch_features = model.forward(batch_images) batch_features = torch.flatten(batch_features, 1).cpu().numpy() batch_size = batch_features.shape[0] features[offset : offset + batch_size] += batch_features offset += batch_size ``` The error happens in the forth (last) iteration of `for i in range(n_passes)`. `num_workers` is 8. ## Expected behavior I would expect the loop to terminate without problems. 
## Environment PyTorch version: 1.1.0 Is debug build: No CUDA used to build PyTorch: 9.0.176 OS: Linux Mint 18.1 Serena GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609 CMake version: version 3.5.1 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: 8.0.61 GPU models and configuration: GPU 0: GeForce GTX TITAN X Nvidia driver version: 384.98 cuDNN version: Could not collect Versions of relevant libraries: [pip] numpy==1.18.1 [pip] pytorch-lightning==0.7.2.dev0 [pip] torch==1.1.0 [pip] torchvision==0.3.0 [conda] blas 1.0 mkl [conda] cudatoolkit 9.0 h13b8566_0 [conda] mkl 2020.1 217 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.15 py37ha843d7b_0 [conda] mkl_random 1.1.1 py37h0573a6f_0 [conda] numpy 1.18.1 py37h4f9e942_0 [conda] numpy-base 1.18.1 py37hde5b4d6_1 [conda] pytorch 1.1.0 py3.7_cuda9.0.176_cudnn7.5.1_0 pytorch [conda] pytorch-lightning 0.7.2.dev0 pypi_0 pypi [conda] torchvision 0.3.0 py37_cu9.0.176_1 pytorch ## Additional info When examining the code, I see no reason why this wouldn't work. A work-around may be passing a timeout to `w.join()` at the expense of leaving zombie processes around. cc @SsnL
process
dataloader shutdown workers hangs 🐛 bug dataloader shutdown workers hangs unexpectedly the program has to be killed with ctrl c file iterative clustering py line in calculate features for batch in tqdm dataloader desc f features pass i file mschroeder envs ssdc lib site packages tqdm std py line in iter for obj in iterable file mschroeder envs ssdc lib site packages torch utils data dataloader py line in next self shutdown workers file mschroeder envs ssdc lib site packages torch utils data dataloader py line in shutdown workers w join file mschroeder envs ssdc lib multiprocessing process py line in join res self popen wait timeout file mschroeder envs ssdc lib multiprocessing popen fork py line in wait return self poll os wnohang if timeout else file mschroeder envs ssdc lib multiprocessing popen fork py line in poll pid sts os waitpid self pid flag keyboardinterrupt to reproduce python features np zeros n samples n features model eval with torch no grad for i in range n passes offset for batch in tqdm dataloader desc f features pass i batch images torch batch cuda i assert batch images size batch size f batch images size vs batch size batch features model forward batch images batch features torch flatten batch features cpu numpy batch size batch features shape features batch features offset batch size the error happens in the forth last iteration of for i in range n passes num workers is expected behavior i would expect the loop to terminate without problems environment pytorch version is debug build no cuda used to build pytorch os linux mint serena gcc version ubuntu cmake version version python version is cuda available yes cuda runtime version gpu models and configuration gpu geforce gtx titan x nvidia driver version cudnn version could not collect versions of relevant libraries numpy pytorch lightning torch torchvision blas mkl cudatoolkit mkl mkl service mkl fft mkl random numpy numpy base pytorch pytorch pytorch lightning pypi pypi torchvision pytorch 
additional info when examining the code i see no reason why this wouldn t work a work around may be passing a timeout to w join at the expense of leaving zombie processes around cc ssnl
1
6,212
9,124,564,351
IssuesEvent
2019-02-24 04:44:30
google/codeworld
https://api.github.com/repos/google/codeworld
closed
[codeworld-api] Relax bounds: containers
enhancement in process
GHC 8.6 ships with `containers-0.6`, which is excluded by the current bounds in `codeworld-api`. It seems to build OK with an upper bound of `containers < 0.7`, so I suggest relaxing the bounds and publishing a metadata revision.
1.0
[codeworld-api] Relax bounds: containers - GHC 8.6 ships with `containers-0.6`, which is excluded by the current bounds in `codeworld-api`. It seems to build OK with an upper bound of `containers < 0.7`, so I suggest relaxing the bounds and publishing a metadata revision.
process
relax bounds containers ghc ships with containers which is excluded by the current bounds in codeworld api it seems to build ok with an upper bound of containers so i suggest relaxing the bounds and publishing a metadata revision
1
20,773
27,504,387,233
IssuesEvent
2023-03-06 01:15:07
VolumeFi/paloma
https://api.github.com/repos/VolumeFi/paloma
opened
Upgrade app.go with upgrade handler using the semver of the current app
enhancement ReleaseProcess
# Background TODO: https://github.com/palomachain/paloma/blob/d9731ff5c2caf4f81ad6d8c15dd5f80544ae0e61/app/app.go#L350-L352 s/palomad/app.Version()/ simplified # Done when - [ ] a list of items, when checked, this ticket should be considered to be finished
1.0
Upgrade app.go with upgrade handler using the semver of the current app - # Background TODO: https://github.com/palomachain/paloma/blob/d9731ff5c2caf4f81ad6d8c15dd5f80544ae0e61/app/app.go#L350-L352 s/palomad/app.Version()/ simplified # Done when - [ ] a list of items, when checked, this ticket should be considered to be finished
process
upgrade app go with upgrade handler using the semver of the current app background todo s palomad app version simplified done when a list of items when checked this ticket should be considered to be finished
1
3,573
6,613,232,830
IssuesEvent
2017-09-20 08:29:26
inasafe/inasafe-realtime
https://api.github.com/repos/inasafe/inasafe-realtime
closed
Migrate issues, code, and architecture to new realtime repo
feature request orchestration realtime processor
Note that, in the future, we need to move all Realtime issues to the new repo along with the new deployment that point to that repo. See original ticket at https://github.com/inasafe/inasafe/issues/3696 for further discussion.
1.0
Migrate issues, code, and architecture to new realtime repo - Note that, in the future, we need to move all Realtime issues to the new repo along with the new deployment that point to that repo. See original ticket at https://github.com/inasafe/inasafe/issues/3696 for further discussion.
process
migrate issues code and architecture to new realtime repo note that in the future we need to move all realtime issues to the new repo along with the new deployment that point to that repo see original ticket at for further discussion
1
22,077
30,596,975,427
IssuesEvent
2023-07-21 23:58:56
NationalSecurityAgency/ghidra
https://api.github.com/repos/NationalSecurityAgency/ghidra
closed
dsPIC30F6014 Emulation Error 2
Feature: Processor/PIC Feature: Emulation Status: Internal
**Describe the bug** Attempting to emulate a dsPIC30F6014 file causes: ``` java.lang.NullPointerException: Cannot invoke "ghidra.program.model.address.Address.getAddressSpace()" because "a" is null ``` **To Reproduce** Steps to reproduce the behavior: 1. Compile program with the [XC16 compiler](https://www.microchip.com/en-us/tools-resources/develop/mplab-xc-compilers/downloads-documentation#XC16): ```C #include <xc.h> int main(void); unsigned int Add(unsigned int a, unsigned int b); unsigned int x, y, z; int main(void) { x = 2; y = 5; z = Add(x,y); return 0; } unsigned int Add(unsigned int a, unsigned int b) { return(a+b); } ``` 2. Compile for dsPIC30F6014: ```bash xc16-gcc -mcpu=30f6014 -o hello_world2.elf hello_world2.c ``` 3. Load program into Ghidra and analyze. 4. Navigate to `_main` 5. Right click on instruction and click `Emulate Program in new Trace` **Expected behavior** The ability to emulate the instructions. **Screenshots** If applicable, add screenshots to help explain your problem. **Attachments** Here is the full error output [error.log](https://github.com/NationalSecurityAgency/ghidra/files/12044076/error.log) **Environment (please complete the following information):** Build Date: 2023-Jul-11 1640 EDT - Ghidra Version: 10.3.2 - Java Home: /usr/lib/jvm/java-17-openjdk-amd64 - JVM Version: Private Build 17.0.7 - OS: Linux 5.15.0-73-generic amd64 **Additional context** I opened up an issue a few weeks ago https://github.com/NationalSecurityAgency/ghidra/issues/5410 and it fixed the issue with the old hello_world.elf file. I tried to emulate a slightly more complicated program that I posted above and am failing. I have tried them both in 10.3.2 and one works while the other does not. I have tried starting the emulation at a variety of instructions, but 0x1CA is where it failed in the image below. ![ghidra-error2](https://github.com/NationalSecurityAgency/ghidra/assets/11239651/c9ef7d06-c4d5-4d72-a870-c5d9e94135bf)
1.0
dsPIC30F6014 Emulation Error 2 - **Describe the bug** Attempting to emulate a dsPIC30F6014 file causes: ``` java.lang.NullPointerException: Cannot invoke "ghidra.program.model.address.Address.getAddressSpace()" because "a" is null ``` **To Reproduce** Steps to reproduce the behavior: 1. Compile program with the [XC16 compiler](https://www.microchip.com/en-us/tools-resources/develop/mplab-xc-compilers/downloads-documentation#XC16): ```C #include <xc.h> int main(void); unsigned int Add(unsigned int a, unsigned int b); unsigned int x, y, z; int main(void) { x = 2; y = 5; z = Add(x,y); return 0; } unsigned int Add(unsigned int a, unsigned int b) { return(a+b); } ``` 2. Compile for dsPIC30F6014: ```bash xc16-gcc -mcpu=30f6014 -o hello_world2.elf hello_world2.c ``` 3. Load program into Ghidra and analyze. 4. Navigate to `_main` 5. Right click on instruction and click `Emulate Program in new Trace` **Expected behavior** The ability to emulate the instructions. **Screenshots** If applicable, add screenshots to help explain your problem. **Attachments** Here is the full error output [error.log](https://github.com/NationalSecurityAgency/ghidra/files/12044076/error.log) **Environment (please complete the following information):** Build Date: 2023-Jul-11 1640 EDT - Ghidra Version: 10.3.2 - Java Home: /usr/lib/jvm/java-17-openjdk-amd64 - JVM Version: Private Build 17.0.7 - OS: Linux 5.15.0-73-generic amd64 **Additional context** I opened up an issue a few weeks ago https://github.com/NationalSecurityAgency/ghidra/issues/5410 and it fixed the issue with the old hello_world.elf file. I tried to emulate a slightly more complicated program that I posted above and am failing. I have tried them both in 10.3.2 and one works while the other does not. I have tried starting the emulation at a variety of instructions, but 0x1CA is where it failed in the image below. ![ghidra-error2](https://github.com/NationalSecurityAgency/ghidra/assets/11239651/c9ef7d06-c4d5-4d72-a870-c5d9e94135bf)
process
emulation error describe the bug attempting to emulate a file causes java lang nullpointerexception cannot invoke ghidra program model address address getaddressspace because a is null to reproduce steps to reproduce the behavior compile program with the c include int main void unsigned int add unsigned int a unsigned int b unsigned int x y z int main void x y z add x y return unsigned int add unsigned int a unsigned int b return a b compile for bash gcc mcpu o hello elf hello c load program into ghidra and analyze navigate to main right click on instruction and click emulate program in new trace expected behavior the ability to emulate the instructions screenshots if applicable add screenshots to help explain your problem attachments here is the full error output environment please complete the following information build date jul edt ghidra version java home usr lib jvm java openjdk jvm version private build os linux generic additional context i opened up an issue a few weeks ago and it fixed the issue with the old hello world elf file i tried to emulate a slightly more complicated program that i posted above and am failing i have tried them both in and one works while the other does not i have tried starting the emulation at a variety of instructions but is where it failed in the image below
1