| Column | Type | Stats |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 → 832k |
| id | float64 | 2.49B → 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 → 19 |
| repo | stringlengths | 7 → 112 |
| repo_url | stringlengths | 36 → 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 → 744 |
| labels | stringlengths | 4 → 574 |
| body | stringlengths | 9 → 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 → 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 → 188k |
| binary_label | int64 | 0 → 1 |
666,147
22,344,608,304
IssuesEvent
2022-06-15 06:32:14
litmuschaos/litmus
https://api.github.com/repos/litmuschaos/litmus
closed
Need to upgrade docker images to use go1.15
priority/low
Changes: - Litmus portal components - Litmus core components
1.0
Need to upgrade docker images to use go1.15 - Changes: - Litmus portal components - Litmus core components
non_process
need to upgrade docker images to use changes litmus portal components litmus core components
0
20,079
26,575,519,472
IssuesEvent
2023-01-21 19:13:57
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
transform processor bug
bug priority:p2 processor/transform pkg/ottl
### Component(s) _No response_ ### What happened? Create some spans: ``` tracer := otel.GetTracerProvider().Tracer("test") c := context.Background() _, s := tracer.Start(c, "/misc/ping") s.SetAttributes([]attribute.KeyValue{attribute.String("http.target", "/misc/ping")}...) s.End() _, s1 := tracer.Start(c, "PING") s1.SetAttributes([]attribute.KeyValue{attribute.String("db.operation", "PING")}...) s1.End() ``` With the collector config below, the transform processor IsMatch code will return a nil val: ``` val, err := target.Get(ctx, tCtx) blabla default: return nil, errors.New("unsupported type") ``` so tail_sampling will not be executed. Add this code to IsMatch to fix the problem: ``` if val == nil { return false, err } ``` ### Collector version v0.68.0 ### Environment information ## Environment OS: (e.g., "Ubuntu 20.04") Compiler(if manually compiled): (e.g., "go 14.2") ### OpenTelemetry Collector configuration ```yaml exporters: logging: loglevel: debug otlp: endpoint: "172.17.0.5:4317" tls: insecure: true receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 http: endpoint: 0.0.0.0:4318 processors: transform: trace_statements: - context: span statements: - set(attributes["sampling.priority"], 1) where IsMatch(attributes["http.target"], "/misc/ping|/actuator/health") == true - set(attributes["sampling.priority"], 1) where IsMatch(attributes["db.operation"], "PING") == true - set(attributes["sampling.priority"], 1) where IsMatch(attributes["http.url"], "xxl-job-admin") == true tail_sampling: decision_wait: 10s #Wait time since the first span of a trace before making a sampling decision num_traces: 50000 #Number of traces kept in memory policies: [ { name: agent-policy, type: numeric_attribute, numeric_attribute: { key: "sampling.priority", min_value: 1, max_value: 1 } }, { name: latency-policy-3, type: latency, latency: { threshold_ms: 1500 } }, { name: error-policy-4, type: status_code, status_code: {status_codes: [ERROR]} }, { name: probabilistic-policy, type: 
probabilistic, probabilistic: { sampling_percentage: 1 } } ] service: telemetry: logs: level: "debug" pipelines: traces: processors: [transform,tail_sampling] receivers: - otlp exporters: - logging ``` ### Log output _No response_ ### Additional context _No response_
1.0
transform processor bug - ### Component(s) _No response_ ### What happened? Create some spans: ``` tracer := otel.GetTracerProvider().Tracer("test") c := context.Background() _, s := tracer.Start(c, "/misc/ping") s.SetAttributes([]attribute.KeyValue{attribute.String("http.target", "/misc/ping")}...) s.End() _, s1 := tracer.Start(c, "PING") s1.SetAttributes([]attribute.KeyValue{attribute.String("db.operation", "PING")}...) s1.End() ``` With the collector config below, the transform processor IsMatch code will return a nil val: ``` val, err := target.Get(ctx, tCtx) blabla default: return nil, errors.New("unsupported type") ``` so tail_sampling will not be executed. Add this code to IsMatch to fix the problem: ``` if val == nil { return false, err } ``` ### Collector version v0.68.0 ### Environment information ## Environment OS: (e.g., "Ubuntu 20.04") Compiler(if manually compiled): (e.g., "go 14.2") ### OpenTelemetry Collector configuration ```yaml exporters: logging: loglevel: debug otlp: endpoint: "172.17.0.5:4317" tls: insecure: true receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 http: endpoint: 0.0.0.0:4318 processors: transform: trace_statements: - context: span statements: - set(attributes["sampling.priority"], 1) where IsMatch(attributes["http.target"], "/misc/ping|/actuator/health") == true - set(attributes["sampling.priority"], 1) where IsMatch(attributes["db.operation"], "PING") == true - set(attributes["sampling.priority"], 1) where IsMatch(attributes["http.url"], "xxl-job-admin") == true tail_sampling: decision_wait: 10s #Wait time since the first span of a trace before making a sampling decision num_traces: 50000 #Number of traces kept in memory policies: [ { name: agent-policy, type: numeric_attribute, numeric_attribute: { key: "sampling.priority", min_value: 1, max_value: 1 } }, { name: latency-policy-3, type: latency, latency: { threshold_ms: 1500 } }, { name: error-policy-4, type: status_code, status_code: {status_codes: [ERROR]} }, { name: 
probabilistic-policy, type: probabilistic, probabilistic: { sampling_percentage: 1 } } ] service: telemetry: logs: level: "debug" pipelines: traces: processors: [transform,tail_sampling] receivers: - otlp exporters: - logging ``` ### Log output _No response_ ### Additional context _No response_
process
transform processor bug component s no response what happened make some span tracer otel gettracerprovider tracer test c context background s tracer start c misc ping s setattributes attribute keyvalue attribute string http target misc ping s end tracer start c ping setattributes attribute keyvalue attribute string db operation ping end as blow collector config transform processor ismatch code will return nil val val err target get ctx tctx blabla default return nil errors new unsupported type cause tail sampling will not be execute add code to ismatch to fix the promblem if val nil return false err collector version environment information environment os e g ubuntu compiler if manually compiled e g go opentelemetry collector configuration yaml exporters logging loglevel debug otlp endpoint tls insecure true receivers otlp protocols grpc endpoint http endpoint processors transform trace statements context span statements set attributes where ismatch attributes misc ping actuator health true set attributes where ismatch attributes ping true set attributes where ismatch attributes xxl job admin true tail sampling decision wait wait time since the first span of a trace before making a sampling decision num traces number of traces kept in memory policies name agent policy type numeric attribute numeric attribute key sampling priority min value max value name latency policy type latency latency threshold ms name error policy type status code status code status codes name probabilistic policy type probabilistic probabilistic sampling percentage service telemetry logs level debug pipelines traces processors receivers otlp exporters logging log output no response additional context no response
1
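The nil-check fix described in the issue above can be sketched in standalone form. This is a loose, hypothetical model of OTTL's `IsMatch` (the real function lives in `pkg/ottl` and operates on getter/context types), kept just to show the behavioral change: a missing attribute yields `false` instead of aborting statement evaluation.

```go
package main

import (
	"errors"
	"fmt"
	"regexp"
)

// isMatch loosely mimics OTTL's IsMatch with the issue's proposed fix:
// when the target attribute is absent (val == nil), return false rather
// than an error, so later statements and processors keep running.
func isMatch(val interface{}, pattern string) (bool, error) {
	if val == nil {
		// Missing attribute: "no match", not a failure.
		return false, nil
	}
	s, ok := val.(string)
	if !ok {
		return false, errors.New("unsupported type")
	}
	return regexp.MatchString(pattern, s)
}

func main() {
	m, _ := isMatch("/misc/ping", "/misc/ping|/actuator/health")
	fmt.Println(m) // true: attribute present and matching

	m, _ = isMatch(nil, "PING")
	fmt.Println(m) // false: absent attribute no longer aborts evaluation
}
```

Without the nil guard, a span lacking `http.target` would make the whole `where` clause error out, which is why `sampling.priority` was never set and `tail_sampling` saw no matching spans.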
327,515
9,976,712,037
IssuesEvent
2019-07-09 15:34:57
ArctosDB/arctos
https://api.github.com/repos/ArctosDB/arctos
closed
verbatim event-stuff
Function-Locality/Event/Georeferencing Priority-High
ref: https://github.com/ArctosDB/arctos/issues/2038, https://github.com/ArctosDB/arctos/issues/1971 "verbatim coordinates" are the coordinates as received by Arctos before I transform them into DD.ddd for mapping and etc. We are overusing the existing structure; everything that does not involve triggers should be removed. There is some apparent reluctance to use Verbatim Locality for things like coordinates as provided by the collector. What are we trying to do here, what do we need to do it, etc.?
1.0
verbatim event-stuff - ref: https://github.com/ArctosDB/arctos/issues/2038, https://github.com/ArctosDB/arctos/issues/1971 "verbatim coordinates" are the coordinates as received by Arctos before I transform them into DD.ddd for mapping and etc. We are overusing the existing structure; everything that does not involve triggers should be removed. There is some apparent reluctance to use Verbatim Locality for things like coordinates as provided by the collector. What are we trying to do here, what do we need to do it, etc.?
non_process
verbatim event stuff ref verbatim coordinates are the coordinates as received by arctos before i transform them into dd ddd for mapping and etc we are overusing the existing structure everything that does not involve triggers should be removed there is some apparent reluctance to use verbatim locality for things like coordinates as provided by the collector what are we trying to do here what do we need to do it etc
0
21,219
28,301,408,834
IssuesEvent
2023-04-10 06:31:33
TeamAidemy/ds-paper-summaries
https://api.github.com/repos/TeamAidemy/ds-paper-summaries
opened
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
Natural language processing General Economics GPT family
Tyna Eloundou, Sam Manning, Pamela Mishkin, Daniel Rock. 2023. “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.” arXiv [econ.GN]. arXiv. https://arxiv.org/abs/2303.10130. - 米囜の各職業を察象に、どれほどLLMによっお仕事の時間が削枛されうるかをラベリングし、経枈瀟䌚にLLMが及がしうる圱響を定量的に評䟡 - LLMの蚀語生成胜力をそのたた䜿甚した堎合の業務削枛量だけでなく、補助的なアプリケヌションが開発され、それを掻甚した堎合の業務削枛量も予枬しお分析 - 職業単䜍だけでなく、タスクや必芁ずされるスキルに分解した結果も提瀺 - 各職業の、LLMによる業務削枛量のラベリングは、人間ずGPT-4の双方により行われた - ラベリング結果に倧きな差はなかったものの、人間の方が、LLMによる業務削枛量を倚めに芋積もる傟向があった - 集蚈された結果より、米囜の劎働者の少なくずも10%のタスクの業務時間がLLMにより半分に短瞮されうるず掚定できる - 曎に、19%の劎働者に぀いおは、半分以䞊のタスクの業務時間がLLMにより半分に短瞮されうるず掚定 - 傟向ずしお、高賃金の職業になるほど、LLMにより䜜業時間が短瞮されるタスクが倚い - 広範な調査の末、GPT(Generative Pretrained Transformers) は、いわゆる汎甚技術ずしおのGPT(general-purpose technologies)ずしおの特城を瀺し、瀟䌚に倧きな圱響を䞎えうるず結論づけた ## Abstract We investigate the potential implications of large language models (LLMs), such as Generative Pre-trained Transformers (GPTs), on the U.S. labor market, focusing on the increased capabilities arising from LLM-powered software compared to LLMs on their own. Using a new rubric, we assess occupations based on their alignment with LLM capabilities, integrating both human expertise and GPT-4 classifications. Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted. We do not make predictions about the development or adoption timeline of such LLMs. The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software. Significantly, these impacts are not restricted to industries with higher recent productivity growth. Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. 
When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks. This finding implies that LLM-powered software will have a substantial effect on scaling the economic impacts of the underlying models. We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications. (DeepL翻蚳) 我々は、GPTGenerative Pre-trained Transformersのような倧芏暡蚀語モデルLLMが米囜の劎働垂堎に䞎える朜圚的な圱響を調査し、LLM単䜓ず比范しおLLM搭茉の゜フトりェアから生じる胜力の向䞊に焊点を圓おる。新しい評䟡基準を甚いお、人間の専門知識ずGPT-4分類の䞡方を統合し、LLMの胜力ずの敎合性に基づいお職業を評䟡したした。その結果、米囜の劎働者の玄80が、LLMの導入により少なくずも10の業務に圱響を受ける可胜性があり、玄19の劎働者は少なくずも50の業務に圱響を受ける可胜性があるこずが明らかになりたした。たた、LLMの開発・導入時期に぀いおは予枬を行っおいたせん。予枬される圱響はすべおの賃金氎準に及び、高所埗の職皮ほどLLMの機胜やLLMを搭茉した゜フトりェアに觊れる機䌚が倚くなる可胜性がありたす。重芁なのは、こうした圱響は、最近の生産性䞊昇率が高い産業に限定されないずいうこずである。我々の分析によるず、LLMを利甚するこずで、米囜における劎働者の党䜜業の玄15が、同じ品質レベルで倧幅に速く完了する可胜性があるこずが瀺唆された。LLMの䞊に構築された゜フトりェアやツヌルを組み蟌むず、この割合は党䜜業の4756%に増加する。この発芋は、LLMを搭茉した゜フトりェアが、基瀎ずなるモデルの経枈的圱響を拡倧する䞊で倧きな効果を発揮するこずを瀺唆しおいる。GPTのようなLLMは、汎甚的な技術であり、経枈的、瀟䌚的、政策的に倧きな意味を持぀可胜性があるず結論付けおいる。 ## 解決した課題/先行研究ずの比范 - 本研究では、既存のAI技術や自動化技術の劎働垂堎ぞの圱響に぀いおの調査ず比范しお、蚀語モデルのより広く、朜圚的な圱響を調査 - 曎に、過去の汎甚技術e.g. 印刷、蒞気機関の䜿われ方を加味し、蚀語モデルそのものの胜力だけでなく、蚀語モデルず実䜜業の補完を行うむノベヌションの登堎の予枬も考慮 ## 技術・手法のポむント ### 䞻芁な分析手順 1. 米囜における職業別掻動・業務に関する、O*NET 27.2 デヌタベヌスに含たれる、19,265のタスクず2,087のDWA(Detaild Work Activities)に察し、以䞋の3通りのラベル付けを、人間ずGPT-4の䞡方で実斜 - E0 : No Exposure : LLMを䜿甚するこずで、同等レベルのクオリティを維持しながらタスクを遂行する時間が枛少しないLLMぞの曝露がない - E1 : Direct Exposure : LLMを䜿甚するこずで、同等レベルのクオリティを維持しながらタスクを遂行する時間が50%以䞋になるLLMぞの曝露が倧きい - E2 : LLM+ Exposed : LLMのみではタスクの遂行時間は枛少しないものの、LLMの䞊に構築されたアプリケヌションず合わせお䜿甚するこずで、タスクを遂行する時間が50%以䞋になる。画像生成システムぞのアクセスも加味。LLMぞの曝露が将来的に考えられる 2. 以䞊のタスクずDWAのラベリング結果を、職業ごずに集蚈 - この集蚈時に、以䞋3぀のパタヌンで重み付け和をずるこずで、掚蚈のバリ゚ヌションをもたせおいる - α = E1 - β = E1 + 0.5×E2 - ζ = E1 + E2 - αは、LLM単䜓の胜力によっお業務遂行時間が枛少しおいる床合い。ζは、長期的に゜フトりェアが開発されおいっお遂行時間が枛少する床合い。βはその䞭間、のように捉えるずよい 3. 
職業ごずにラベリングされた3぀のスコア(α, β, ζ)を甚い、職業に必芁なスキルや、職業ごずの幎収ず関連付けお曎に分析 - 3぀のスコア(α, β, ζ)を目的倉数に、それぞれの職業に玐づく、O*NET基本スキルカテゎリの各スキルの重芁床を説明倉数ずし、回垰分析を行い、各スキルの寄䞎率を算出 - 幎収ず3぀のスコア(α, β, ζ)の盞関を調査、など ### 人間ずGPT-4によるラベル付け結果の比范 - 人間ずGPT-4によるラベル付け結果の盞関 <p align="center"> <img src="https://user-images.githubusercontent.com/33014616/229649328-e3351be8-5644-48e4-9299-bcad36f62001.png" width=600px> </p> - 職業ごずに集蚈されたβスコアの散垃図 ![image](https://user-images.githubusercontent.com/33014616/229649789-64594e30-38dc-4406-a783-e8a498a388d6.png) - 基本的にはよく盞関しおいるものの、グラフの右䞊の領域においおのみ、人間のほうが高いスコアを付ける傟向が芋られる - 理由は䞍明だが、人間のほうが、LLMぞの暎露が倧きい職業においお、GPT-4よりも曝露スコアを高く芋積もっおいる ### 本分析のデヌタセットずラベル付け方法の限界 - ぀けられたラベルの䞻芳性 - ラベル付けを行った人間は、OpenAIお抱えのアノテヌタヌ - 職業の倚様性に欠けるため、各職業の個別のタスクに詳しくなく、LLMの䜿甚に慣れおいる傟向がある - 曝露の床合いの芋積もりが䞍正確な可胜性がある - LLMの発展は極めお速いため、ラベルの内容や分析結果は、倧きく倉化する可胜性がある ## 分析結果 ### 3぀のスコア(α, β, ζ)の芁玄統蚈量 <p align="center"> <img src="https://user-images.githubusercontent.com/33014616/229652540-560dc4f6-758f-412d-8c8e-685f17573c74.png" width=200px> </p> ![image](https://user-images.githubusercontent.com/33014616/229652561-17fbc6a7-1f9e-49dd-8011-d38e457357e3.png) - 読み取れるこず - 80%の劎働者は、10のタスクがLLMの圱響を受ける - 19%の劎働者は、半分以䞊のタスクがLLMの圱響を受ける ### 職業の収入ずの関係 ![image](https://user-images.githubusercontent.com/33014616/229652501-d4ea9e61-565b-412f-9895-4002e98d759b.png) - 読み取れるこず - 高幎収の職業ほど、LLMぞの曝露が倧きい傟向がある ### 暎露スコアが倧きい職業のリスト <p align="center"> <img src="https://user-images.githubusercontent.com/33014616/229653706-d74fca6f-efba-44a1-a32d-2ad4183e2d8a.png" width=400px> </p> - 読み取れるこず - Human α: LLM単䜓による曝露が倧きい職業 - 翻蚳家 - サヌベむ研究者 - 䜜詞家、クリ゚むティブラむタヌ - 動物科孊者 - 広報スペシャリスト - Human ζ: LLMず、それに付随するアプリケヌションによる曝露が倧きい職業 - 数孊者 - 皎理士 - 金融クオンツアナリスト - 䜜家 - Web・デゞタルむンタヌフェヌスデザむナヌ - Highest variance: 分散が倧きい職業䜜業時間が50%以䞋に枛少するタスクず、枛少しないタスクの混圚床合いが倧きい職業 - 怜玢マヌケティングストラテゞスト - グラフィックデザむナヌ - 投資ファンドマネヌゞャヌ - 金融マネヌゞャヌ - 自動車損害保険鑑定士 ### 職業の基本スキルの暎露スコアぞの寄䞎率 <p align="center"> <img 
src="https://user-images.githubusercontent.com/33014616/229654827-9c51c6fa-4432-4533-adf3-2c79c192c2dd.png" width=600px> </p> - 読み取れるこず - αぞの寄䞎が倧きいスキルLLM単䜓による曝露が倧きいスキル - Programming 0.637 - Writing 0.368 - Reading Comprehension 0.153 - ζぞの寄䞎が倧きいスキルLLMず付随するアプリケヌションによる曝露が倧きいスキル - Mathematics 0.787 - Programming 0.609 - Writing 0.566 - Active Listening 0.449 - Speaking 0.294 - αもζも䜎いスキルLLMや呚蟺アプリケヌションの発展による曝露が小さいスキル - Science -0.346 - Learning Strategies -0.346 - Monitoring -0.232 - Critical Thinking -0.129 ※ 以䞊の結果の受け止め方に぀いおは、埌述の 感想>泚意すべき結果>基本スキルの解釈 も参照されたい。 ## 残された課題・議論・感想 ### この研究の限界 - 米囜以倖に適甚できるか - 産業組織、技術むンフラ、芏制の枠組み、蚀語の倚様性、文化的背景に倧きな違いがあるため、米囜以倖ぞの適甚は限定的である - この研究の方法を公衚するこずで、他の集団に぀いおも調査できるようにしお、察凊したい - 珟状のLLMの胜力のすべおを完璧に反映した研究ではない - 䟋えばα評䟡では、GPT-4の画像を扱う胜力を考慮しおない - もっず倚くの職業の暎露スコアが高くなる可胜性がある - LLMの胜力の進歩が展開するに぀れお、その圱響を怜蚎する必芁がある ### LLMは汎甚技術ず呌べるのか - そうみなせる特城は瀺しおいる - LLMが汎甚技術 General Purpose Technology ずみなされるには、以䞋3぀の条件を満たす必芁がある - 時間の経過ずずもに改善するこず - 経枈党䜓に浞透するこず - 補完的なむノベヌションが生み出されるこず - この3぀のうち、1぀目はOpenAIなどの研究により自明 - 2, 3番目はこの論文の内容によっお、いくらか瀺唆された ### 感想 - 党䜓を通しお - 産業界から芋たLLMの発展による職業ぞの懞念が定量的に怜蚌されおいお、統蚈量ずしお出おいる結果も玍埗感が高い - 今埌の職業遞択や、蚓緎内容の参考にすべき内容が倚いず蚀える - ずはいえ、この結果を受け、悲芳的になりすぎる必芁はないずも考える - LLMはこれたでの人の知識の集積なのだから、人が想定できるような意芋を出すのは驚くべきこずではない。今の人類の想像力の範疇での未来予枬に過ぎない - これから、人類はLLMありきの䞖界で生き、その䞭でむノベヌションを起こしおいくのである - 泚意すべき結果 - 日本に応甚する堎合 - 職業の皮類やラベル付けの方法においお、ただバむアスが倧きい印象であり、応甚可胜性は限定的ず考える - 今埌䌌た研究が、よりバむアスを避けた蚭蚈で行われるこずを埅ちたい - 基本スキルの解釈 - 「スキル」ず「職業」は区別しお考えるべき - たずえば、「Mathematics」スキルはLLMぞの曝露が倧きいず結果では瀺されおいるものの、職業ずしおの「数孊者」が䞍芁ずは捉えられない - 「数孊者」に必芁なスキルには、「Critical Thinking」や「Science」も含たれおおり、これらはLLMぞの曝露が小さいスキルである [参照](https://www.onetonline.org/link/summary/15-2021.00) - あくたでも、「数孊者」の仕事のうち、耇雑な匏倉圢やよく䜿われる数孊的知識の導入など、䞀郚の䜜業にかかる負荷が枛るず捉えるべきである - LLM以倖の技術による曝露の圱響 - 本研究では、あくたでもLLMを起因ずした䜜業負荷の枛少のみにフォヌカスしおいるため、その他の自動化技術や工業機噚等による曝露は、本研究ずは関連なしに進む可胜性がある - たずえば蟲業や林業など、LLM以倖の技術により曝露が進められおきた産業は、今埌もLLM以倖の技術により、人間の圹割が倉化しおいくず考えられる ## 重芁な匕甚 - [GPT-4 System Card](https://cdn.openai.com/papers/gpt-4-system-card.pdf) ## 
関連論文 - Brynjolfsson, E., Frank, M. R., Mitchell, T., Rahwan, I., and Rock, D. (2023). Quantifying the Distribution of Machine Learning’s Impact on Work. Forthcoming. - 機械孊習が仕事に䞎える圱響の分垃の定量化 近々公開(2023幎4月珟圚の情報) - 本論文に最も圱響を䞎えた研究ずしお、関連研究で取り䞊げられおいる ## 参考情報 - [O*NET OnLine](https://www.onetonline.org/) - 本論文で䜿甚されおいる、米囜の職業デヌタベヌス O*NET のオンラむンリ゜ヌス - [GPT-3 論文サマリヌ](https://github.com/TeamAidemy/ds-paper-summaries/issues/6) - 本論文の䞻題ずなっおいる GPT-4 の、1぀前のメゞャヌバヌゞョンである GPT-3 の玹介 - [InstructGPT 論文サマリヌ](https://github.com/TeamAidemy/ds-paper-summaries/issues/11) - GPT-3 を、人間のフィヌドバックを甚いた匷化孊習でFine-tuningしたモデル InstructGPT の玹介
1.0
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models - Tyna Eloundou, Sam Manning, Pamela Mishkin, Daniel Rock. 2023. “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.” arXiv [econ.GN]. arXiv. https://arxiv.org/abs/2303.10130. - 米囜の各職業を察象に、どれほどLLMによっお仕事の時間が削枛されうるかをラベリングし、経枈瀟䌚にLLMが及がしうる圱響を定量的に評䟡 - LLMの蚀語生成胜力をそのたた䜿甚した堎合の業務削枛量だけでなく、補助的なアプリケヌションが開発され、それを掻甚した堎合の業務削枛量も予枬しお分析 - 職業単䜍だけでなく、タスクや必芁ずされるスキルに分解した結果も提瀺 - 各職業の、LLMによる業務削枛量のラベリングは、人間ずGPT-4の双方により行われた - ラベリング結果に倧きな差はなかったものの、人間の方が、LLMによる業務削枛量を倚めに芋積もる傟向があった - 集蚈された結果より、米囜の劎働者の少なくずも10%のタスクの業務時間がLLMにより半分に短瞮されうるず掚定できる - 曎に、19%の劎働者に぀いおは、半分以䞊のタスクの業務時間がLLMにより半分に短瞮されうるず掚定 - 傟向ずしお、高賃金の職業になるほど、LLMにより䜜業時間が短瞮されるタスクが倚い - 広範な調査の末、GPT(Generative Pretrained Transformers) は、いわゆる汎甚技術ずしおのGPT(general-purpose technologies)ずしおの特城を瀺し、瀟䌚に倧きな圱響を䞎えうるず結論づけた ## Abstract We investigate the potential implications of large language models (LLMs), such as Generative Pre-trained Transformers (GPTs), on the U.S. labor market, focusing on the increased capabilities arising from LLM-powered software compared to LLMs on their own. Using a new rubric, we assess occupations based on their alignment with LLM capabilities, integrating both human expertise and GPT-4 classifications. Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted. We do not make predictions about the development or adoption timeline of such LLMs. The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software. Significantly, these impacts are not restricted to industries with higher recent productivity growth. 
Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks. This finding implies that LLM-powered software will have a substantial effect on scaling the economic impacts of the underlying models. We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications. (DeepL翻蚳) 我々は、GPTGenerative Pre-trained Transformersのような倧芏暡蚀語モデルLLMが米囜の劎働垂堎に䞎える朜圚的な圱響を調査し、LLM単䜓ず比范しおLLM搭茉の゜フトりェアから生じる胜力の向䞊に焊点を圓おる。新しい評䟡基準を甚いお、人間の専門知識ずGPT-4分類の䞡方を統合し、LLMの胜力ずの敎合性に基づいお職業を評䟡したした。その結果、米囜の劎働者の玄80が、LLMの導入により少なくずも10の業務に圱響を受ける可胜性があり、玄19の劎働者は少なくずも50の業務に圱響を受ける可胜性があるこずが明らかになりたした。たた、LLMの開発・導入時期に぀いおは予枬を行っおいたせん。予枬される圱響はすべおの賃金氎準に及び、高所埗の職皮ほどLLMの機胜やLLMを搭茉した゜フトりェアに觊れる機䌚が倚くなる可胜性がありたす。重芁なのは、こうした圱響は、最近の生産性䞊昇率が高い産業に限定されないずいうこずである。我々の分析によるず、LLMを利甚するこずで、米囜における劎働者の党䜜業の玄15が、同じ品質レベルで倧幅に速く完了する可胜性があるこずが瀺唆された。LLMの䞊に構築された゜フトりェアやツヌルを組み蟌むず、この割合は党䜜業の4756%に増加する。この発芋は、LLMを搭茉した゜フトりェアが、基瀎ずなるモデルの経枈的圱響を拡倧する䞊で倧きな効果を発揮するこずを瀺唆しおいる。GPTのようなLLMは、汎甚的な技術であり、経枈的、瀟䌚的、政策的に倧きな意味を持぀可胜性があるず結論付けおいる。 ## 解決した課題/先行研究ずの比范 - 本研究では、既存のAI技術や自動化技術の劎働垂堎ぞの圱響に぀いおの調査ず比范しお、蚀語モデルのより広く、朜圚的な圱響を調査 - 曎に、過去の汎甚技術e.g. 印刷、蒞気機関の䜿われ方を加味し、蚀語モデルそのものの胜力だけでなく、蚀語モデルず実䜜業の補完を行うむノベヌションの登堎の予枬も考慮 ## 技術・手法のポむント ### 䞻芁な分析手順 1. 米囜における職業別掻動・業務に関する、O*NET 27.2 デヌタベヌスに含たれる、19,265のタスクず2,087のDWA(Detaild Work Activities)に察し、以䞋の3通りのラベル付けを、人間ずGPT-4の䞡方で実斜 - E0 : No Exposure : LLMを䜿甚するこずで、同等レベルのクオリティを維持しながらタスクを遂行する時間が枛少しないLLMぞの曝露がない - E1 : Direct Exposure : LLMを䜿甚するこずで、同等レベルのクオリティを維持しながらタスクを遂行する時間が50%以䞋になるLLMぞの曝露が倧きい - E2 : LLM+ Exposed : LLMのみではタスクの遂行時間は枛少しないものの、LLMの䞊に構築されたアプリケヌションず合わせお䜿甚するこずで、タスクを遂行する時間が50%以䞋になる。画像生成システムぞのアクセスも加味。LLMぞの曝露が将来的に考えられる 2. 
以䞊のタスクずDWAのラベリング結果を、職業ごずに集蚈 - この集蚈時に、以䞋3぀のパタヌンで重み付け和をずるこずで、掚蚈のバリ゚ヌションをもたせおいる - α = E1 - β = E1 + 0.5×E2 - ζ = E1 + E2 - αは、LLM単䜓の胜力によっお業務遂行時間が枛少しおいる床合い。ζは、長期的に゜フトりェアが開発されおいっお遂行時間が枛少する床合い。βはその䞭間、のように捉えるずよい 3. 職業ごずにラベリングされた3぀のスコア(α, β, ζ)を甚い、職業に必芁なスキルや、職業ごずの幎収ず関連付けお曎に分析 - 3぀のスコア(α, β, ζ)を目的倉数に、それぞれの職業に玐づく、O*NET基本スキルカテゎリの各スキルの重芁床を説明倉数ずし、回垰分析を行い、各スキルの寄䞎率を算出 - 幎収ず3぀のスコア(α, β, ζ)の盞関を調査、など ### 人間ずGPT-4によるラベル付け結果の比范 - 人間ずGPT-4によるラベル付け結果の盞関 <p align="center"> <img src="https://user-images.githubusercontent.com/33014616/229649328-e3351be8-5644-48e4-9299-bcad36f62001.png" width=600px> </p> - 職業ごずに集蚈されたβスコアの散垃図 ![image](https://user-images.githubusercontent.com/33014616/229649789-64594e30-38dc-4406-a783-e8a498a388d6.png) - 基本的にはよく盞関しおいるものの、グラフの右䞊の領域においおのみ、人間のほうが高いスコアを付ける傟向が芋られる - 理由は䞍明だが、人間のほうが、LLMぞの暎露が倧きい職業においお、GPT-4よりも曝露スコアを高く芋積もっおいる ### 本分析のデヌタセットずラベル付け方法の限界 - ぀けられたラベルの䞻芳性 - ラベル付けを行った人間は、OpenAIお抱えのアノテヌタヌ - 職業の倚様性に欠けるため、各職業の個別のタスクに詳しくなく、LLMの䜿甚に慣れおいる傟向がある - 曝露の床合いの芋積もりが䞍正確な可胜性がある - LLMの発展は極めお速いため、ラベルの内容や分析結果は、倧きく倉化する可胜性がある ## 分析結果 ### 3぀のスコア(α, β, ζ)の芁玄統蚈量 <p align="center"> <img src="https://user-images.githubusercontent.com/33014616/229652540-560dc4f6-758f-412d-8c8e-685f17573c74.png" width=200px> </p> ![image](https://user-images.githubusercontent.com/33014616/229652561-17fbc6a7-1f9e-49dd-8011-d38e457357e3.png) - 読み取れるこず - 80%の劎働者は、10のタスクがLLMの圱響を受ける - 19%の劎働者は、半分以䞊のタスクがLLMの圱響を受ける ### 職業の収入ずの関係 ![image](https://user-images.githubusercontent.com/33014616/229652501-d4ea9e61-565b-412f-9895-4002e98d759b.png) - 読み取れるこず - 高幎収の職業ほど、LLMぞの曝露が倧きい傟向がある ### 暎露スコアが倧きい職業のリスト <p align="center"> <img src="https://user-images.githubusercontent.com/33014616/229653706-d74fca6f-efba-44a1-a32d-2ad4183e2d8a.png" width=400px> </p> - 読み取れるこず - Human α: LLM単䜓による曝露が倧きい職業 - 翻蚳家 - サヌベむ研究者 - 䜜詞家、クリ゚むティブラむタヌ - 動物科孊者 - 広報スペシャリスト - Human ζ: LLMず、それに付随するアプリケヌションによる曝露が倧きい職業 - 数孊者 - 皎理士 - 金融クオンツアナリスト - 䜜家 - Web・デゞタルむンタヌフェヌスデザむナヌ - Highest variance: 分散が倧きい職業䜜業時間が50%以䞋に枛少するタスクず、枛少しないタスクの混圚床合いが倧きい職業 - 
怜玢マヌケティングストラテゞスト - グラフィックデザむナヌ - 投資ファンドマネヌゞャヌ - 金融マネヌゞャヌ - 自動車損害保険鑑定士 ### 職業の基本スキルの暎露スコアぞの寄䞎率 <p align="center"> <img src="https://user-images.githubusercontent.com/33014616/229654827-9c51c6fa-4432-4533-adf3-2c79c192c2dd.png" width=600px> </p> - 読み取れるこず - αぞの寄䞎が倧きいスキルLLM単䜓による曝露が倧きいスキル - Programming 0.637 - Writing 0.368 - Reading Comprehension 0.153 - ζぞの寄䞎が倧きいスキルLLMず付随するアプリケヌションによる曝露が倧きいスキル - Mathematics 0.787 - Programming 0.609 - Writing 0.566 - Active Listening 0.449 - Speaking 0.294 - αもζも䜎いスキルLLMや呚蟺アプリケヌションの発展による曝露が小さいスキル - Science -0.346 - Learning Strategies -0.346 - Monitoring -0.232 - Critical Thinking -0.129 ※ 以䞊の結果の受け止め方に぀いおは、埌述の 感想>泚意すべき結果>基本スキルの解釈 も参照されたい。 ## 残された課題・議論・感想 ### この研究の限界 - 米囜以倖に適甚できるか - 産業組織、技術むンフラ、芏制の枠組み、蚀語の倚様性、文化的背景に倧きな違いがあるため、米囜以倖ぞの適甚は限定的である - この研究の方法を公衚するこずで、他の集団に぀いおも調査できるようにしお、察凊したい - 珟状のLLMの胜力のすべおを完璧に反映した研究ではない - 䟋えばα評䟡では、GPT-4の画像を扱う胜力を考慮しおない - もっず倚くの職業の暎露スコアが高くなる可胜性がある - LLMの胜力の進歩が展開するに぀れお、その圱響を怜蚎する必芁がある ### LLMは汎甚技術ず呌べるのか - そうみなせる特城は瀺しおいる - LLMが汎甚技術 General Purpose Technology ずみなされるには、以䞋3぀の条件を満たす必芁がある - 時間の経過ずずもに改善するこず - 経枈党䜓に浞透するこず - 補完的なむノベヌションが生み出されるこず - この3぀のうち、1぀目はOpenAIなどの研究により自明 - 2, 3番目はこの論文の内容によっお、いくらか瀺唆された ### 感想 - 党䜓を通しお - 産業界から芋たLLMの発展による職業ぞの懞念が定量的に怜蚌されおいお、統蚈量ずしお出おいる結果も玍埗感が高い - 今埌の職業遞択や、蚓緎内容の参考にすべき内容が倚いず蚀える - ずはいえ、この結果を受け、悲芳的になりすぎる必芁はないずも考える - LLMはこれたでの人の知識の集積なのだから、人が想定できるような意芋を出すのは驚くべきこずではない。今の人類の想像力の範疇での未来予枬に過ぎない - これから、人類はLLMありきの䞖界で生き、その䞭でむノベヌションを起こしおいくのである - 泚意すべき結果 - 日本に応甚する堎合 - 職業の皮類やラベル付けの方法においお、ただバむアスが倧きい印象であり、応甚可胜性は限定的ず考える - 今埌䌌た研究が、よりバむアスを避けた蚭蚈で行われるこずを埅ちたい - 基本スキルの解釈 - 「スキル」ず「職業」は区別しお考えるべき - たずえば、「Mathematics」スキルはLLMぞの曝露が倧きいず結果では瀺されおいるものの、職業ずしおの「数孊者」が䞍芁ずは捉えられない - 「数孊者」に必芁なスキルには、「Critical Thinking」や「Science」も含たれおおり、これらはLLMぞの曝露が小さいスキルである [参照](https://www.onetonline.org/link/summary/15-2021.00) - あくたでも、「数孊者」の仕事のうち、耇雑な匏倉圢やよく䜿われる数孊的知識の導入など、䞀郚の䜜業にかかる負荷が枛るず捉えるべきである - LLM以倖の技術による曝露の圱響 - 本研究では、あくたでもLLMを起因ずした䜜業負荷の枛少のみにフォヌカスしおいるため、その他の自動化技術や工業機噚等による曝露は、本研究ずは関連なしに進む可胜性がある - 
たずえば蟲業や林業など、LLM以倖の技術により曝露が進められおきた産業は、今埌もLLM以倖の技術により、人間の圹割が倉化しおいくず考えられる ## 重芁な匕甚 - [GPT-4 System Card](https://cdn.openai.com/papers/gpt-4-system-card.pdf) ## 関連論文 - Brynjolfsson, E., Frank, M. R., Mitchell, T., Rahwan, I., and Rock, D. (2023). Quantifying the Distribution of Machine Learning’s Impact on Work. Forthcoming. - 機械孊習が仕事に䞎える圱響の分垃の定量化 近々公開(2023幎4月珟圚の情報) - 本論文に最も圱響を䞎えた研究ずしお、関連研究で取り䞊げられおいる ## 参考情報 - [O*NET OnLine](https://www.onetonline.org/) - 本論文で䜿甚されおいる、米囜の職業デヌタベヌス O*NET のオンラむンリ゜ヌス - [GPT-3 論文サマリヌ](https://github.com/TeamAidemy/ds-paper-summaries/issues/6) - 本論文の䞻題ずなっおいる GPT-4 の、1぀前のメゞャヌバヌゞョンである GPT-3 の玹介 - [InstructGPT 論文サマリヌ](https://github.com/TeamAidemy/ds-paper-summaries/issues/11) - GPT-3 を、人間のフィヌドバックを甚いた匷化孊習でFine-tuningしたモデル InstructGPT の玹介
process
gpts are gpts an early look at the labor market impact potential of large language models tyna eloundou sam manning pamela mishkin daniel rock “gpts are gpts an early look at the labor market impact potential of large language models ” arxiv arxiv 米囜の各職業を察象に、どれほどllmによっお仕事の時間が削枛されうるかをラベリングし、経枈瀟䌚にllmが及がしうる圱響を定量的に評䟡 llmの蚀語生成胜力をそのたた䜿甚した堎合の業務削枛量だけでなく、補助的なアプリケヌションが開発され、それを掻甚した堎合の業務削枛量も予枬しお分析 職業単䜍だけでなく、タスクや必芁ずされるスキルに分解した結果も提瀺 各職業の、llmによる業務削枛量のラベリングは、人間ずgpt ラベリング結果に倧きな差はなかったものの、人間の方が、llmによる業務削枛量を倚めに芋積もる傟向があった 集蚈された結果より、 のタスクの業務時間がllmにより半分に短瞮されうるず掚定できる 曎に、 の劎働者に぀いおは、半分以䞊のタスクの業務時間がllmにより半分に短瞮されうるず掚定 傟向ずしお、高賃金の職業になるほど、llmにより䜜業時間が短瞮されるタスクが倚い 広範な調査の末、gpt generative pretrained transformers は、いわゆる汎甚技術ずしおのgpt general purpose technologies ずしおの特城を瀺し、瀟䌚に倧きな圱響を䞎えうるず結論づけた abstract we investigate the potential implications of large language models llms such as generative pre trained transformers gpts on the u s labor market focusing on the increased capabilities arising from llm powered software compared to llms on their own using a new rubric we assess occupations based on their alignment with llm capabilities integrating both human expertise and gpt classifications our findings reveal that around of the u s workforce could have at least of their work tasks affected by the introduction of llms while approximately of workers may see at least of their tasks impacted we do not make predictions about the development or adoption timeline of such llms the projected effects span all wage levels with higher income jobs potentially facing greater exposure to llm capabilities and llm powered software significantly these impacts are not restricted to industries with higher recent productivity growth our analysis suggests that with access to an llm about of all worker tasks in the us could be completed significantly faster at the same level of quality when incorporating software and tooling built on top of llms this share increases to between and of all tasks this finding implies that llm powered 
software will have a substantial effect on scaling the economic impacts of the underlying models we conclude that llms such as gpts exhibit traits of general purpose technologies indicating that they could have considerable economic social and policy implications DeepL translation: We investigate the potential impact of large language models (LLMs) such as GPT (Generative Pre-trained Transformers) on the US labor market, focusing on the increased capabilities arising from LLM-powered software compared with LLMs on their own. Using a new rubric, we assessed occupations based on their alignment with human expertise and GPT/LLM capabilities. The results reveal that 
 of workers could have 
 of their tasks affected, while 
 of workers could see 
 of their tasks affected. We make no predictions about when LLMs will be developed and adopted. The projected effects span all wage levels, and higher-income occupations may have more contact with LLM capabilities and LLM-powered software. Importantly, these impacts are not limited to industries with higher recent productivity growth. Our analysis suggests that, using an LLM, 
 of tasks could be completed significantly faster at the same level of quality. When software and tools built on top of LLMs are incorporated, this share rises to 
. This finding suggests that LLM-powered software will have a substantial effect on scaling the economic impact of the underlying models. We conclude that LLMs such as GPT are general-purpose technologies that could have major economic, social, and policy implications. Problem addressed — comparison with prior work: Compared with existing surveys of the labor-market impact of AI and automation technologies, this study investigates the broader, potential impact of language models. It also takes into account how past general-purpose technologies (e.g., printing, the steam engine) came to be used, considering not only the capabilities of language models themselves but also forecasting the emergence of innovations that complement language models in real work. Key points of the method — main analysis steps: For the detailed work activities (DWAs) in the O*NET database of US occupations and tasks, humans and GPT assigned one of the following labels: no exposure — using an LLM does not reduce the time needed to complete the task at an equivalent level of quality (no exposure to LLMs); direct exposure — using an LLM cuts the completion time to 
 or less (high exposure to LLMs); LLM+ exposed — the LLM alone does not reduce completion time, but used together with applications built on top of the LLM the time falls to 
 or less (access to image-generation systems is also taken into account; exposure to LLMs is conceivable in the future). The task/DWA labels were then aggregated by occupation; in this aggregation, variation is introduced into the estimates via three scores, α, β, and ζ: α captures how much task completion time is reduced by the LLM alone; ζ captures how much it falls in the long run as software is developed; β can be thought of as in between. Using α, β, and ζ, the analysis is extended by relating them to the skills each occupation requires and to occupational income: regressions with α, β, or ζ as the dependent variable and the importance of each O*NET basic-skill category for the occupation as explanatory variables yield each skill's contribution; correlations among α, β, and ζ, and between human and GPT ratings, are also examined. Humans vs. GPT: a scatter plot of β scores aggregated by occupation shows the two generally correlate well, but only in the upper-right region of the plot do humans tend to assign higher scores; the reason is unclear, but for occupations with high LLM exposure, humans rate exposure higher than GPT does. Limitations of the dataset and labeling method: the labels are subjective — the human labelers were OpenAI's own annotators, who lack occupational diversity, are not familiar with the individual tasks of each occupation, and tend to be accustomed to using LLMs, so the exposure estimates may be inaccurate; and because LLMs are evolving extremely quickly, the labels and the analysis results could change substantially. Results — summary statistics of α, β, and ζ. Takeaways: 
 of workers have 
 of their tasks affected by LLMs;
	 of workers have more than half of their tasks affected by LLMs. Relationship with occupational income — takeaway: higher-income occupations tend to have greater exposure to LLMs. Occupations with large exposure scores — takeaways: human α (large exposure from the LLM alone): translators; survey researchers; lyricists and creative writers; animal scientists; public relations specialists. human ζ (large exposure from the LLM plus the applications that accompany it): mathematicians; tax preparers; quantitative financial analysts; writers; web and digital interface designers. highest variance (occupations with the greatest mix of tasks whose completion time falls to 
 or less and tasks whose time does not fall): search marketing strategists; graphic designers; investment fund managers; financial managers; auto damage insurance appraisers. Contribution of basic occupational skills to the exposure scores — takeaways: skills contributing strongly to α (large exposure from the LLM alone): programming, writing, reading comprehension. skills contributing strongly to ζ (large exposure from the LLM plus accompanying applications): mathematics, programming, writing, active listening, speaking. skills low on both α and ζ (little exposure from LLMs or surrounding applications): science, learning strategies, monitoring, critical thinking. (On how to read these results, see also "points to note about the results — interpreting the basic skills" under "impressions" below.) Remaining issues, discussion, and impressions. Limitations of this study: can it be applied outside the US? Because of major differences in industrial organization, technical infrastructure, regulatory frameworks, linguistic diversity, and cultural context, applicability outside the US is limited; by publishing the method, the authors hope to enable surveys of other populations. The study does not perfectly reflect the full capabilities of today's LLMs: in the α evaluation, for example, many more occupations might receive high exposure scores at GPT-level capability, and as advances in LLM capability unfold, their impact will need to be examined. Can LLMs be called general-purpose technologies? They do show the defining traits: to count as a general-purpose technology, an LLM should improve over time, pervade the entire economy, and give rise to complementary innovations — and each of these was suggested to some degree. Impressions — overall: concerns in industry about the effect of LLM progress on occupations are verified quantitatively here, and the summary statistics feel convincing; there is much worth consulting for future career choices and training. That said, there is no need to become overly pessimistic about these results: LLMs are an accumulation of humanity's existing knowledge, so it is no surprise that they produce opinions humans could have anticipated — this is merely a forecast of the future within the bounds of present human imagination. From here on, humanity will live in a world premised on LLMs and will create innovations within it. Points to note about the results — applying them to Japan: the occupational categories and the labeling method still appear strongly biased, so applicability is limited; similar studies with designs that better avoid bias are awaited. Interpreting the basic skills: "skills" and "occupations" should be kept distinct — for example, although the results show the "mathematics" skill has large LLM exposure, this does not mean the occupation "mathematician" becomes unnecessary: the skills a mathematician needs also include "critical thinking" and "science", which have small LLM exposure; rather, the load of some parts of a mathematician's work — complex symbolic manipulation, bringing in commonly used mathematical results, and the like — should be understood as decreasing. Exposure driven by technologies other than LLMs: this study focuses only on reductions in workload caused by LLMs, so exposure from other automation technologies, industrial machinery, and the like may proceed independently of this study; in industries such as agriculture and forestry, for example, where exposure has so far advanced through non-LLM technologies, the human role will likely continue to change through non-LLM technologies. Key citations and related work: Brynjolfsson, E., Frank, M. R., Mitchell, T., Rahwan, I., and Rock, D., "Quantifying the Distribution of Machine Learning's Impact on Work",
forthcoming — taken up in the related-work section as the study that most influenced this paper. Reference information: the online resources of O*NET, the US occupational database used in this paper; an introduction to GPT, the subject of this paper; and an introduction to InstructGPT, a model obtained by fine-tuning GPT with reinforcement learning from human feedback.
1
10,456
13,235,285,512
IssuesEvent
2020-08-18 17:46:35
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Need Example for setting Secret Variables to use in bash script
Pri1 devops-cicd-process/tech devops/prod doc-enhancement
Hi! There is a nice example on how to set and use secret pipeline variables for Powershell : https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#secret-variables But I'm struggling to reproduce with bash (on a Ubuntu hosted agent VM). A similar example would be super nice ! Cheers ! --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Need Example for setting Secret Variables to use in bash script - Hi! There is a nice example on how to set and use secret pipeline variables for Powershell : https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#secret-variables But I'm struggling to reproduce with bash (on a Ubuntu hosted agent VM). A similar example would be super nice ! Cheers ! --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
need example for setting secret variables to use in bash script hi there is a nice example on how to set and use secret pipeline variables for powershell but i m struggling to reproduce with bash on a ubuntu hosted agent vm a similar example would be super nice cheers document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
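The record above asks for a bash counterpart to the documented PowerShell secret-variable example. A minimal sketch follows — the variable name `mySecret` and the script path are illustrative assumptions, not taken from the record. The key point is that Azure Pipelines does not expose secret variables to scripts automatically; they must be mapped into the step's environment, after which the bash side is ordinary environment-variable handling:

```shell
#!/bin/sh
# Pipeline-side mapping (assumed YAML, shown here as a comment for reference):
#
#   steps:
#   - bash: ./use-secret.sh
#     env:
#       MY_SECRET: $(mySecret)   # secrets must be mapped in explicitly
#
# Script-side usage. The fallback value below is only a stand-in so the
# sketch also runs outside a real pipeline.
MY_SECRET="${MY_SECRET:-simulated-value}"

if [ -z "$MY_SECRET" ]; then
  echo "MY_SECRET is not set" >&2
  exit 1
fi

# Never echo the secret itself; print only derived, non-sensitive info.
echo "secret length: ${#MY_SECRET}"
```

Inside a pipeline step with the `env:` mapping shown, the script sees the real secret; outside a pipeline the stand-in value is used and the script prints `secret length: 15`.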
3,456
6,543,916,108
IssuesEvent
2017-09-03 08:26:18
Jumpscale/portal9
https://api.github.com/repos/Jumpscale/portal9
closed
Import OS Templates/Images --- Feature Request
process_wontfix
Hi, A Feature Request in the hope it makes sense. A suggestion as it would be ideal from the Admin point of view (Admin Broker Portal) to have the capability of uploading The OS Templates/Images. (those would be img/qcow2 images KVM compatible.) Many thanks
1.0
Import OS Templates/Images --- Feature Request - Hi, A Feature Request in the hope it makes sense. A suggestion as it would be ideal from the Admin point of view (Admin Broker Portal) to have the capability of uploading The OS Templates/Images. (those would be img/qcow2 images KVM compatible.) Many thanks
process
import os templates images feature request hi a feature request in the hope it makes sense a suggestion as it would be ideal from the admin point of view admin broker portal to have the capability of uploading the os templates images those would be img images kvm compatible many thanks
1
421,913
28,365,702,545
IssuesEvent
2023-04-12 13:46:23
nvs-vocabs/ArgoVocabs
https://api.github.com/repos/nvs-vocabs/ArgoVocabs
opened
R08 - addition to the vocabulary description
documentation avtt
R08 is based on WMO "**Common Code Tables C-3 (CCT C-3): Instrument make and type for water temperature profile measurement with fall rate equation coefficients**" These tables are pretty hard to find online up to date. I would like here to keep track on this information and suggest to add it somewhere in the R08 description. The official code table is available here: https://library.wmo.int/doc_num.php?explnum_id=11283 The github repository is here: https://github.com/wmo-im/CCT/blob/master/C03.csv To suggest an addition to the CCT C-3, follow this example: https://github.com/wmo-im/CCT/issues/110 Note that R08 can be updated while we are still waiting for WMO CCT C-3 validation.
1.0
R08 - addition to the vocabulary description - R08 is based on WMO "**Common Code Tables C-3 (CCT C-3): Instrument make and type for water temperature profile measurement with fall rate equation coefficients**" These tables are pretty hard to find online up to date. I would like here to keep track on this information and suggest to add it somewhere in the R08 description. The official code table is available here: https://library.wmo.int/doc_num.php?explnum_id=11283 The github repository is here: https://github.com/wmo-im/CCT/blob/master/C03.csv To suggest an addition to the CCT C-3, follow this example: https://github.com/wmo-im/CCT/issues/110 Note that R08 can be updated while we are still waiting for WMO CCT C-3 validation.
non_process
addition to the vocabulary description is based on wmo common code tables c cct c instrument make and type for water temperature profile measurement with fall rate equation coefficients these tables are pretty hard to find online up to date i would like here to keep track on this information and suggest to add it somewhere in the description the official code table is available here the github repository is here to suggest an addition to the cct c follow this example note that can be updated while we are still waiting for wmo cct c validation
0
20,587
27,246,035,667
IssuesEvent
2023-02-22 02:08:13
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Update Vite + associated dependencies
process: dependencies CT pkg/app pkg/launchpad
### Current behavior I want to use Vitest in the monorepo for faster unit tests. I'd like to start adding more unit tests around the complex logic in app and launchpad, and since we use Vite there already, Vitest would be a good candidate. I tried adding it but we have some incompatibilities, we need to update Vite and esbuild before we add Vitest ### Desired behavior Update Vite related dependencies.
1.0
Update Vite + associated dependencies - ### Current behavior I want to use Vitest in the monorepo for faster unit tests. I'd like to start adding more unit tests around the complex logic in app and launchpad, and since we use Vite there already, Vitest would be a good candidate. I tried adding it but we have some incompatibilities, we need to update Vite and esbuild before we add Vitest ### Desired behavior Update Vite related dependencies.
process
update vite associated dependencies current behavior i want to use vitest in the monorepo for faster unit tests i d like to start adding more unit tests around the complex logic in app and launchpad and since we use vite there already vitest would be a good candidate i tried adding it but we have some incompatibilities we need to update vite and esbuild before we add vitest desired behavior update vite related dependencies
1
10,899
13,676,042,654
IssuesEvent
2020-09-29 13:26:49
JustBru00/RenamePlugin
https://api.github.com/repos/JustBru00/RenamePlugin
closed
[Suggestion] Lore text above enchantment text
Addition Request Processing
My feature request is new alternatives of existing commands (e.g., `/loreabove` and `/setlorelineabove`) to add lore text that appears directly underneath the item’s name rather than below enchantment text, if present. This would be greatly useful. It was suggested by [HypersGamertag](https://www.spigotmc.org/members/hypersgamertag.105024) in his forum post referenced in #19, but I couldn’t find a specific issue created for this particular suggestion as the issue was about his other suggestion. I’m not sure if this is possible. I’ve seen it done in mods, but mods can do anything, so I’m not sure if this is a possibility within vanilla Minecraft.
1.0
[Suggestion] Lore text above enchantment text - My feature request is new alternatives of existing commands (e.g., `/loreabove` and `/setlorelineabove`) to add lore text that appears directly underneath the item’s name rather than below enchantment text, if present. This would be greatly useful. It was suggested by [HypersGamertag](https://www.spigotmc.org/members/hypersgamertag.105024) in his forum post referenced in #19, but I couldn’t find a specific issue created for this particular suggestion as the issue was about his other suggestion. I’m not sure if this is possible. I’ve seen it done in mods, but mods can do anything, so I’m not sure if this is a possibility within vanilla Minecraft.
process
lore text above enchantment text my feature request is new alternatives of existing commands e g loreabove and setlorelineabove to add lore text that appears directly underneath the item’s name rather than below enchantment text if present this would be greatly useful it was suggested by in his forum post referenced in but i couldn’t find a specific issue created for this particular suggestion as the issue was about his other suggestion i’m not sure if this is possible i’ve seen it done in mods but mods can do anything so i’m not sure if this is a possibility within vanilla minecraft
1
4,455
7,327,007,588
IssuesEvent
2018-03-04 03:50:22
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Correction for Windows under supported client types
assigned-to-author automation bug in-process triaged
The table should say that Windows Server 2008 and higher only supports assessments, and then on the next line, it says that Windows Server 2008 R2 SP1 and higher is supported with .NET and WMF prerequisites. These two overlap with conflicting requirements It might be clearer with three separate line items: | Operating System | Notes | |-|-| |Windows Server 2008 - Windows Server 2008 R2 RTM | Only supports update assessments| |Windows Server 2008 R2 SP1 and higher|fully supported with .NET 4.5+ and WMF 5.0+| |Windows Server 2016 Nano Server | Not Supported| --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: db47901d-1664-058d-9407-47e932dc9661 * Version Independent ID: e90ec9ee-e7da-4f19-248f-4c825aaa8b9f * [Content](https://docs.microsoft.com/en-us/azure/automation/automation-update-management#clients) * [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-update-management.md) * Service: automation
1.0
Correction for Windows under supported client types - The table should say that Windows Server 2008 and higher only supports assessments, and then on the next line, it says that Windows Server 2008 R2 SP1 and higher is supported with .NET and WMF prerequisites. These two overlap with conflicting requirements It might be clearer with three separate line items: | Operating System | Notes | |-|-| |Windows Server 2008 - Windows Server 2008 R2 RTM | Only supports update assessments| |Windows Server 2008 R2 SP1 and higher|fully supported with .NET 4.5+ and WMF 5.0+| |Windows Server 2016 Nano Server | Not Supported| --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: db47901d-1664-058d-9407-47e932dc9661 * Version Independent ID: e90ec9ee-e7da-4f19-248f-4c825aaa8b9f * [Content](https://docs.microsoft.com/en-us/azure/automation/automation-update-management#clients) * [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-update-management.md) * Service: automation
process
correction for windows under supported client types the table should say that windows server and higher only supports assessments and then on the next line it says that windows server and higher is supported with net and wmf prerequisites these two overlap with conflicting requirements it might be clearer with three separate line items operating system notes windows server windows server rtm only supports update assessments windows server and higher fully supported with net and wmf windows server nano server not supported document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id service automation
1
4,599
7,440,272,062
IssuesEvent
2018-03-27 09:34:11
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
closed
Processing guidelines for help files is outdated
Guidelines Processing
(Picked from http://hub.qgis.org/issues/14599) Here the link of the official webpage: http://docs.qgis.org/2.8/en/docs/documentation_guidelines/README.html#documenting-processing-algorithms some points are outdated: - SAGA help files are not available anymore - it is quite difficult for a potential doc writer to understand where the files are located - QGIS algs have .yml syntax - maybe it is worth to have a standard template for all the providers (at the moment gdal/ogr points to their own docs, SAGA are missing, GRASS and TauDEM have their own..)
1.0
Processing guidelines for help files is outdated - (Picked from http://hub.qgis.org/issues/14599) Here the link of the official webpage: http://docs.qgis.org/2.8/en/docs/documentation_guidelines/README.html#documenting-processing-algorithms some points are outdated: - SAGA help files are not available anymore - it is quite difficult for a potential doc writer to understand where the files are located - QGIS algs have .yml syntax - maybe it is worth to have a standard template for all the providers (at the moment gdal/ogr points to their own docs, SAGA are missing, GRASS and TauDEM have their own..)
process
processing guidelines for help files is outdated picked from here the link of the official webpage some points are outdated saga help files are not available anymore it is quite difficult for a potential doc writer to understand where the files are located qgis algs have yml syntax maybe it is worth to have a standard template for all the providers at the moment gdal ogr points to their own docs saga are missing grass and taudem have their own
1
18,037
24,048,064,395
IssuesEvent
2022-09-16 10:07:05
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
sqlite: show better error when a permission denied error occurs
bug/1-unconfirmed kind/bug process/candidate topic: sqlite topic: error tech/engines topic: error reporting team/schema
<!-- If required, please update the title to be clear and descriptive --> Command: `prisma migrate reset` Version: `3.15.0` Binary Version: `b9297dc3a59307060c1c39d7e4f5765066f38372` Report: https://prisma-errors.netlify.app/report/14081 OS: `x64 linux 5.4.0-1018-aws` JS Stacktrace: ``` Error: Error in migration engine. Reason: [migration-engine/connectors/sql-migration-connector/src/flavour/sqlite.rs:194:46] failed to truncate sqlite file: Os { code: 13, kind: PermissionDenied, message: "Permission denied" } Please create an issue with your `schema.prisma` at https://github.com/prisma/prisma/issues/new at handlePanic (/home/../app/node_modules/prisma/build/index.js:93373:25) at ChildProcess.<anonymous> (/home/../app/node_modules/prisma/build/index.js:93382:15) at ChildProcess.emit (node:events:390:28) at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12) ``` Rust Stacktrace: ``` Starting migration engine RPC server [migration-engine/connectors/sql-migration-connector/src/flavour/sqlite.rs:194:46] failed to truncate sqlite file: Os { code: 13, kind: PermissionDenied, message: "Permission denied" } ```
1.0
sqlite: show better error when a permission denied error occurs - <!-- If required, please update the title to be clear and descriptive --> Command: `prisma migrate reset` Version: `3.15.0` Binary Version: `b9297dc3a59307060c1c39d7e4f5765066f38372` Report: https://prisma-errors.netlify.app/report/14081 OS: `x64 linux 5.4.0-1018-aws` JS Stacktrace: ``` Error: Error in migration engine. Reason: [migration-engine/connectors/sql-migration-connector/src/flavour/sqlite.rs:194:46] failed to truncate sqlite file: Os { code: 13, kind: PermissionDenied, message: "Permission denied" } Please create an issue with your `schema.prisma` at https://github.com/prisma/prisma/issues/new at handlePanic (/home/../app/node_modules/prisma/build/index.js:93373:25) at ChildProcess.<anonymous> (/home/../app/node_modules/prisma/build/index.js:93382:15) at ChildProcess.emit (node:events:390:28) at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12) ``` Rust Stacktrace: ``` Starting migration engine RPC server [migration-engine/connectors/sql-migration-connector/src/flavour/sqlite.rs:194:46] failed to truncate sqlite file: Os { code: 13, kind: PermissionDenied, message: "Permission denied" } ```
process
sqlite show better error when a permission denied error occurs command prisma migrate reset version binary version report os linux aws js stacktrace error error in migration engine reason failed to truncate sqlite file os code kind permissiondenied message permission denied please create an issue with your schema prisma at at handlepanic home app node modules prisma build index js at childprocess home app node modules prisma build index js at childprocess emit node events at process childprocess handle onexit node internal child process rust stacktrace starting migration engine rpc server failed to truncate sqlite file os code kind permissiondenied message permission denied
1
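The `PermissionDenied` (code 13) in the record above occurs before any Prisma-specific logic: truncating the SQLite file simply requires write permission for the user running `prisma migrate reset`. A minimal shell sketch of the failing precondition and the usual fix — the file name and modes are illustrative assumptions, not Prisma's actual layout:

```shell
#!/bin/sh
# Recreate the precondition: a database file the current user cannot write.
tmpdir=$(mktemp -d)
db="$tmpdir/dev.db"
touch "$db"

chmod 444 "$db"   # read-only, as a restrictive deploy or copy step might leave it
echo "mode before: $(ls -l "$db" | cut -c1-10)"
# In this state, a non-root process that tries to truncate the file (as the
# migration engine does) receives EACCES -> "Permission denied" (code 13).

chmod 644 "$db"   # grant the owner write access: the usual fix
echo "mode after: $(ls -l "$db" | cut -c1-10)"

rm -rf "$tmpdir"
```

This prints `mode before: -r--r--r--` followed by `mode after: -rw-r--r--`; checking (and correcting) the mode of the `.db` file and its parent directory is the first diagnostic step for this error.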
264,712
23,135,186,174
IssuesEvent
2022-07-28 13:48:48
elastic/elasticsearch
https://api.github.com/repos/elastic/elasticsearch
closed
Simplify BalanceUnbalancedClusterTest
>test :Distributed/Allocation team-discuss Team:Distributed
This takes 10 seconds or more, while other allocation tests are almost instantaneous. Can we simplify this? It looks like it tries to do a basic allocation (5 shards, 1 replica) of a new index when a _ton_ of indexes already exist on just 4 nodes. Perhaps we could test similar circumstances without thousands of shards? Alternatively, we could just make this an integration test (leave the impl, but rename to IT). It doesn't really seem like a unit test as it is now. Also, as a side note, this test is the only user of CatAllocationTestCase. Perhaps we can also eliminate this abstraction and just test directly (eliminating the zipped shard state)? @s1monw do you have any thoughts here?
1.0
Simplify BalanceUnbalancedClusterTest - This takes 10 seconds or more, while other allocation tests are almost instantaneous. Can we simplify this? It looks like it tries to do a basic allocation (5 shards, 1 replica) of a new index when a _ton_ of indexes already exist on just 4 nodes. Perhaps we could test similar circumstances without thousands of shards? Alternatively, we could just make this an integration test (leave the impl, but rename to IT). It doesn't really seem like a unit test as it is now. Also, as a side note, this test is the only user of CatAllocationTestCase. Perhaps we can also eliminate this abstraction and just test directly (eliminating the zipped shard state)? @s1monw do you have any thoughts here?
non_process
simplify balanceunbalancedclustertest this takes seconds or more while other allocation tests are almost instantaneous can we simplify this it looks like it tries to do a basic allocation shards replica of a new index when a ton of indexes already exist on just nodes perhaps we could test similar circumstances without thousands of shards alternatively we could just make this an integration test leave the impl but rename to it it doesn t really seem like a unit test as it is now also as a side note this test is the only user of catallocationtestcase perhaps we can also eliminate this abstraction and just test directly eliminating the zipped shard state do you have any thoughts here
0
439,575
30,704,268,903
IssuesEvent
2023-07-27 04:05:06
aws/aws-cdk
https://api.github.com/repos/aws/aws-cdk
closed
AutoScalingGroup Constructor: Overloaded Constructor / Different Props
p2 feature-request response-requested @aws-cdk/aws-autoscaling documentation closed-for-staleness
### Describe the feature Create overloaded constructor(s) for `AutoScalingGroup` with different properties. These alternate constructors can contain either `launchTemplate` or `mixedInstancesPolicy` while simultaneously removing all properties that must not be specified when either of these two props are passed into the original constructor. ### Use Case When migrating my CFN templates to CDK, it is difficult for me to debug my AutoScalingGroup CDK code because of the way ASGs are described in the CDK documentation and because of the way ASGs are defined in code. Building my templates takes a long time (even for a single package), and running into an error with `launchTemplate`, `mixedInstancesPolicy` and `machineImage` is frustrating. Properties such as `instanceType` specify that the `launchTemplate` / `mixedInstancesPolicy` must not be specified. However, `launchTemplate` and `mixedInstancesPolicy` do not specify the other properties that break them in return. To fix these issues, we can both add more detail to `launchTemplate` / `mixedInstancesPolicy` in the documentation and split the AutoScalingGroupProps interface into different interfaces to prevent this collision in the first place. 
### Proposed Solution constructor(scope: Construct, id: string, props: ASGLaunchTemplateProps); // ASGLaunchTemplateProps is the same as AutoScalingGroupProps minus the properties // that state `launchTemplate` must not be specified export interface ASGLaunchTemplateProps extends CommonASGLaunchTemplateProps { readonly vpc: ec2.IVpc; readonly launchTemplate?: ec2.ILaunchTemplate; readonly init?: ec2.CloudFormationInit; readonly initOptions?: ApplyCloudFormationInitOptions; readonly requireImdsv2?: boolean; } // CommonASGLaunchTemplateProps is the same as CommonAutoScalingGroupProps minus the properties // that state `launchTemplate` must not be specified export interface CommonAutoScalingGroupProps { readonly minCapacity?: number; readonly maxCapacity?: number; readonly desiredCapacity?: number; readonly vpcSubnets?: ec2.SubnetSelection; readonly notifications?: NotificationConfiguration[]; readonly allowAllOutbound?: boolean; readonly ignoreUnmodifiedSizeProperties?: boolean; readonly cooldown?: Duration; readonly healthCheck?: HealthCheck; readonly maxInstanceLifetime?: Duration; readonly groupMetrics?: GroupMetrics[]; readonly signals?: Signals; readonly updatePolicy?: UpdatePolicy; readonly newInstancesProtectedFromScaleIn?: boolean; readonly autoScalingGroupName?: string; readonly terminationPolicies?: TerminationPolicy[]; readonly defaultInstanceWarmup?: Duration; readonly ssmSessionPermissions?: boolean; } // Note: The original CommonAutoScalingGroupProps can also be modified to move the properties // that state "`launchTemplate` must not be specified" to AutoScalingGroupProps ### Other Information _No response_ ### Acknowledgements - [x] I may be able to implement this feature request - [ ] This feature might incur a breaking change ### CDK version used 2.87.0 ### Environment details (OS name and version, etc.) OSX / Using AL2_x86_64
1.0
AutoScalingGroup Constructor: Overloaded Constructor / Different Props - ### Describe the feature Create overloaded constructor(s) for `AutoScalingGroup` with different properties. These alternate constructors can contain either `launchTemplate` or `mixedInstancesPolicy` while simultaneously removing all properties that must not be specified when either of these two props are passed into the original constructor. ### Use Case When migrating my CFN templates to CDK, it is difficult for me to debug my AutoScalingGroup CDK code because of the way ASGs are described in the CDK documentation and because of the way ASGs are defined in code. Building my templates takes a long time (even for a single package), and running into an error with `launchTemplate`, `mixedInstancesPolicy` and `machineImage` is frustrating. Properties such as `instanceType` specify that the `launchTemplate` / `mixedInstancesPolicy` must not be specified. However, `launchTemplate` and `mixedInstancesPolicy` do not specify the other properties that break them in return. To fix these issues, we can both add more detail to `launchTemplate` / `mixedInstancesPolicy` in the documentation and split the AutoScalingGroupProps interface into different interfaces to prevent this collision in the first place. 
### Proposed Solution constructor(scope: Construct, id: string, props: ASGLaunchTemplateProps); // ASGLaunchTemplateProps is the same as AutoScalingGroupProps minus the properties // that state `launchTemplate` must not be specified export interface ASGLaunchTemplateProps extends CommonASGLaunchTemplateProps { readonly vpc: ec2.IVpc; readonly launchTemplate?: ec2.ILaunchTemplate; readonly init?: ec2.CloudFormationInit; readonly initOptions?: ApplyCloudFormationInitOptions; readonly requireImdsv2?: boolean; } // CommonASGLaunchTemplateProps is the same as CommonAutoScalingGroupProps minus the properties // that state `launchTemplate` must not be specified export interface CommonAutoScalingGroupProps { readonly minCapacity?: number; readonly maxCapacity?: number; readonly desiredCapacity?: number; readonly vpcSubnets?: ec2.SubnetSelection; readonly notifications?: NotificationConfiguration[]; readonly allowAllOutbound?: boolean; readonly ignoreUnmodifiedSizeProperties?: boolean; readonly cooldown?: Duration; readonly healthCheck?: HealthCheck; readonly maxInstanceLifetime?: Duration; readonly groupMetrics?: GroupMetrics[]; readonly signals?: Signals; readonly updatePolicy?: UpdatePolicy; readonly newInstancesProtectedFromScaleIn?: boolean; readonly autoScalingGroupName?: string; readonly terminationPolicies?: TerminationPolicy[]; readonly defaultInstanceWarmup?: Duration; readonly ssmSessionPermissions?: boolean; } // Note: The original CommonAutoScalingGroupProps can also be modified to move the properties // that state "`launchTemplate` must not be specified" to AutoScalingGroupProps ### Other Information _No response_ ### Acknowledgements - [x] I may be able to implement this feature request - [ ] This feature might incur a breaking change ### CDK version used 2.87.0 ### Environment details (OS name and version, etc.) OSX / Using AL2_x86_64
non_process
autoscalinggroup constructor overloaded constructor different props describe the feature create overloaded constructor s for autoscalinggroup with different properties these alternate constructors can contain either launchtemplate or mixedinstancespolicy while simultaneously removing all properties that must not be specified when either of these two props are passed into the original constructor use case when migrating my cfn templates to cdk it is difficult for me to debug my autoscalinggroup cdk code because of the way asgs are described in the cdk documentation and because of the way asgs are defined in code building my templates takes a long time even for a single package and running into an error with launchtemplate mixedinstancespolicy and machineimage is frustrating properties such as instancetype specify that the launchtemplate mixedinstancespolicy must not be specified however launchtemplate and mixedinstancespolicy do not specify the other properties that break them in return to fix these issues we can both add more detail to launchtemplate mixedinstancespolicy in the documentation and split the autoscalinggroupprops interface into different interfaces to prevent this collision in the first place proposed solution constructor scope construct id string props asglaunchtemplateprops asglaunchtemplateprops is the same as autoscalinggroupprops minus the properties that state launchtemplate must not be specified export interface asglaunchtemplateprops extends commonasglaunchtemplateprops readonly vpc ivpc readonly launchtemplate ilaunchtemplate readonly init cloudformationinit readonly initoptions applycloudformationinitoptions readonly boolean commonasglaunchtemplateprops is the same as commonautoscalinggroupprops minus the properties that state launchtemplate must not be specified export interface commonautoscalinggroupprops readonly mincapacity number readonly maxcapacity number readonly desiredcapacity number readonly vpcsubnets subnetselection readonly 
notifications notificationconfiguration readonly allowalloutbound boolean readonly ignoreunmodifiedsizeproperties boolean readonly cooldown duration readonly healthcheck healthcheck readonly maxinstancelifetime duration readonly groupmetrics groupmetrics readonly signals signals readonly updatepolicy updatepolicy readonly newinstancesprotectedfromscalein boolean readonly autoscalinggroupname string readonly terminationpolicies terminationpolicy readonly defaultinstancewarmup duration readonly ssmsessionpermissions boolean note the original commonautoscalinggroupprops can also be modified to move the properties that state launchtemplate must not be specified to autoscalinggroupprops other information no response acknowledgements i may be able to implement this feature request this feature might incur a breaking change cdk version used environment details os name and version etc osx using
0
617,123
19,343,670,164
IssuesEvent
2021-12-15 08:34:14
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
samsung.com - see bug description
browser-firefox-mobile priority-important engine-gecko
<!-- @browser: Firefox Mobile 95.0 --> <!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:95.0) Gecko/95.0 Firefox/95.0 --> <!-- @reported_with: unknown --> **URL**: http://samsung.com **Browser / Version**: Firefox Mobile 95.0 **Operating System**: windows 10 **Tested Another Browser**: Yes Edge **Problem type**: Something else **Description**: all SAMSUNG devices are hacked **Steps to Reproduce**: Accesses information on emails passwords bank details. TV ( sky) through TV laptop or and phone. <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❀_
1.0
samsung.com - see bug description - <!-- @browser: Firefox Mobile 95.0 --> <!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:95.0) Gecko/95.0 Firefox/95.0 --> <!-- @reported_with: unknown --> **URL**: http://samsung.com **Browser / Version**: Firefox Mobile 95.0 **Operating System**: windows 10 **Tested Another Browser**: Yes Edge **Problem type**: Something else **Description**: all SAMSUNG devices are hacked **Steps to Reproduce**: Accesses information on emails passwords bank details. TV ( sky) through TV laptop or and phone. <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❀_
non_process
samsung com see bug description url browser version firefox mobile operating system windows tested another browser yes edge problem type something else description all samsung devices are hacked steps to reproduce accesses information on emails passwords bank details tv sky through tv laptop or and phone browser configuration none from with ❀
0
85,938
3,700,656,950
IssuesEvent
2016-02-29 09:37:30
cs2103jan2016-t09-2j/main
https://api.github.com/repos/cs2103jan2016-t09-2j/main
closed
Parse Adding Commands to Scheduled Tasks
priority.high status.ongoing type.task
- Decide all the existing possibilities to add new tasks into the scheduled tasks list - What about the keywords for date and time ? - Study all existing libraries. Which ones can be imported ?
1.0
Parse Adding Commands to Scheduled Tasks - - Decide all the existing possibilities to add new tasks into the scheduled tasks list - What about the keywords for date and time ? - Study all existing libraries. Which ones can be imported ?
non_process
parse adding commands to scheduled tasks decide all the existing possibilities to add new tasks into the scheduled tasks list what about the keywords for date and time study all existing libraries which ones can be imported
0
18,542
24,554,962,348
IssuesEvent
2022-10-12 15:11:56
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[iOS] [Standalone] 'Review' consent popup is getting displayed on sign in screen in the following scenario
Bug P1 iOS Process: Fixed Process: Tested QA Process: Tested dev
Steps: 1. Signup or sign in to the mobile app 2. Enroll to the study 3. In SB, update the consent for enrolled participants and publish the study updates 4. Go to the mobile app 5. Minimize the app once after getting the 'Review' consent popup 6. Maximize the app 7. On the Passcode screen, click on ### 'Forgot passcode? Sign in again and observe AR: 'Review' consent popup is getting displayed on the sign-in screen ER: 'Review' consent popup should not be displayed on the sign-in screen ![image](https://user-images.githubusercontent.com/71445210/180446274-bb711bea-c6a3-4b70-b050-80a1e4597cf5.png)
3.0
[iOS] [Standalone] 'Review' consent popup is getting displayed on sign in screen in the following scenario - Steps: 1. Signup or sign in to the mobile app 2. Enroll to the study 3. In SB, update the consent for enrolled participants and publish the study updates 4. Go to the mobile app 5. Minimize the app once after getting the 'Review' consent popup 6. Maximize the app 7. On the Passcode screen, click on ### 'Forgot passcode? Sign in again and observe AR: 'Review' consent popup is getting displayed on the sign-in screen ER: 'Review' consent popup should not be displayed on the sign-in screen ![image](https://user-images.githubusercontent.com/71445210/180446274-bb711bea-c6a3-4b70-b050-80a1e4597cf5.png)
process
review consent popup is getting displayed on sign in screen in the following scenario steps signup or sign in to the mobile app enroll to the study in sb update the consent for enrolled participants and publish the study updates go to the mobile app minimize the app once after getting the review consent popup maximize the app on the passcode screen click on forgot passcode sign in again and observe ar review consent popup is getting displayed on the sign in screen er review consent popup should not be displayed on the sign in screen
1
34,150
9,300,419,535
IssuesEvent
2019-03-23 13:39:02
ekeeke/Genesis-Plus-GX
https://api.github.com/repos/ekeeke/Genesis-Plus-GX
closed
Still doesnt compile with latest devkitPPC version
build error
/media/demetris/c4487cfd-d71a-441b-b83d-23d8ce19475d/genesis-plus-gx/gx/main.c:381:25: error: 'S_IRWXU' undeclared (first use in this function) else mkdir(pathname,S_IRWXU);
1.0
Still doesnt compile with latest devkitPPC version - /media/demetris/c4487cfd-d71a-441b-b83d-23d8ce19475d/genesis-plus-gx/gx/main.c:381:25: error: 'S_IRWXU' undeclared (first use in this function) else mkdir(pathname,S_IRWXU);
non_process
still doesnt compile with latest devkitppc version media demetris genesis plus gx gx main c error s irwxu undeclared first use in this function else mkdir pathname s irwxu
0
106,628
11,493,553,831
IssuesEvent
2020-02-11 23:19:16
5Guys-NC/CAA-Niagara-Event-Management-System
https://api.github.com/repos/5Guys-NC/CAA-Niagara-Event-Management-System
opened
Create good version of Data Model
TODO documentation
Someone in the group will have a sketch of the data model once final. Please make a good copy of this.
1.0
Create good version of Data Model - Someone in the group will have a sketch of the data model once final. Please make a good copy of this.
non_process
create good version of data model someone in the group will have a sketch of the data model once final please make a good copy of this
0
294,028
9,012,327,709
IssuesEvent
2019-02-05 16:38:16
canonical-websites/snapcraft.io
https://api.github.com/repos/canonical-websites/snapcraft.io
closed
Building snaps LIVE Stream notification
Priority: Critical
Advocacy is going to do a LIVE Stream of a keynote called [building snaps](https://www.youtube.com/watch?v=BEp_l2oUcD8&ab_channel=snapcraft) on Youtube which will be live on 6 Sept @ 8pm (GMT) so we would like to include this as part of the homepage and the store homepage.
1.0
Building snaps LIVE Stream notification - Advocacy is going to do a LIVE Stream of a keynote called [building snaps](https://www.youtube.com/watch?v=BEp_l2oUcD8&ab_channel=snapcraft) on Youtube which will be live on 6 Sept @ 8pm (GMT) so we would like to include this as part of the homepage and the store homepage.
non_process
building snaps live stream notification advocacy is going to do a live stream of a keynote called on youtube which will be live on sept gmt so we would like to include this as part of the homepage and the store homepage
0
73,229
14,012,118,879
IssuesEvent
2020-10-29 08:34:10
Regalis11/Barotrauma
https://api.github.com/repos/Regalis11/Barotrauma
closed
[v0.1100.0.0 (unstable)] - Ballast flora permanently jams hatch open even after its killed
Bug Code
**Description** Ballast flora will permanently jam the hatch to ballast open since you're unable to plasma cut it to free it, even after its killed. **Steps To Reproduce** 1) spawn ballast flora 2) open the hatch topside of it (using dugong) 3) attack it with flame thrower **Version** [v0.1100.0.0 (unstable)] **Edit** It seems to be able to permanently jam open doors too (picture below) **Additional information** ![OpenJam](https://user-images.githubusercontent.com/72674994/97459469-c6f96300-1976-11eb-9db1-dbddcc9c4e93.jpg) ![doorjam2](https://user-images.githubusercontent.com/72674994/97463333-99aeb400-197a-11eb-9e58-5ff285f2b6eb.jpg)
1.0
[v0.1100.0.0 (unstable)] - Ballast flora permanently jams hatch open even after its killed - **Description** Ballast flora will permanently jam the hatch to ballast open since you're unable to plasma cut it to free it, even after its killed. **Steps To Reproduce** 1) spawn ballast flora 2) open the hatch topside of it (using dugong) 3) attack it with flame thrower **Version** [v0.1100.0.0 (unstable)] **Edit** It seems to be able to permanently jam open doors too (picture below) **Additional information** ![OpenJam](https://user-images.githubusercontent.com/72674994/97459469-c6f96300-1976-11eb-9db1-dbddcc9c4e93.jpg) ![doorjam2](https://user-images.githubusercontent.com/72674994/97463333-99aeb400-197a-11eb-9e58-5ff285f2b6eb.jpg)
non_process
ballast flora permanently jams hatch open even after its killed description ballast flora will permanently jam the hatch to ballast open since you re unable to plasma cut it to free it even after its killed steps to reproduce spawn ballast flora open the hatch topside of it using dugong attack it with flame thrower version edit it seems to be able to permanently jam open doors too picture below additional information
0
265,611
20,104,590,994
IssuesEvent
2022-02-07 09:15:39
vercel/next.js
https://api.github.com/repos/vercel/next.js
closed
Using `rewrites` in `next.config.js` causes double renders
area: documentation
### Run `next info` (available from version 12.0.8 and up) Operating System: Platform: darwin Arch: arm64 Version: Darwin Kernel Version 21.2.0: Sun Nov 28 20:29:10 PST 2021; root:xnu-8019.61.5~1/RELEASE_ARM64_T8101 Binaries: Node: 16.13.1 npm: 8.3.0 Yarn: 1.22.17 pnpm: 6.11.0 Relevant packages: next: 12.0.8-canary.17 react: 17.0.2 react-dom: 17.0.2 ### What version of Next.js are you using? 12.0.7 and 12.0.8-canary.17 both reproduce ### What version of Node.js are you using? 16.13.1 ### What browser are you using? Chrome and Safari both reproduce ### What operating system are you using? macOS 12.1 (21C52) ### How are you deploying your application? Vercel, `next dev` and `next start` all reproduce ### Describe the Bug When a `rewrites` function is specified in `next.config.js`, every page renders twice in the browser, regardless of the nature of this function. ### Expected Behavior Specifying a `rewrites` function does not cause double renders. ### To Reproduce 1. Clone [`https://github.com/valerie-makes/nextjs-rewrites-bug`](https://github.com/valerie-makes/nextjs-rewrites-bug) 2. Install dependencies: `npm install` 3. Build and start the app: `npm run build && npm start` 4. Visit the index page and observe the browser console: ``` rendering Home rendering Home ``` 5. Comment out the `rewrites` function in `next.config.js` 6. Repeat Step 3 and Step 4: ``` rendering Home ```
1.0
Using `rewrites` in `next.config.js` causes double renders - ### Run `next info` (available from version 12.0.8 and up) Operating System: Platform: darwin Arch: arm64 Version: Darwin Kernel Version 21.2.0: Sun Nov 28 20:29:10 PST 2021; root:xnu-8019.61.5~1/RELEASE_ARM64_T8101 Binaries: Node: 16.13.1 npm: 8.3.0 Yarn: 1.22.17 pnpm: 6.11.0 Relevant packages: next: 12.0.8-canary.17 react: 17.0.2 react-dom: 17.0.2 ### What version of Next.js are you using? 12.0.7 and 12.0.8-canary.17 both reproduce ### What version of Node.js are you using? 16.13.1 ### What browser are you using? Chrome and Safari both reproduce ### What operating system are you using? macOS 12.1 (21C52) ### How are you deploying your application? Vercel, `next dev` and `next start` all reproduce ### Describe the Bug When a `rewrites` function is specified in `next.config.js`, every page renders twice in the browser, regardless of the nature of this function. ### Expected Behavior Specifying a `rewrites` function does not cause double renders. ### To Reproduce 1. Clone [`https://github.com/valerie-makes/nextjs-rewrites-bug`](https://github.com/valerie-makes/nextjs-rewrites-bug) 2. Install dependencies: `npm install` 3. Build and start the app: `npm run build && npm start` 4. Visit the index page and observe the browser console: ``` rendering Home rendering Home ``` 5. Comment out the `rewrites` function in `next.config.js` 6. Repeat Step 3 and Step 4: ``` rendering Home ```
non_process
using rewrites in next config js causes double renders run next info available from version and up operating system platform darwin arch version darwin kernel version sun nov pst root xnu release binaries node npm yarn pnpm relevant packages next canary react react dom what version of next js are you using and canary both reproduce what version of node js are you using what browser are you using chrome and safari both reproduce what operating system are you using macos how are you deploying your application vercel next dev and next start all reproduce describe the bug when a rewrites function is specified in next config js every page renders twice in the browser regardless of the nature of this function expected behavior specifying a rewrites function does not cause double renders to reproduce clone install dependencies npm install build and start the app npm run build npm start visit the index page and observe the browser console rendering home rendering home comment out the rewrites function in next config js repeat step and step rendering home
0
10,657
13,451,655,407
IssuesEvent
2020-09-08 20:38:13
nlpie/biomedicus3
https://api.github.com/repos/nlpie/biomedicus3
opened
PyTorch-based sentences processor (and others) are bottlenecking multiprocessing.
area/performance/resources area/processing area/utilities kind/bug lang/python
Above 8 threads the Python GIL on the sentences processor starts to be a bottleneck for pipeline throughput. There are currently three plans/options for resolving: - Use a (torch.)multithreading.Pool to create multiple processes inside the processor for parallelism. - Deploying multiple instances of the sentences processor and then use round-robin load balancing at the grpc channel. - Use a pytorch model server (https://github.com/pytorch/serve) to serve the sentences model and a single multithreaded processor that calls to the model server.
1.0
PyTorch-based sentences processor (and others) are bottlenecking multiprocessing. - Above 8 threads the Python GIL on the sentences processor starts to be a bottleneck for pipeline throughput. There are currently three plans/options for resolving: - Use a (torch.)multithreading.Pool to create multiple processes inside the processor for parallelism. - Deploying multiple instances of the sentences processor and then use round-robin load balancing at the grpc channel. - Use a pytorch model server (https://github.com/pytorch/serve) to serve the sentences model and a single multithreaded processor that calls to the model server.
process
pytorch based sentences processor and others are bottlenecking multiprocessing above threads the python gil on the sentences processor starts to be a bottleneck for pipeline throughput there are currently three plans options for resolving use a torch multithreading pool to create multiple processes inside the processor for parallelism deploying multiple instances of the sentences processor and then use round robin load balancing at the grpc channel use a pytorch model server to serve the sentences model and a single multithreaded processor that calls to the model server
1
6,428
9,531,277,043
IssuesEvent
2019-04-29 15:38:33
google/personfinder
https://api.github.com/repos/google/personfinder
opened
Presubmit to check that messages are updated
release & process
Since `server_tests` updates them when it runs, it used to be nearly impossible to neglect this, since they'd get updated as long as you ran the server tests locally before you opened your PR. However, with the Django migration, we're moving away from the current server testing framework and using Django's test framework instead. Overall that's been really nice, because the Django tests are way faster, but it does mean you can forget to update messages.
1.0
Presubmit to check that messages are updated - Since `server_tests` updates them when it runs, it used to be nearly impossible to neglect this, since they'd get updated as long as you ran the server tests locally before you opened your PR. However, with the Django migration, we're moving away from the current server testing framework and using Django's test framework instead. Overall that's been really nice, because the Django tests are way faster, but it does mean you can forget to update messages.
process
presubmit to check that messages are updated since server tests updates them when it runs it used to be nearly impossible to neglect this since they d get updated as long as you ran the server tests locally before you opened your pr however with the django migration we re moving away from the current server testing framework and using django s test framework instead overall that s been really nice because the django tests are way faster but it does mean you can forget to update messages
1
14,011
16,816,475,580
IssuesEvent
2021-06-17 08:00:30
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] Responsive issue in hamburger menu button
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
Click on the hamburger menu button > Select any one of the options like location/Myaccount/Admins/Dashboard > After selecting the option, the options page should be removed ![mbh1](https://user-images.githubusercontent.com/71445210/115501583-19b23880-a291-11eb-849c-8f39ad3b05f1.png)
3.0
[PM] Responsive issue in hamburger menu button - Click on the hamburger menu button > Select any one of the options like location/Myaccount/Admins/Dashboard > After selecting the option, the options page should be removed ![mbh1](https://user-images.githubusercontent.com/71445210/115501583-19b23880-a291-11eb-849c-8f39ad3b05f1.png)
process
responsive issue in hamburger menu button click on the hamburger menu button select any one of the options like location myaccount admins dashboard after selecting the option the options page should be removed
1
37,546
4,821,129,032
IssuesEvent
2016-11-05 05:39:34
vickychijwani/quill
https://api.github.com/repos/vickychijwani/quill
opened
Support multiple accounts
design enhancement user feedback
Pro users would like the option to sign in to multiple accounts and switch between them seamlessly.
1.0
Support multiple accounts - Pro users would like the option to sign in to multiple accounts and switch between them seamlessly.
non_process
support multiple accounts pro users would like the option to sign in to multiple accounts and switch between them seamlessly
0
813,294
30,452,572,694
IssuesEvent
2023-07-16 13:35:49
Laravel-Backpack/demo
https://api.github.com/repos/Laravel-Backpack/demo
closed
v6 Demo doesn't have many new features.
enhancement Priority: SHOULD Size: M
Live v6 Demo doesn't have many new features: Few : - [Show Operation Tabs](https://backpackforlaravel.com/docs/5.x/crud-operation-show#tabs-display-columns-in-tabs) - Repeatable Column - Dropzone Column - etc. We should review the demo. - [x] List missing features, columns, and fields. - [ ] Showcase them in [Monster](https://demo-v6.backpackforlaravel.com/admin/monster)
1.0
v6 Demo doesn't have many new features. - Live v6 Demo doesn't have many new features: Few : - [Show Operation Tabs](https://backpackforlaravel.com/docs/5.x/crud-operation-show#tabs-display-columns-in-tabs) - Repeatable Column - Dropzone Column - etc. We should review the demo. - [x] List missing features, columns, and fields. - [ ] Showcase them in [Monster](https://demo-v6.backpackforlaravel.com/admin/monster)
non_process
demo doesn t have many new features live demo doesn t have many new features few repeatable column dropzone column etc we should review the demo list missing features columns and fields showcase them in
0
17,195
22,772,944,472
IssuesEvent
2022-07-08 11:51:30
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
Obsoletion notice: GO:0039649 modulation by virus of host ubiquitin-protein ligase activity
multi-species process
Dear all, The proposal has been made to obsolete GO:0039649 modulation by virus of host ubiquitin-protein ligase activity. The reason for obsletion is that this term represents a molecular function. There are no annotations to this term. This term is not in any subsets. There is one KW mapping and one UniRule: * UniProtKB-KW:KW-1123 Modulation of host E3 ubiquitin ligases by virus > GO:modulation by virus of host ubiquitin-protein ligase activity ; GO:0039649 * UniRule:UR000496164 These mappings can be moved to GO:0039648 modulation by virus of host protein ubiquitination You can comment on the ticket: https://github.com/geneontology/go-ontology/issues/23651 Thanks, Pascale
1.0
Obsoletion notice: GO:0039649 modulation by virus of host ubiquitin-protein ligase activity - Dear all, The proposal has been made to obsolete GO:0039649 modulation by virus of host ubiquitin-protein ligase activity. The reason for obsletion is that this term represents a molecular function. There are no annotations to this term. This term is not in any subsets. There is one KW mapping and one UniRule: * UniProtKB-KW:KW-1123 Modulation of host E3 ubiquitin ligases by virus > GO:modulation by virus of host ubiquitin-protein ligase activity ; GO:0039649 * UniRule:UR000496164 These mappings can be moved to GO:0039648 modulation by virus of host protein ubiquitination You can comment on the ticket: https://github.com/geneontology/go-ontology/issues/23651 Thanks, Pascale
process
obsoletion notice go modulation by virus of host ubiquitin protein ligase activity dear all the proposal has been made to obsolete go modulation by virus of host ubiquitin protein ligase activity the reason for obsletion is that this term represents a molecular function there are no annotations to this term this term is not in any subsets there is one kw mapping and one unirule uniprotkb kw kw modulation of host ubiquitin ligases by virus go modulation by virus of host ubiquitin protein ligase activity go unirule these mappings can be moved to go modulation by virus of host protein ubiquitination you can comment on the ticket thanks pascale
1
1,440
3,962,934,400
IssuesEvent
2016-05-02 18:37:13
Yoast/wordpress-seo
https://api.github.com/repos/Yoast/wordpress-seo
closed
Fatal Error on wp-admin
compatibility wait for feedback
Hi, I have the current error when going to wp-admin not logged : <b>Fatal error</b>: Uncaught Error: Call to undefined function get_current_screen() in /home/blog/public_html/wp-content/plugins/wordpress-seo/inc/wpseo-non-ajax-functions.php:467 Stack trace: #0 /home/blog/public_html/wp-includes/plugin.php(525): wpseo_admin_bar_style('') #1 /home/blog/public_html/wp-includes/script-loader.php(1076): do_action('wp_enqueue_scri...') #2 /home/blog/public_html/wp-includes/plugin.php(525): wp_enqueue_scripts('') #3 /home/blog/public_html/wp-includes/general-template.php(2528): do_action('wp_head') #4 /home/blog/public_html/wp-content/themes/highwind/header.php(31): wp_head() #5 /home/blog/public_html/wp-includes/template.php(572): require_once('/home/blog/publ...') #6 /home/blog/public_html/wp-includes/template.php(531): load_template('/home/blog/publ...', true) #7 /home/blog/public_html/wp-includes/general-template.php(45): locate_template(Array, true) #8 /home/blog/public_html/wp-content/themes/highwind/404.php(11): get_header() #9 /home/blog/public_html/wp-includes/template.php(572): require_onc in <b>/home/blog/public_html/wp-content/plugins/wordpress-seo/inc/wpseo-non-ajax-functions.php</b> on line <b>467</b><br /> Seems like get_current_screen not already ready. ### What did you expect to happen? Not having a fatal error ### What happened instead? Having a fatal error ### How can we reproduce this behavior? Go to wp-admin with wps-hide-login activated Can you provide a link to a page which shows this issue? Nope ### Technical info * WordPress version: 4.5.1 * Yoast SEO version: 3.2.3
True
Fatal Error on wp-admin - Hi, I have the current error when going to wp-admin not logged : <b>Fatal error</b>: Uncaught Error: Call to undefined function get_current_screen() in /home/blog/public_html/wp-content/plugins/wordpress-seo/inc/wpseo-non-ajax-functions.php:467 Stack trace: #0 /home/blog/public_html/wp-includes/plugin.php(525): wpseo_admin_bar_style('') #1 /home/blog/public_html/wp-includes/script-loader.php(1076): do_action('wp_enqueue_scri...') #2 /home/blog/public_html/wp-includes/plugin.php(525): wp_enqueue_scripts('') #3 /home/blog/public_html/wp-includes/general-template.php(2528): do_action('wp_head') #4 /home/blog/public_html/wp-content/themes/highwind/header.php(31): wp_head() #5 /home/blog/public_html/wp-includes/template.php(572): require_once('/home/blog/publ...') #6 /home/blog/public_html/wp-includes/template.php(531): load_template('/home/blog/publ...', true) #7 /home/blog/public_html/wp-includes/general-template.php(45): locate_template(Array, true) #8 /home/blog/public_html/wp-content/themes/highwind/404.php(11): get_header() #9 /home/blog/public_html/wp-includes/template.php(572): require_onc in <b>/home/blog/public_html/wp-content/plugins/wordpress-seo/inc/wpseo-non-ajax-functions.php</b> on line <b>467</b><br /> Seems like get_current_screen not already ready. ### What did you expect to happen? Not having a fatal error ### What happened instead? Having a fatal error ### How can we reproduce this behavior? Go to wp-admin with wps-hide-login activated Can you provide a link to a page which shows this issue? Nope ### Technical info * WordPress version: 4.5.1 * Yoast SEO version: 3.2.3
non_process
fatal error on wp admin hi i have the current error when going to wp admin not logged fatal error uncaught error call to undefined function get current screen in home blog public html wp content plugins wordpress seo inc wpseo non ajax functions php stack trace home blog public html wp includes plugin php wpseo admin bar style home blog public html wp includes script loader php do action wp enqueue scri home blog public html wp includes plugin php wp enqueue scripts home blog public html wp includes general template php do action wp head home blog public html wp content themes highwind header php wp head home blog public html wp includes template php require once home blog publ home blog public html wp includes template php load template home blog publ true home blog public html wp includes general template php locate template array true home blog public html wp content themes highwind php get header home blog public html wp includes template php require onc in home blog public html wp content plugins wordpress seo inc wpseo non ajax functions php on line seems like get current screen not already ready what did you expect to happen not having a fatal error what happened instead having a fatal error how can we reproduce this behavior go to wp admin with wps hide login activated can you provide a link to a page which shows this issue nope technical info wordpress version yoast seo version
0
15,721
19,863,541,756
IssuesEvent
2022-01-22 06:37:18
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Automation Account Source Control Failure when Setting up VsoTfvc
automation/svc triaged cxp awaiting-product-team-response product-issue process-automation/subsvc Pri3
When trying to add a VsoTfvc repository to an automation account an error: Create source control failed. An error occurred while creating the source control named '[NameHere]'. Error details: The property 'branch' is required.. is generated and the instance is not created. When selecting VsoTfvc the Branch dropdown is disabled with the text "Branch is not supported for VsoTfvc". However I am able to successfully create the instance using the PowerShell New-AzAutomationSourceControl command. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 83c90e64-b615-711f-a53d-fc76606e2ecd * Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea * Content: [Use source control integration in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/source-control-integration) * Content Source: [articles/automation/source-control-integration.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/source-control-integration.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @SGSneha * Microsoft Alias: **v-ssudhir**
1.0
Automation Account Source Control Failure when Setting up VsoTfvc - When trying to add a VsoTfvc repository to an automation account an error: Create source control failed. An error occurred while creating the source control named '[NameHere]'. Error details: The property 'branch' is required.. is generated and the instance is not created. When selecting VsoTfvc the Branch dropdown is disabled with the text "Branch is not supported for VsoTfvc". However I am able to successfully create the instance using the PowerShell New-AzAutomationSourceControl command. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 83c90e64-b615-711f-a53d-fc76606e2ecd * Version Independent ID: 2d164036-6886-4440-50f7-369f99f41cea * Content: [Use source control integration in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/source-control-integration) * Content Source: [articles/automation/source-control-integration.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/source-control-integration.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @SGSneha * Microsoft Alias: **v-ssudhir**
process
automation account source control failure when setting up vsotfvc when trying to add a vsotfvc repository to an automation account an error create source control failed an error occurred while creating the source control named error details the property branch is required is generated and the instance is not created when selecting vsotfvc the branch dropdown is disabled with the text branch is not supported for vsotfvc however i am able to successfully create the instance using the powershell new azautomationsourcecontrol command document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login sgsneha microsoft alias v ssudhir
1
17,685
10,755,663,449
IssuesEvent
2019-10-31 09:37:24
microsoft/BotFramework-WebChat
https://api.github.com/repos/microsoft/BotFramework-WebChat
closed
Output build should have correct version number burnt
Bot Services Bug Pending customer-reported
## Version `master` ## Describe the bug For the burnt version number, it appears to be `undefined` instead of `4.6.0`. In PR #2182, we moved from vulnerable `babel-plugin-version-transform` to `babel-plugin-transform-inline-environment-variables`. The environment variable should be in lowercase, `npm_package_version`, instead of `NPM_PACKAGE_VERSION`. Thus, the Babel plugin did not pick up and cannot burn it into the source file. ## To Reproduce Steps to reproduce the behavior: 1. Host Web Chat with one of the daily build 1. Navigate to the host app 1. Press <kbd>F12</kbd> 1. In the Elements tab, navigate to `html/head/meta` and see the burnt version of Web Chat ## Expected behavior It should say `4.6.0` or `4.6.0-...` or `0.0.0-0` (for development build). [Bug]
1.0
Output build should have correct version number burnt - ## Version `master` ## Describe the bug For the burnt version number, it appears to be `undefined` instead of `4.6.0`. In PR #2182, we moved from vulnerable `babel-plugin-version-transform` to `babel-plugin-transform-inline-environment-variables`. The environment variable should be in lowercase, `npm_package_version`, instead of `NPM_PACKAGE_VERSION`. Thus, the Babel plugin did not pick up and cannot burn it into the source file. ## To Reproduce Steps to reproduce the behavior: 1. Host Web Chat with one of the daily build 1. Navigate to the host app 1. Press <kbd>F12</kbd> 1. In the Elements tab, navigate to `html/head/meta` and see the burnt version of Web Chat ## Expected behavior It should say `4.6.0` or `4.6.0-...` or `0.0.0-0` (for development build). [Bug]
non_process
output build should have correct version number burnt version master describe the bug for the burnt version number it appears to be undefined instead of in pr we moved from vulnerable babel plugin version transform to babel plugin transform inline environment variables the environment variable should be in lowercase npm package version instead of npm package version thus the babel plugin did not pick up and cannot burn it into the source file to reproduce steps to reproduce the behavior host web chat with one of the daily build navigate to the host app press in the elements tab navigate to html head meta and see the burnt version of web chat expected behavior it should say or or for development build
0
21,973
30,467,746,028
IssuesEvent
2023-07-17 11:38:04
hmcts/aks-auto-shutdown
https://api.github.com/repos/hmcts/aks-auto-shutdown
closed
Test - JH
Processing
### Change or Jira reference DTSPO-0004 ### Business area CFT ### Environment Sandbox ### Skip shutdown start date 11-10-2023 ### Skip shutdown end date _No response_
1.0
Test - JH - ### Change or Jira reference DTSPO-0004 ### Business area CFT ### Environment Sandbox ### Skip shutdown start date 11-10-2023 ### Skip shutdown end date _No response_
process
test jh change or jira reference dtspo business area cft environment sandbox skip shutdown start date skip shutdown end date no response
1
11,189
13,957,698,669
IssuesEvent
2020-10-24 08:12:29
alexanderkotsev/geoportal
https://api.github.com/repos/alexanderkotsev/geoportal
opened
PT: Harvesting
Geoportal Harvesting process PT - Portugal
Geoportal team, We kindly request that you start a harvesting to the Portuguese catalogue. We have made some updatings and we would like to see if they are some results of our work. Thank you! Best regards, Vanda Marcos
1.0
PT: Harvesting - Geoportal team, We kindly request that you start a harvesting to the Portuguese catalogue. We have made some updatings and we would like to see if they are some results of our work. Thank you! Best regards, Vanda Marcos
process
pt harvesting geoportal team we kindly request that you start a harvesting to the portuguese catalogue we have made some updatings and we would like to see if they are some results of our work thank you best regards vanda marcos
1
17,739
23,655,238,491
IssuesEvent
2022-08-26 10:32:42
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
When using saved questions as the source of models we should remove the row limit
Type:Bug Querying/Processor .Correctness .Backend Querying/Models Querying/Cache
**Is your feature request related to a problem? Please describe.** The query processor has a global row limit of [1048575](https://github.com/metabase/metabase/blob/edd687a9ea2ab1584a2dfff9a1bd3415e23918a5/src/metabase/query_processor/interface.clj#L7-L15). If we use a saved question as the source of the model, when the saved question query runs, we add the global limit. There are many use cases when users want to cache models with more than 1M rows. **Describe the solution you'd like** Two options; 1. Remove the limit for model cache 2. Change it to 1B (minimum), ideally 10B. **How important is this feature to you?** This is important for model cache, as the feature was designed for large models.
1.0
When using saved questions as the source of models we should remove the row limit - **Is your feature request related to a problem? Please describe.** The query processor has a global row limit of [1048575](https://github.com/metabase/metabase/blob/edd687a9ea2ab1584a2dfff9a1bd3415e23918a5/src/metabase/query_processor/interface.clj#L7-L15). If we use a saved question as the source of the model, when the saved question query runs, we add the global limit. There are many use cases when users want to cache models with more than 1M rows. **Describe the solution you'd like** Two options; 1. Remove the limit for model cache 2. Change it to 1B (minimum), ideally 10B. **How important is this feature to you?** This is important for model cache, as the feature was designed for large models.
process
when using saved questions as the source of models we should remove the row limit is your feature request related to a problem please describe the query processor has a global row limit of if we use a saved question as the source of the model when the saved question query runs we add the global limit there are many use cases when users want to cache models with more than rows describe the solution you d like two options remove the limit for model cache change it to minimum ideally how important is this feature to you this is important for model cache as the feature was designed for large models
1
7,986
11,176,840,260
IssuesEvent
2019-12-30 08:38:22
nodejs/node
https://api.github.com/repos/nodejs/node
closed
`.errno` field of errors from `child_process.execSync` is in string form instead of numeric.
child_process errors
* **Version**: v7.9.0 * **Platform**: `Linux daurn-m73 4.10.11-1-ARCH #1 SMP PREEMPT Tue Apr 18 08:39:42 CEST 2017 x86_64 GNU/Linux` * **Subsystem**: child_process The `.errno` field of errors from `child_process.execSync` is in string form instead of numeric. ``` $ node -e 'try{require("child_process").execSync("foo", {shell: "doesnt_exit"})}catch(e){console.log(e.errno)}' ENOENT ``` In this example I expect the numeric value `-2` (as it is (correctly) is for other operations such as `open`: `try{require("fs").openSync("doesnt_exist", "r")}catch(e){console.log(e.errno)}`).
1.0
`.errno` field of errors from `child_process.execSync` is in string form instead of numeric. - * **Version**: v7.9.0 * **Platform**: `Linux daurn-m73 4.10.11-1-ARCH #1 SMP PREEMPT Tue Apr 18 08:39:42 CEST 2017 x86_64 GNU/Linux` * **Subsystem**: child_process The `.errno` field of errors from `child_process.execSync` is in string form instead of numeric. ``` $ node -e 'try{require("child_process").execSync("foo", {shell: "doesnt_exit"})}catch(e){console.log(e.errno)}' ENOENT ``` In this example I expect the numeric value `-2` (as it is (correctly) is for other operations such as `open`: `try{require("fs").openSync("doesnt_exist", "r")}catch(e){console.log(e.errno)}`).
process
errno field of errors from child process execsync is in string form instead of numeric version platform linux daurn arch smp preempt tue apr cest gnu linux subsystem child process the errno field of errors from child process execsync is in string form instead of numeric node e try require child process execsync foo shell doesnt exit catch e console log e errno enoent in this example i expect the numeric value as it is correctly is for other operations such as open try require fs opensync doesnt exist r catch e console log e errno
1
3,915
6,827,749,624
IssuesEvent
2017-11-08 18:03:11
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
whenBlock could include better list of important dates for specials
status-inprocess tools-whenBlock type-enhancement
For example, metropolis, wallet hack Also -- there should be test cases for adding custom special dates.
1.0
whenBlock could include better list of important dates for specials - For example, metropolis, wallet hack Also -- there should be test cases for adding custom special dates.
process
whenblock could include better list of important dates for specials for example metropolis wallet hack also there should be test cases for adding custom special dates
1
19,929
6,796,508,132
IssuesEvent
2017-11-01 19:15:39
GetTerminus/terminus-ui
https://api.github.com/repos/GetTerminus/terminus-ui
opened
Improve webpack build workflow
Focus: build Type: chore
Currently during development, a full production build is generated on each change. This is slow and unneeded. AC: - A single command should start both the library watcher and the demo project watcher. - Only the needed files should be built on each change. - The demo should work with the library _without_ symlinking through NPM NOTES: - Possibly use gulp to better synchronize tasks - The demo project is an @angular/cli project so it will consume the library well. - When pointing the demo's `tsconfig.json` path to the `lib/` folder rather than `dist/`, it consumed the `index.ts` file rather than the `index.js`. If we can get it to consume the JS file, it may work as-is. - Currently the consuming app is only using a single type of build. We should remove unused builds to improve speed. (note: we should save the code in a gist in case we need to reimplement)
1.0
Improve webpack build workflow - Currently during development, a full production build is generated on each change. This is slow and unneeded. AC: - A single command should start both the library watcher and the demo project watcher. - Only the needed files should be built on each change. - The demo should work with the library _without_ symlinking through NPM NOTES: - Possibly use gulp to better synchronize tasks - The demo project is an @angular/cli project so it will consume the library well. - When pointing the demo's `tsconfig.json` path to the `lib/` folder rather than `dist/`, it consumed the `index.ts` file rather than the `index.js`. If we can get it to consume the JS file, it may work as-is. - Currently the consuming app is only using a single type of build. We should remove unused builds to improve speed. (note: we should save the code in a gist in case we need to reimplement)
non_process
improve webpack build workflow currently during development a full production build is generated on each change this is slow and unneeded ac a single command should start both the library watcher and the demo project watcher only the needed files should be built on each change the demo should work with the library without symlinking through npm notes possibly use gulp to better synchronize tasks the demo project is an angular cli project so it will consume the library well when pointing the demo s tsconfig json path to the lib folder rather than dist it consumed the index ts file rather than the index js if we can get it to consume the js file it may work as is currently the consuming app is only using a single type of build we should remove unused builds to improve speed note we should save the code in a gist in case we need to reimplement
0
12,780
15,163,324,872
IssuesEvent
2021-02-12 12:01:53
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Internal: Publish untested CLI at `@cli-canary` Npm dist tag
kind/feature process/candidate team/client topic: ci/cd
- We want to publish the CLI as quickly as possible, skipping any testing or other processes. - Npm dist tag: `cli-canary` - Version numbers: `2.17.0-cli-canary.44` or similar - Running as GH Actions pipeline (vs Buildkite)
1.0
Internal: Publish untested CLI at `@cli-canary` Npm dist tag - - We want to publish the CLI as quickly as possible, skipping any testing or other processes. - Npm dist tag: `cli-canary` - Version numbers: `2.17.0-cli-canary.44` or similar - Running as GH Actions pipeline (vs Buildkite)
process
internal publish untested cli at cli canary npm dist tag we want to publish the cli as quickly as possible skipping any testing or other processes npm dist tag cli canary version numbers cli canary or similar running as gh actions pipeline vs buildkite
1
721,687
24,834,711,391
IssuesEvent
2022-10-26 07:55:17
AY2223S1-CS2113-F11-4/tp
https://api.github.com/repos/AY2223S1-CS2113-F11-4/tp
closed
Modify Parser class to handle for editReason, deleteReason, viewVisit functions in VisitList
priority.Medium
1. editReason - modified ver now only accepts ui, int index (referring to overall index of visit in visitList), and String reason - enforce that reason cannot be "" 2. deleteReason - only accept ui and int index 3. viewVisit - modified version now only accepts ui, and int index
1.0
Modify Parser class to handle for editReason, deleteReason, viewVisit functions in VisitList - 1. editReason - modified ver now only accepts ui, int index (referring to overall index of visit in visitList), and String reason - enforce that reason cannot be "" 2. deleteReason - only accept ui and int index 3. viewVisit - modified version now only accepts ui, and int index
non_process
modify parser class to handle for editreason deletereason viewvisit functions in visitlist editreason modified ver now only accepts ui int index referring to overall index of visit in visitlist and string reason enforce that reason cannot be deletereason only accept ui and int index viewvisit modified version now only accepts ui and int index
0
344
2,793,269,669
IssuesEvent
2015-05-11 09:48:11
ecodistrict/IDSSDashboard
https://api.github.com/repos/ecodistrict/IDSSDashboard
closed
Change importance of each KPI
enhancement form feedback 09102014 process step: assess alternatives
There should be room for ‘schuifjes’ where you can change the importance of each KPI”s and see the difference (that is more or less the multi actor multi criteria analysis)
1.0
Change importance of each KPI - There should be room for ‘schuifjes’ where you can change the importance of each KPI”s and see the difference (that is more or less the multi actor multi criteria analysis)
process
change importance of each kpi there should be room for ‘schuifjes’ where you can change the importance of each kpi”s and see the difference that is more or less the multi actor multi criteria analysis
1
9,798
6,996,119,446
IssuesEvent
2017-12-15 22:29:11
dotnet/roslyn
https://api.github.com/repos/dotnet/roslyn
closed
AnalyzerDependencyCheckingService.CheckForConflictsAsync results in 15 seconds of blocked UI opening Roslyn
Area-Analyzers Tenet-Performance Urgency-Soon
**Version Used**: Visual Studio 2017 version 15.5 **Steps to Reproduce**: 1. Open a large solution where there are analyzer diagnostics in a "leaf" project. **Expected Behavior**: Solution should open in a reasonable time **Actual Behavior**: It's very slow, because creating the diagnostic requires a full compilation to be built and then examined for source level suppressions, and this happens synchronously on the UI thread as part of solution load. `microsoft.visualstudio.languageservices <<microsoft.visualstudio.languageservices!Microsoft.VisualStudio.LanguageServices.Implementation.AnalyzerDependencyCheckingService+<CheckForConflictsAsync>d__14.MoveNext()>> | 11.2 | 9,363.414 | 4,087`
True
AnalyzerDependencyCheckingService.CheckForConflictsAsync results in 15 seconds of blocked UI opening Roslyn - **Version Used**: Visual Studio 2017 version 15.5 **Steps to Reproduce**: 1. Open a large solution where there are analyzer diagnostics in a "leaf" project. **Expected Behavior**: Solution should open in a reasonable time **Actual Behavior**: It's very slow, because creating the diagnostic requires a full compilation to be built and then examined for source level suppressions, and this happens synchronously on the UI thread as part of solution load. `microsoft.visualstudio.languageservices <<microsoft.visualstudio.languageservices!Microsoft.VisualStudio.LanguageServices.Implementation.AnalyzerDependencyCheckingService+<CheckForConflictsAsync>d__14.MoveNext()>> | 11.2 | 9,363.414 | 4,087`
non_process
analyzerdependencycheckingservice checkforconflictsasync results in seconds of blocked ui opening roslyn version used visual studio version steps to reproduce open a large solution where there are analyzer diagnostics in a leaf project expected behavior solution should open in a reasonable time actual behavior it s very slow because creating the diagnostic requires a full compilation to be built and then examined for source level suppressions and this happens synchronously on the ui thread as part of solution load microsoft visualstudio languageservices d movenext
0
5,165
3,517,946,533
IssuesEvent
2016-01-12 10:25:56
urho3d/Urho3D
https://api.github.com/repos/urho3d/Urho3D
opened
Improve build system to generate better Urho3D.pc file
build system enhancement
Currently the generated Urho3D.pc file is not entirely correct. There are at least two things we could improve. 1. It may still erroneously expose some of the compiler defines that are really required only when building the library and not when using the library. 2. The linker flags are erroneously prepared as if user would always want to link against the library statically, i.e. we actually need less linker flags when linking dynamically.
1.0
Improve build system to generate better Urho3D.pc file - Currently the generated Urho3D.pc file is not entirely correct. There are at least two things we could improve. 1. It may still erroneously expose some of the compiler defines that are really required only when building the library and not when using the library. 2. The linker flags are erroneously prepared as if user would always want to link against the library statically, i.e. we actually need less linker flags when linking dynamically.
non_process
improve build system to generate better pc file currently the generated pc file is not entirely correct there are at least two things we could improve it may still erroneously expose some of the compiler defines that are really required only when building the library and not when using the library the linker flags are erroneously prepared as if user would always want to link against the library statically i e we actually need less linker flags when linking dynamically
0
5,192
7,973,736,041
IssuesEvent
2018-07-17 00:57:46
bugmark/exchange
https://api.github.com/repos/bugmark/exchange
closed
Team GitHub PR Training
9 discuss DEV PROCESS
I like the nvie model for managing git branches etc. - a master branch - a dev branch - a private dev branch for every developer - feature branches as necessary Would like to have a standard / trainable method. Would like to learn and use the code-review feature.
1.0
Team GitHub PR Training - I like the nvie model for managing git branches etc. - a master branch - a dev branch - a private dev branch for every developer - feature branches as necessary Would like to have a standard / trainable method. Would like to learn and use the code-review feature.
process
team github pr training i like the nvie model for managing git branches etc a master branch a dev branch a private dev branch for every developer feature branches as necessary would like to have a standard trainable method would like to learn and use the code review feature
1
1,041
3,510,764,912
IssuesEvent
2016-01-09 19:01:50
osresearch/vst
https://api.github.com/repos/osresearch/vst
opened
Processing library output can be delayed
processing protocol
The processing library doesn't check the serial port status, so it is possible for it to become very behind on rendering complex scenes. I've seen the spiral demo play through as much as a few seconds of buffered vectors as the OS's serial queues are drained.
1.0
Processing library output can be delayed - The processing library doesn't check the serial port status, so it is possible for it to become very behind on rendering complex scenes. I've seen the spiral demo play through as much as a few seconds of buffered vectors as the OS's serial queues are drained.
process
processing library output can be delayed the processing library doesn t check the serial port status so it is possible for it to become very behind on rendering complex scenes i ve seen the spiral demo play through as much as a few seconds of buffered vectors as the os s serial queues are drained
1
429,582
30,083,972,826
IssuesEvent
2023-06-29 07:09:27
tauri-apps/tauri
https://api.github.com/repos/tauri-apps/tauri
closed
[docs] Accesing AppHandle guide need to be updated
type: documentation
When I am using Tauri and try to access `AppHandle` in `tauri::command`, I realized that the interface of param is used for `AppHandle` was changed to `AppHandle<R: Runtime>`. Please update the old docs according to the changes: ```rust #[tauri::command] async fn my_custom_command(app_handle: tauri::AppHandle) { let app_dir = app_handle.path_resolver().app_dir(); use tauri::GlobalShortcutManager; app_handle.global_shortcut_manager().register("CTRL + U", move || {}); } ```
1.0
[docs] Accesing AppHandle guide need to be updated - When I am using Tauri and try to access `AppHandle` in `tauri::command`, I realized that the interface of param is used for `AppHandle` was changed to `AppHandle<R: Runtime>`. Please update the old docs according to the changes: ```rust #[tauri::command] async fn my_custom_command(app_handle: tauri::AppHandle) { let app_dir = app_handle.path_resolver().app_dir(); use tauri::GlobalShortcutManager; app_handle.global_shortcut_manager().register("CTRL + U", move || {}); } ```
non_process
accesing apphandle guide need to be updated when i am using tauri and try to access apphandle in tauri command i realized that the interface of param is used for apphandle was changed to apphandle please update the old docs according to the changes rust async fn my custom command app handle tauri apphandle let app dir app handle path resolver app dir use tauri globalshortcutmanager app handle global shortcut manager register ctrl u move
0
17,032
22,408,876,136
IssuesEvent
2022-06-18 12:08:11
cgi-js/cgi-js
https://api.github.com/repos/cgi-js/cgi-js
closed
[STYLE]: Tags for commit and pull requests to follow for standardization
process
Tags for commit and pull requests: `temp, working commit/feature, refactor, fix, test, perf, feat/milestone, build/tag, release` `[Commit:temp]: Commiting temporary nd partial working code for xyz.` `[Commit:feature]: Commit new functions and features for code for story xyz.` `[Commit:test]: Commiting tests for new functions and features for code for story xyz.` `[Commit:feat]: Commit to freeze a tag.` `[Commit:tag]: Commit to create build version - a tag.` `[Commit:build]: Commit to create build version - a tag.` `[Commit:release]: Commit to create a release from commit tag xyz.`
1.0
[STYLE]: Tags for commit and pull requests to follow for standardization - Tags for commit and pull requests: `temp, working commit/feature, refactor, fix, test, perf, feat/milestone, build/tag, release` `[Commit:temp]: Commiting temporary nd partial working code for xyz.` `[Commit:feature]: Commit new functions and features for code for story xyz.` `[Commit:test]: Commiting tests for new functions and features for code for story xyz.` `[Commit:feat]: Commit to freeze a tag.` `[Commit:tag]: Commit to create build version - a tag.` `[Commit:build]: Commit to create build version - a tag.` `[Commit:release]: Commit to create a release from commit tag xyz.`
process
tags for commit and pull requests to follow for standardization tags for commit and pull requests temp working commit feature refactor fix test perf feat milestone build tag release commiting temporary nd partial working code for xyz commit new functions and features for code for story xyz commiting tests for new functions and features for code for story xyz commit to freeze a tag commit to create build version a tag commit to create build version a tag commit to create a release from commit tag xyz
1
5,330
8,147,604,431
IssuesEvent
2018-08-22 00:43:59
gradiuscypher/security-learning-resources
https://api.github.com/repos/gradiuscypher/security-learning-resources
reopened
RFC: Design Consideration - JSON structure
RFC process
~~We'll want to make sure the project is organized and easy to find the right information, but we don't want to fragment things too much, because that could reduce discover-ability.~~ Edit: breaking these topics up. This issue will be about JSON structure. We also need to consider how to structure our JSON files, and how we should split those files, or if we want a single file for everything.
1.0
RFC: Design Consideration - JSON structure - ~~We'll want to make sure the project is organized and easy to find the right information, but we don't want to fragment things too much, because that could reduce discover-ability.~~ Edit: breaking these topics up. This issue will be about JSON structure. We also need to consider how to structure our JSON files, and how we should split those files, or if we want a single file for everything.
process
rfc design consideration json structure we ll want to make sure the project is organized and easy to find the right information but we don t want to fragment things too much because that could reduce discover ability edit breaking these topics up this issue will be about json structure we also need to consider how to structure our json files and how we should split those files or if we want a single file for everything
1
5,356
8,182,626,741
IssuesEvent
2018-08-29 06:09:00
Madek/madek
https://api.github.com/repos/Madek/madek
closed
Tests Batch Editing
Batch process accepted
- I would expect the metatags to be sorted alphabetically ![bildschirmfoto 2018-08-21 um 07 51 41](https://user-images.githubusercontent.com/3356016/44382899-116d3280-a517-11e8-926b-a9e50c2dfba6.png) //sorting is according to importance (position in Admin) - If I press "anwenden", then I do not see any kind of notice on the tile itself. I would like to see a change in colour or the button "anwenden" should vanish. ![bildschirmfoto 2018-08-21 um 07 54 45](https://user-images.githubusercontent.com/3356016/44383044-93f5f200-a517-11e8-9dd8-8d37aaf004dc.png) //we'll leave it as it is. - Can you make the popup bigger, so the Derivates is not on a single line? ![bildschirmfoto 2018-08-21 um 08 03 46](https://user-images.githubusercontent.com/3356016/44383376-d966ef00-a518-11e8-9ff4-7f486e8a1158.png) //will not do anything at the moment - It would be nice, if the amount of updated media entries would be added to the green message (Erfolgsmeldung) and if the message would not vanish //will be done ![bildschirmfoto 2018-08-21 um 08 00 22](https://user-images.githubusercontent.com/3356016/44383231-547bd580-a518-11e8-90be-227af1e856d9.png) - If the filter is opened, it is hard to differentiate which is the filter and which are the fields to pick. ![bildschirmfoto 2018-08-21 um 08 10 20](https://user-images.githubusercontent.com/3356016/44383651-acffa280-a519-11e8-8c8e-1d8a6ecdceba.png) //Philipp will have a look at this - Please write "Medien Einträgen" together "Medieneinträgen" ![bildschirmfoto 2018-08-21 um 08 15 25](https://user-images.githubusercontent.com/3356016/44383837-6b232c00-a51a-11e8-8b4a-937e547752d9.png) //done
1.0
Tests Batch Editing - - I would expect the metatags to be sorted alphabetically ![bildschirmfoto 2018-08-21 um 07 51 41](https://user-images.githubusercontent.com/3356016/44382899-116d3280-a517-11e8-926b-a9e50c2dfba6.png) //sorting is according to importance (position in Admin) - If I press "anwenden", then I do not see any kind of notice on the tile itself. I would like to see a change in colour or the button "anwenden" should vanish. ![bildschirmfoto 2018-08-21 um 07 54 45](https://user-images.githubusercontent.com/3356016/44383044-93f5f200-a517-11e8-9dd8-8d37aaf004dc.png) //we'll leave it as it is. - Can you make the popup bigger, so the Derivates is not on a single line? ![bildschirmfoto 2018-08-21 um 08 03 46](https://user-images.githubusercontent.com/3356016/44383376-d966ef00-a518-11e8-9ff4-7f486e8a1158.png) //will not do anything at the moment - It would be nice, if the amount of updated media entries would be added to the green message (Erfolgsmeldung) and if the message would not vanish //will be done ![bildschirmfoto 2018-08-21 um 08 00 22](https://user-images.githubusercontent.com/3356016/44383231-547bd580-a518-11e8-90be-227af1e856d9.png) - If the filter is opened, it is hard to differentiate which is the filter and which are the fields to pick. ![bildschirmfoto 2018-08-21 um 08 10 20](https://user-images.githubusercontent.com/3356016/44383651-acffa280-a519-11e8-8c8e-1d8a6ecdceba.png) //Philipp will have a look at this - Please write "Medien Einträgen" together "Medieneinträgen" ![bildschirmfoto 2018-08-21 um 08 15 25](https://user-images.githubusercontent.com/3356016/44383837-6b232c00-a51a-11e8-8b4a-937e547752d9.png) //done
process
tests batch editing i would expect the metatags to be sorted alphabetically sorting is according to importance position in admin if i press anwenden then i do not see any kind of notice on the tile itself i would like to see a change in colour or the button anwenden should vanish we ll leave it as it is can you make the popup bigger so the derivates is not on a single line will not do anything at the moment it would be nice if the amount of updated media entries would be added to the green message erfolgsmeldung and if the message would not vanish will be done if the filter is opened it is hard to differentiate which is the filter and which are the fields to pick philipp will have a look at this please write medien einträgen together medieneinträgen done
1
381,260
26,444,136,383
IssuesEvent
2023-01-16 05:02:00
MiguelMR96/homepage
https://api.github.com/repos/MiguelMR96/homepage
closed
Create experience block template and add experiences
documentation enhancement
Create experience block and then populate it with experiences. Then push the changes from branch patch/content/secondary
1.0
Create experience block template and add experiences - Create experience block and then populate it with experiences. Then push the changes from branch patch/content/secondary
non_process
create experience block template and add experiences create experience block and then populate it with experiences then push the changes from branch patch content secondary
0
6,432
9,532,497,824
IssuesEvent
2019-04-29 18:45:16
material-components/material-components-ios
https://api.github.com/repos/material-components/material-components-ios
closed
Define how new states are defined in our components
type:Process
This was filed as an internal issue. If you are a Googler, please visit [b/131411736](http://b/131411736) for more details. <!-- Auto-generated content below, do not modify --> --- #### Internal data - Associated internal bug: [b/131411736](http://b/131411736)
1.0
Define how new states are defined in our components - This was filed as an internal issue. If you are a Googler, please visit [b/131411736](http://b/131411736) for more details. <!-- Auto-generated content below, do not modify --> --- #### Internal data - Associated internal bug: [b/131411736](http://b/131411736)
process
define how new states are defined in our components this was filed as an internal issue if you are a googler please visit for more details internal data associated internal bug
1
20,524
27,184,294,003
IssuesEvent
2023-02-19 02:00:06
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Fri, 17 Feb 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events There is no result ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Learning Non-Local Spatial-Angular Correlation for Light Field Image Super-Resolution - **Authors:** Zhengyu Liang, Yingqian Wang, Longguang Wang, Jungang Yang, Shilin Zhou, Yulan Guo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.08058 - **Pdf link:** https://arxiv.org/pdf/2302.08058 - **Abstract** Exploiting spatial-angular correlation is crucial to light field (LF) image super-resolution (SR), but is highly challenging due to its non-local property caused by the disparities among LF images. Although many deep neural networks (DNNs) have been developed for LF image SR and achieved continuously improved performance, existing methods cannot well leverage the long-range spatial-angular correlation and thus suffer a significant performance drop when handling scenes with large disparity variations. In this paper, we propose a simple yet effective method to learn the non-local spatial-angular correlation for LF image SR. In our method, we adopt the epipolar plane image (EPI) representation to project the 4D spatial-angular correlation onto multiple 2D EPI planes, and then develop a Transformer network with repetitive self-attention operations to learn the spatial-angular correlation by modeling the dependencies between each pair of EPI pixels. Our method can fully incorporate the information from all angular views while achieving a global receptive field along the epipolar line. We conduct extensive experiments with insightful visualizations to validate the effectiveness of our method. 
Comparative results on five public datasets show that our method not only achieves state-of-the-art SR performance, but also performs robust to disparity variations. Code is publicly available at https://github.com/ZhengyuLiang24/EPIT. ### Detecting Clouds in Multispectral Satellite Images Using Quantum-Kernel Support Vector Machines - **Authors:** Artur Miroszewski, Jakub Mielczarek, Grzegorz Czelusta, Filip Szczepanek, Bartosz Grabowski, Bertrand Le Saux, Jakub Nalepa - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Quantum Physics (quant-ph) - **Arxiv link:** https://arxiv.org/abs/2302.08270 - **Pdf link:** https://arxiv.org/pdf/2302.08270 - **Abstract** Support vector machines (SVMs) are a well-established classifier effectively deployed in an array of classification tasks. In this work, we consider extending classical SVMs with quantum kernels and applying them to satellite data analysis. The design and implementation of SVMs with quantum kernels (hybrid SVMs) are presented. Here, the pixels are mapped to the Hilbert space using a family of parameterized quantum feature maps (related to quantum kernels). The parameters are optimized to maximize the kernel target alignment. The quantum kernels have been selected such that they enabled analysis of numerous relevant properties while being able to simulate them with classical computers on a real-life large-scale dataset. Specifically, we approach the problem of cloud detection in the multispectral satellite imagery, which is one of the pivotal steps in both on-the-ground and on-board satellite image analysis processing chains. The experiments performed over the benchmark Landsat-8 multispectral dataset revealed that the simulated hybrid SVM successfully classifies satellite images with accuracy comparable to the classical SVM with the RBF kernel for large datasets. Interestingly, for large datasets, the high accuracy was also observed for the simple quantum kernels, lacking quantum entanglement. 
### Unpaired Image-to-Image Translation with Limited Data to Reveal Subtle Phenotypes - **Authors:** Anis Bourou, Auguste Genovesio - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2302.08503 - **Pdf link:** https://arxiv.org/pdf/2302.08503 - **Abstract** Unpaired image-to-image translation methods aim at learning a mapping of images from a source domain to a target domain. Recently, these methods proved to be very useful in biological applications to display subtle phenotypic cell variations otherwise invisible to the human eye. However, current models require a large number of images to be trained, while most microscopy experiments remain limited in the number of images they can produce. In this work, we present an improved CycleGAN architecture that employs self-supervised discriminators to alleviate the need for numerous images. We demonstrate quantitatively and qualitatively that the proposed approach outperforms the CycleGAN baseline, including when it is combined with differentiable augmentations. We also provide results obtained with small biological datasets on obvious and non-obvious cell phenotype variations, demonstrating a straightforward application of this method. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW There is no result ## Keyword: raw image There is no result
2.0
New submissions for Fri, 17 Feb 23 - ## Keyword: events There is no result ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Learning Non-Local Spatial-Angular Correlation for Light Field Image Super-Resolution - **Authors:** Zhengyu Liang, Yingqian Wang, Longguang Wang, Jungang Yang, Shilin Zhou, Yulan Guo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.08058 - **Pdf link:** https://arxiv.org/pdf/2302.08058 - **Abstract** Exploiting spatial-angular correlation is crucial to light field (LF) image super-resolution (SR), but is highly challenging due to its non-local property caused by the disparities among LF images. Although many deep neural networks (DNNs) have been developed for LF image SR and achieved continuously improved performance, existing methods cannot well leverage the long-range spatial-angular correlation and thus suffer a significant performance drop when handling scenes with large disparity variations. In this paper, we propose a simple yet effective method to learn the non-local spatial-angular correlation for LF image SR. In our method, we adopt the epipolar plane image (EPI) representation to project the 4D spatial-angular correlation onto multiple 2D EPI planes, and then develop a Transformer network with repetitive self-attention operations to learn the spatial-angular correlation by modeling the dependencies between each pair of EPI pixels. Our method can fully incorporate the information from all angular views while achieving a global receptive field along the epipolar line. We conduct extensive experiments with insightful visualizations to validate the effectiveness of our method. 
Comparative results on five public datasets show that our method not only achieves state-of-the-art SR performance, but also performs robust to disparity variations. Code is publicly available at https://github.com/ZhengyuLiang24/EPIT. ### Detecting Clouds in Multispectral Satellite Images Using Quantum-Kernel Support Vector Machines - **Authors:** Artur Miroszewski, Jakub Mielczarek, Grzegorz Czelusta, Filip Szczepanek, Bartosz Grabowski, Bertrand Le Saux, Jakub Nalepa - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Quantum Physics (quant-ph) - **Arxiv link:** https://arxiv.org/abs/2302.08270 - **Pdf link:** https://arxiv.org/pdf/2302.08270 - **Abstract** Support vector machines (SVMs) are a well-established classifier effectively deployed in an array of classification tasks. In this work, we consider extending classical SVMs with quantum kernels and applying them to satellite data analysis. The design and implementation of SVMs with quantum kernels (hybrid SVMs) are presented. Here, the pixels are mapped to the Hilbert space using a family of parameterized quantum feature maps (related to quantum kernels). The parameters are optimized to maximize the kernel target alignment. The quantum kernels have been selected such that they enabled analysis of numerous relevant properties while being able to simulate them with classical computers on a real-life large-scale dataset. Specifically, we approach the problem of cloud detection in the multispectral satellite imagery, which is one of the pivotal steps in both on-the-ground and on-board satellite image analysis processing chains. The experiments performed over the benchmark Landsat-8 multispectral dataset revealed that the simulated hybrid SVM successfully classifies satellite images with accuracy comparable to the classical SVM with the RBF kernel for large datasets. Interestingly, for large datasets, the high accuracy was also observed for the simple quantum kernels, lacking quantum entanglement. 
### Unpaired Image-to-Image Translation with Limited Data to Reveal Subtle Phenotypes - **Authors:** Anis Bourou, Auguste Genovesio - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2302.08503 - **Pdf link:** https://arxiv.org/pdf/2302.08503 - **Abstract** Unpaired image-to-image translation methods aim at learning a mapping of images from a source domain to a target domain. Recently, these methods proved to be very useful in biological applications to display subtle phenotypic cell variations otherwise invisible to the human eye. However, current models require a large number of images to be trained, while most microscopy experiments remain limited in the number of images they can produce. In this work, we present an improved CycleGAN architecture that employs self-supervised discriminators to alleviate the need for numerous images. We demonstrate quantitatively and qualitatively that the proposed approach outperforms the CycleGAN baseline, including when it is combined with differentiable augmentations. We also provide results obtained with small biological datasets on obvious and non-obvious cell phenotype variations, demonstrating a straightforward application of this method. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW There is no result ## Keyword: raw image There is no result
process
new submissions for fri feb keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp learning non local spatial angular correlation for light field image super resolution authors zhengyu liang yingqian wang longguang wang jungang yang shilin zhou yulan guo subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract exploiting spatial angular correlation is crucial to light field lf image super resolution sr but is highly challenging due to its non local property caused by the disparities among lf images although many deep neural networks dnns have been developed for lf image sr and achieved continuously improved performance existing methods cannot well leverage the long range spatial angular correlation and thus suffer a significant performance drop when handling scenes with large disparity variations in this paper we propose a simple yet effective method to learn the non local spatial angular correlation for lf image sr in our method we adopt the epipolar plane image epi representation to project the spatial angular correlation onto multiple epi planes and then develop a transformer network with repetitive self attention operations to learn the spatial angular correlation by modeling the dependencies between each pair of epi pixels our method can fully incorporate the information from all angular views while achieving a global receptive field along the epipolar line we conduct extensive experiments with insightful visualizations to validate the effectiveness of our method comparative results on five public datasets show that our method not only achieves state of the art sr performance but also performs robust to disparity variations code is publicly available at detecting clouds in multispectral satellite images using quantum kernel support vector machines 
authors artur miroszewski jakub mielczarek grzegorz czelusta filip szczepanek bartosz grabowski bertrand le saux jakub nalepa subjects computer vision and pattern recognition cs cv quantum physics quant ph arxiv link pdf link abstract support vector machines svms are a well established classifier effectively deployed in an array of classification tasks in this work we consider extending classical svms with quantum kernels and applying them to satellite data analysis the design and implementation of svms with quantum kernels hybrid svms are presented here the pixels are mapped to the hilbert space using a family of parameterized quantum feature maps related to quantum kernels the parameters are optimized to maximize the kernel target alignment the quantum kernels have been selected such that they enabled analysis of numerous relevant properties while being able to simulate them with classical computers on a real life large scale dataset specifically we approach the problem of cloud detection in the multispectral satellite imagery which is one of the pivotal steps in both on the ground and on board satellite image analysis processing chains the experiments performed over the benchmark landsat multispectral dataset revealed that the simulated hybrid svm successfully classifies satellite images with accuracy comparable to the classical svm with the rbf kernel for large datasets interestingly for large datasets the high accuracy was also observed for the simple quantum kernels lacking quantum entanglement unpaired image to image translation with limited data to reveal subtle phenotypes authors anis bourou auguste genovesio subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract unpaired image to image translation methods aim at learning a mapping of images from a source domain to a target domain recently these methods proved to be very useful in biological applications to display subtle phenotypic cell 
variations otherwise invisible to the human eye however current models require a large number of images to be trained while most microscopy experiments remain limited in the number of images they can produce in this work we present an improved cyclegan architecture that employs self supervised discriminators to alleviate the need for numerous images we demonstrate quantitatively and qualitatively that the proposed approach outperforms the cyclegan baseline including when it is combined with differentiable augmentations we also provide results obtained with small biological datasets on obvious and non obvious cell phenotype variations demonstrating a straightforward application of this method keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw there is no result keyword raw image there is no result
1
184,795
6,716,203,300
IssuesEvent
2017-10-14 04:15:49
craftercms/craftercms
https://api.github.com/repos/craftercms/craftercms
closed
[studio-ui] Update the status in tooltips to match the workflow status listed in the dashboard
bug Priority: Medium
Update the status in tooltips to match the workflow status listed in the dashboard. Here's a list:

| Existing Status | Status to be displayed (to match dashboard workflow status) |
| ------------------------ | --------------------------------------------------------------|
| Deleted | Deleted |
| Disabled | Disabled |
| Processing | Processing |
| In Progress | Edited |
| Live | Live |
| Submitted | In Workflow |
| Scheduled | Scheduled |

The following existing status will be removed:

- Submitted for Delete
- Scheduled for Delete

<img width="405" alt="screen shot 2017-09-28 at 10 06 28 am" src="https://user-images.githubusercontent.com/25483966/30971015-cb10517a-a434-11e7-872f-e9706663b786.png">
1.0
[studio-ui] Update the status in tooltips to match the workflow status listed in the dashboard - Update the status in tooltips to match the workflow status listed in the dashboard. Here's a list:

| Existing Status | Status to be displayed (to match dashboard workflow status) |
| ------------------------ | --------------------------------------------------------------|
| Deleted | Deleted |
| Disabled | Disabled |
| Processing | Processing |
| In Progress | Edited |
| Live | Live |
| Submitted | In Workflow |
| Scheduled | Scheduled |

The following existing status will be removed:

- Submitted for Delete
- Scheduled for Delete

<img width="405" alt="screen shot 2017-09-28 at 10 06 28 am" src="https://user-images.githubusercontent.com/25483966/30971015-cb10517a-a434-11e7-872f-e9706663b786.png">
non_process
update the status in tooltips to match the workflow status listed in the dashboard update the status in tooltips to match the workflow status listed in the dashboard here s a list existing status status to be displayed to match dashboard workflow status deleted deleted disabled disabled processing processing in progress edited live live submitted in workflow scheduled scheduled the following existing status will be removed submitted for delete scheduled for delete img width alt screen shot at am src
0
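The tooltip-to-dashboard status mapping in the studio-ui record above amounts to a small lookup table. A minimal sketch in plain JavaScript — the table contents come from the issue body, but the names (`DASHBOARD_STATUS`, `dashboardStatus`) are illustrative, not taken from the Crafter codebase:

```javascript
// Map internal item states to the workflow status shown on the dashboard.
// Key/value pairs follow the table in the issue body; names are hypothetical.
const DASHBOARD_STATUS = {
  'Deleted':     'Deleted',
  'Disabled':    'Disabled',
  'Processing':  'Processing',
  'In Progress': 'Edited',
  'Live':        'Live',
  'Submitted':   'In Workflow',
  'Scheduled':   'Scheduled',
};

// Fall back to the raw status for anything unmapped
// (e.g. the removed "Submitted for Delete" / "Scheduled for Delete" states).
function dashboardStatus(existing) {
  return DASHBOARD_STATUS[existing] ?? existing;
}

console.log(dashboardStatus('In Progress')); // "Edited"
console.log(dashboardStatus('Submitted'));   // "In Workflow"
```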
19,944
26,417,210,153
IssuesEvent
2023-01-13 16:53:11
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
opened
Release 6.1.0 - $March $2023
P1 type: process release team-OSS
# Status of Bazel 6.1.0 - Expected release date: [2023-03-01] - [List of release blockers](https://github.com/bazelbuild/bazel/milestone/46) To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone. To cherry-pick a mainline commit into 6.1, simply send a PR against the `release-6.1.0` branch. **Task list:** - [ ] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. --> - [ ] Send the release announcement PR for review - [ ] Push the release and notify package maintainers - [ ] Update the documentation - [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
1.0
Release 6.1.0 - $March $2023 - # Status of Bazel 6.1.0 - Expected release date: [2023-03-01] - [List of release blockers](https://github.com/bazelbuild/bazel/milestone/46) To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone. To cherry-pick a mainline commit into 6.1, simply send a PR against the `release-6.1.0` branch. **Task list:** - [ ] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. --> - [ ] Send the release announcement PR for review - [ ] Push the release and notify package maintainers - [ ] Update the documentation - [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
process
release march status of bazel expected release date to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into simply send a pr against the release branch task list create send the release announcement pr for review push the release and notify package maintainers update the documentation update the
1
16,012
20,188,224,700
IssuesEvent
2022-02-11 01:19:31
savitamittalmsft/WAS-SEC-TEST
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
opened
Involve the security team in the development process
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Operational Model & DevOps Roles & Responsibilities
<a href="https://docs.microsoft.com/azure/cloud-adoption-framework/ready/enterprise-scale/security-governance-and-compliance#service-enablement-framework">Involve the security team in the development process</a> <p><b>Why Consider This?</b></p> There should be a process for onboarding services securely to Azure. <p><b>Context</b></p> <p><span>The onboarding process should include reviewing the configuration options to determine what logging/monitoring needs to be established, how to properly harden a resource before it goes into production. For a list of common criteria for onboarding resorces, see the Service Enablement Framework.</span></p> <p><b>Suggested Actions</b></p> <p><span>Involve security team into development process to apply a SecDevOps approach.</span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/enterprise-scale/security-governance-and-compliance#service-enablement-framework" target="_blank"><span>https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/enterprise-scale/security-governance-and-compliance#service-enablement-framework</span></a><span /></p><p><a href="https://www.microsoft.com/en-us/securityengineering/devsecops" target="_blank"><span>https://www.microsoft.com/en-us/securityengineering/devsecops</span></a><span /></p>
1.0
Involve the security team in the development process - <a href="https://docs.microsoft.com/azure/cloud-adoption-framework/ready/enterprise-scale/security-governance-and-compliance#service-enablement-framework">Involve the security team in the development process</a> <p><b>Why Consider This?</b></p> There should be a process for onboarding services securely to Azure. <p><b>Context</b></p> <p><span>The onboarding process should include reviewing the configuration options to determine what logging/monitoring needs to be established, how to properly harden a resource before it goes into production. For a list of common criteria for onboarding resorces, see the Service Enablement Framework.</span></p> <p><b>Suggested Actions</b></p> <p><span>Involve security team into development process to apply a SecDevOps approach.</span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/enterprise-scale/security-governance-and-compliance#service-enablement-framework" target="_blank"><span>https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/enterprise-scale/security-governance-and-compliance#service-enablement-framework</span></a><span /></p><p><a href="https://www.microsoft.com/en-us/securityengineering/devsecops" target="_blank"><span>https://www.microsoft.com/en-us/securityengineering/devsecops</span></a><span /></p>
process
involve the security team in the development process why consider this there should be a process for onboarding services securely to azure context the onboarding process should include reviewing the configuration options to determine what logging monitoring needs to be established how to properly harden a resource before it goes into production for a list of common criteria for onboarding resorces see the service enablement framework suggested actions involve security team into development process to apply a secdevops approach learn more
1
666,404
22,354,029,381
IssuesEvent
2022-06-15 14:17:46
VEuPathDB/web-eda
https://api.github.com/repos/VEuPathDB/web-eda
closed
In line plots, choosing a Y-axis variable with only one value throws an error message
bug high priority
The example below comes from PRISM without any filtering. I believe this error message is coming because choosing Country on the Y-axis should enable proportions, but there is only one value for the variable Country in this study. Some possible solutions: 1) remove variables with single values as options in the drop down menu, 2) give people a more distinct error message, 3) update how proportions work when there is only one possible value. ![image](https://user-images.githubusercontent.com/54597319/163398028-b31185d1-6051-4f28-b662-adcd082686fc.png)
1.0
In line plots, choosing a Y-axis variable with only one value throws an error message - The example below comes from PRISM without any filtering. I believe this error message is coming because choosing Country on the Y-axis should enable proportions, but there is only one value for the variable Country in this study. Some possible solutions: 1) remove variables with single values as options in the drop down menu, 2) give people a more distinct error message, 3) update how proportions work when there is only one possible value. ![image](https://user-images.githubusercontent.com/54597319/163398028-b31185d1-6051-4f28-b662-adcd082686fc.png)
non_process
in line plots choosing a y axis variable with only one value throws an error message the example below comes from prism without any filtering i believe this error message is coming because choosing country on the y axis should enable proportions but there is only one value for the variable country in this study some possible solutions remove variables with single values as options in the drop down menu give people a more distinct error message update how proportions work when there is only one possible value
0
126,928
18,007,153,272
IssuesEvent
2021-09-16 02:10:22
tomdgl397/goof
https://api.github.com/repos/tomdgl397/goof
opened
CVE-2021-23438 (High) detected in mpath-0.1.1.tgz
security vulnerability
## CVE-2021-23438 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mpath-0.1.1.tgz</b></p></summary> <p>{G,S}et object values using MongoDB path notation</p> <p>Library home page: <a href="https://registry.npmjs.org/mpath/-/mpath-0.1.1.tgz">https://registry.npmjs.org/mpath/-/mpath-0.1.1.tgz</a></p> <p>Path to dependency file: goof/package.json</p> <p>Path to vulnerable library: goof/node_modules/mpath/package.json</p> <p> Dependency Hierarchy: - mongoose-4.2.4.tgz (Root Library) - :x: **mpath-0.1.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/tomdgl397/goof/commit/2347df63add5cc26408e39c0cd1cd9ba860e223f">2347df63add5cc26408e39c0cd1cd9ba860e223f</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package mpath before 0.8.4. A type confusion vulnerability can lead to a bypass of CVE-2018-16490. In particular, the condition ignoreProperties.indexOf(parts[i]) !== -1 returns -1 if parts[i] is ['__proto__']. This is because the method that has been called if the input is an array is Array.prototype.indexOf() and not String.prototype.indexOf(). They behave differently depending on the type of the input. 
<p>Publish Date: 2021-09-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23438>CVE-2021-23438</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23438">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23438</a></p> <p>Release Date: 2021-09-01</p> <p>Fix Resolution: mpath - 0.8.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-23438 (High) detected in mpath-0.1.1.tgz - ## CVE-2021-23438 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mpath-0.1.1.tgz</b></p></summary> <p>{G,S}et object values using MongoDB path notation</p> <p>Library home page: <a href="https://registry.npmjs.org/mpath/-/mpath-0.1.1.tgz">https://registry.npmjs.org/mpath/-/mpath-0.1.1.tgz</a></p> <p>Path to dependency file: goof/package.json</p> <p>Path to vulnerable library: goof/node_modules/mpath/package.json</p> <p> Dependency Hierarchy: - mongoose-4.2.4.tgz (Root Library) - :x: **mpath-0.1.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/tomdgl397/goof/commit/2347df63add5cc26408e39c0cd1cd9ba860e223f">2347df63add5cc26408e39c0cd1cd9ba860e223f</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package mpath before 0.8.4. A type confusion vulnerability can lead to a bypass of CVE-2018-16490. In particular, the condition ignoreProperties.indexOf(parts[i]) !== -1 returns -1 if parts[i] is ['__proto__']. This is because the method that has been called if the input is an array is Array.prototype.indexOf() and not String.prototype.indexOf(). They behave differently depending on the type of the input. 
<p>Publish Date: 2021-09-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23438>CVE-2021-23438</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23438">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23438</a></p> <p>Release Date: 2021-09-01</p> <p>Fix Resolution: mpath - 0.8.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in mpath tgz cve high severity vulnerability vulnerable library mpath tgz g s et object values using mongodb path notation library home page a href path to dependency file goof package json path to vulnerable library goof node modules mpath package json dependency hierarchy mongoose tgz root library x mpath tgz vulnerable library found in head commit a href vulnerability details this affects the package mpath before a type confusion vulnerability can lead to a bypass of cve in particular the condition ignoreproperties indexof parts returns if parts is this is because the method that has been called if the input is an array is array prototype indexof and not string prototype indexof they behave differently depending on the type of the input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution mpath step up your open source security game with whitesource
0
5,161
7,933,598,184
IssuesEvent
2018-07-08 08:37:21
rubberduck-vba/Rubberduck
https://api.github.com/repos/rubberduck-vba/Rubberduck
opened
Parse Error when CDecl specified in Declare statement
bug edge-case parse-tree-processing
It seems the VBA team were planning on adding support for `__cdecl` calling conventions, and allow for `CDecl` to be specified as part of a `Declare` statement, even though it seems that `__cdecl` support never quite got delivered. This statement compiles, but causes an RD parse error: `Private Declare Sub Beep CDecl Lib "kernel32" (dwFreq As Any, dwDuration As Any)` The parser needs to allow for these statements, but should also perhaps have an inspection for removing the `CDecl` as it's not implemented by VBA.
1.0
Parse Error when CDecl specified in Declare statement - It seems the VBA team were planning on adding support for `__cdecl` calling conventions, and allow for `CDecl` to be specified as part of a `Declare` statement, even though it seems that `__cdecl` support never quite got delivered. This statement compiles, but causes an RD parse error: `Private Declare Sub Beep CDecl Lib "kernel32" (dwFreq As Any, dwDuration As Any)` The parser needs to allow for these statements, but should also perhaps have an inspection for removing the `CDecl` as it's not implemented by VBA.
process
parse error when cdecl specified in declare statement it seems the vba team were planning on adding support for cdecl calling conventions and allow for cdecl to be specified as part of a declare statement even though it seems that cdecl support never quite got delivered this statement compiles but causes an rd parse error private declare sub beep cdecl lib dwfreq as any dwduration as any the parser needs to allow for these statements but should also perhaps have an inspection for removing the cdecl as it s not implemented by vba
1
6,474
9,548,313,324
IssuesEvent
2019-05-02 04:33:14
PennyDreadfulMTG/perf-reports
https://api.github.com/repos/PennyDreadfulMTG/perf-reports
closed
500 error at /api/gitpull
CalledProcessError logsite wontfix
Command '['git', 'fetch']' returned non-zero exit status 1. Reported on logsite by logged_out ``` -------------------------------------------------------------------------------- Request Method: POST Path: /api/gitpull? Cookies: {} Endpoint: process_github_webhook View Args: {} Person: logged_out Referrer: None Request Data: {} Host: logs.pennydreadfulmagic.com Accept-Encoding: gzip Cf-Ipcountry: US X-Forwarded-For: 192.30.252.39, 172.68.65.92 Cf-Ray: 49d0212bfcbac18b-IAD X-Forwarded-Proto: https Cf-Visitor: {"scheme":"https"} Accept: */* User-Agent: GitHub-Hookshot/e18313a X-Github-Event: push X-Github-Delivery: 0d9b6b98-1e13-11e9-8631-6394f2dbb20d Content-Type: application/json Cf-Connecting-Ip: 192.30.252.39 Cdn-Loop: cloudflare X-Forwarded-Host: logs.pennydreadfulmagic.com X-Forwarded-Server: logs.pennydreadfulmagic.com Connection: Keep-Alive Content-Length: 18414 ``` -------------------------------------------------------------------------------- CalledProcessError Command '['git', 'fetch']' returned non-zero exit status 1. 
Stack Trace: ``` File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2309, in __call__ return self.wsgi_app(environ, start_response) File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2295, in wsgi_app response = self.handle_exception(e) File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app response = self.full_dispatch_request() File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request rv = self.handle_user_exception(e) File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception reraise(exc_type, exc_value, tb) File "/home/discord/.local/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise raise value File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request rv = self.dispatch_request() File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "./shared_web/api.py", line 18, in process_github_webhook subprocess.check_output(['git', 'fetch']) File "/usr/lib64/python3.6/subprocess.py", line 336, in check_output **kwargs).stdout File "/usr/lib64/python3.6/subprocess.py", line 418, in run output=stdout, stderr=stderr) ``` Exception_hash: bba63d1d624654822787d3b67659fd7fba1b1bab
1.0
500 error at /api/gitpull - Command '['git', 'fetch']' returned non-zero exit status 1. Reported on logsite by logged_out ``` -------------------------------------------------------------------------------- Request Method: POST Path: /api/gitpull? Cookies: {} Endpoint: process_github_webhook View Args: {} Person: logged_out Referrer: None Request Data: {} Host: logs.pennydreadfulmagic.com Accept-Encoding: gzip Cf-Ipcountry: US X-Forwarded-For: 192.30.252.39, 172.68.65.92 Cf-Ray: 49d0212bfcbac18b-IAD X-Forwarded-Proto: https Cf-Visitor: {"scheme":"https"} Accept: */* User-Agent: GitHub-Hookshot/e18313a X-Github-Event: push X-Github-Delivery: 0d9b6b98-1e13-11e9-8631-6394f2dbb20d Content-Type: application/json Cf-Connecting-Ip: 192.30.252.39 Cdn-Loop: cloudflare X-Forwarded-Host: logs.pennydreadfulmagic.com X-Forwarded-Server: logs.pennydreadfulmagic.com Connection: Keep-Alive Content-Length: 18414 ``` -------------------------------------------------------------------------------- CalledProcessError Command '['git', 'fetch']' returned non-zero exit status 1. 
Stack Trace: ``` File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2309, in __call__ return self.wsgi_app(environ, start_response) File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2295, in wsgi_app response = self.handle_exception(e) File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app response = self.full_dispatch_request() File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request rv = self.handle_user_exception(e) File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception reraise(exc_type, exc_value, tb) File "/home/discord/.local/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise raise value File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request rv = self.dispatch_request() File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "./shared_web/api.py", line 18, in process_github_webhook subprocess.check_output(['git', 'fetch']) File "/usr/lib64/python3.6/subprocess.py", line 336, in check_output **kwargs).stdout File "/usr/lib64/python3.6/subprocess.py", line 418, in run output=stdout, stderr=stderr) ``` Exception_hash: bba63d1d624654822787d3b67659fd7fba1b1bab
process
error at api gitpull command returned non zero exit status reported on logsite by logged out request method post path api gitpull cookies endpoint process github webhook view args person logged out referrer none request data host logs pennydreadfulmagic com accept encoding gzip cf ipcountry us x forwarded for cf ray iad x forwarded proto https cf visitor scheme https accept user agent github hookshot x github event push x github delivery content type application json cf connecting ip cdn loop cloudflare x forwarded host logs pennydreadfulmagic com x forwarded server logs pennydreadfulmagic com connection keep alive content length calledprocesserror command returned non zero exit status stack trace file home discord local lib site packages flask app py line in call return self wsgi app environ start response file home discord local lib site packages flask app py line in wsgi app response self handle exception e file home discord local lib site packages flask app py line in wsgi app response self full dispatch request file home discord local lib site packages flask app py line in full dispatch request rv self handle user exception e file home discord local lib site packages flask app py line in handle user exception reraise exc type exc value tb file home discord local lib site packages flask compat py line in reraise raise value file home discord local lib site packages flask app py line in full dispatch request rv self dispatch request file home discord local lib site packages flask app py line in dispatch request return self view functions req view args file shared web api py line in process github webhook subprocess check output file usr subprocess py line in check output kwargs stdout file usr subprocess py line in run output stdout stderr stderr exception hash
1
326,784
24,101,256,875
IssuesEvent
2022-09-20 01:04:18
Ben-Chapman/EVFinder
https://api.github.com/repos/Ben-Chapman/EVFinder
opened
Documentation missing for the Vue Webapp
documentation
Documentation is missing for much of the frontend Vue webapp. Creating an issue for tracking.
1.0
Documentation missing for the Vue Webapp - Documentation is missing for much of the frontend Vue webapp. Creating an issue for tracking.
non_process
documentation missing for the vue webapp documentation is missing for much of the frontend vue webapp creating an issue for tracking
0
119,958
15,681,805,179
IssuesEvent
2021-03-25 06:09:10
aws/aws-cdk
https://api.github.com/repos/aws/aws-cdk
closed
📊Tracking: AWS Managed Blockchain
@aws-cdk/aws-managedblockchain closed-for-staleness management/tracking needs-design
Add your +1 👍 to help us prioritize high-level constructs for this service --- ### Overview: <!-- Summary of the service (leverage the service’s product page for the text) and a link to the relevant AWS Docs. This should be the same text that we put at the top of the package’s README.md. Also include a link to the service’s CDK Construct Library API reference page. --> Amazon Managed Blockchain is a fully managed service for creating and managing blockchain networks using open source frameworks. Currently, the Hyperledger Fabric open source framework is supported. Blockchain allows you to build applications where multiple parties can securely and transparently run transactions and share data without the need for a trusted, central authority. [AWS Docs](https://docs.aws.amazon.com/managed-blockchain/latest/managementguide/what-is-managed-blockchain.html) <!-- replace `url` with link to the relevant AWS Docs --> ### Maturity: CloudFormation Resources Only <!-- The valid maturity states are: CloudFormation Resources Only, Experimental, Developer Preview, Stable --> See the [AWS Construct Library Module Lifecycle doc](https://github.com/aws/aws-cdk-rfcs/blob/master/text/0107-construct-library-module-lifecycle.md) for more information about maturity levels. ### Implementation: <!-- Checklist of use cases, constructs, features (such as grant methods) that will ship in this package (not required until the issue is added to the public roadmap) - [ ] - [ ] --> See the [CDK API Reference](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-managedblockchain-readme.html) for more implementation details.<!-- replace `url` with link to the service's CDK API reference --> ### Issue list: <!-- e.g. 
checklist of links to feature requests, bugs, and PRs that are in scope for GA release of this module (not required until the issues is added to the public roadmap) - [ ] - [ ] --> <!-- Labels to add: - package/[name] (create new labels if they don’t already exist) - needs-design (if cfn-only) - management/roadmap (when added to the roadmap) - in-progress (when added to “working on it” column of the roadmap) --> --- This is a 📊Tracking Issue
1.0
📊Tracking: AWS Managed Blockchain - Add your +1 👍 to help us prioritize high-level constructs for this service --- ### Overview: <!-- Summary of the service (leverage the service’s product page for the text) and a link to the relevant AWS Docs. This should be the same text that we put at the top of the package’s README.md. Also include a link to the service’s CDK Construct Library API reference page. --> Amazon Managed Blockchain is a fully managed service for creating and managing blockchain networks using open source frameworks. Currently, the Hyperledger Fabric open source framework is supported. Blockchain allows you to build applications where multiple parties can securely and transparently run transactions and share data without the need for a trusted, central authority. [AWS Docs](https://docs.aws.amazon.com/managed-blockchain/latest/managementguide/what-is-managed-blockchain.html) <!-- replace `url` with link to the relevant AWS Docs --> ### Maturity: CloudFormation Resources Only <!-- The valid maturity states are: CloudFormation Resources Only, Experimental, Developer Preview, Stable --> See the [AWS Construct Library Module Lifecycle doc](https://github.com/aws/aws-cdk-rfcs/blob/master/text/0107-construct-library-module-lifecycle.md) for more information about maturity levels. ### Implementation: <!-- Checklist of use cases, constructs, features (such as grant methods) that will ship in this package (not required until the issue is added to the public roadmap) - [ ] - [ ] --> See the [CDK API Reference](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-managedblockchain-readme.html) for more implementation details.<!-- replace `url` with link to the service's CDK API reference --> ### Issue list: <!-- e.g. 
checklist of links to feature requests, bugs, and PRs that are in scope for GA release of this module (not required until the issues is added to the public roadmap) - [ ] - [ ] --> <!-- Labels to add: - package/[name] (create new labels if they don’t already exist) - needs-design (if cfn-only) - management/roadmap (when added to the roadmap) - in-progress (when added to “working on it” column of the roadmap) --> --- This is a 📊Tracking Issue
non_process
📊tracking aws managed blockchain add your 👍 to help us prioritize high level constructs for this service overview summary of the service leverage the service’s product page for the text and a link to the relevant aws docs this should be the same text that we put at the top of the package’s readme md also include a link to the service’s cdk construct library api reference page amazon managed blockchain is a fully managed service for creating and managing blockchain networks using open source frameworks currently the hyperledger fabric open source framework is supported blockchain allows you to build applications where multiple parties can securely and transparently run transactions and share data without the need for a trusted central authority maturity cloudformation resources only the valid maturity states are cloudformation resources only experimental developer preview stable see the for more information about maturity levels implementation checklist of use cases constructs features such as grant methods that will ship in this package not required until the issue is added to the public roadmap see the for more implementation details issue list e g checklist of links to feature requests bugs and prs that are in scope for ga release of this module not required until the issues is added to the public roadmap labels to add package create new labels if they don’t already exist needs design if cfn only management roadmap when added to the roadmap in progress when added to “working on it” column of the roadmap this is a 📊tracking issue
0
43,757
11,835,086,216
IssuesEvent
2020-03-23 10:03:02
contao/contao
https://api.github.com/repos/contao/contao
closed
Using role_hierarchy in Symfony security
defect
**Affected version(s)** Contao 4.9 **Description** With `role_hierarchy` configured in Symfony security `config/packages/security.yaml`, the `security.access.simple_role_voter` gets removed and therefore is not available anymore: ``` The service "contao.security.token_checker" has a dependency on a non-existent service "security.access.simple_role_voter". ``` See also https://contao.slack.com/archives/CK4J0KNDB/p1584880943432600?thread_ts=1584879877.425300&cid=CK4J0KNDB ~I'm working on a PR.~ See #1549.
1.0
Using role_hierarchy in Symfony security - **Affected version(s)** Contao 4.9 **Description** With `role_hierarchy` configured in Symfony security `config/packages/security.yaml`, the `security.access.simple_role_voter` gets removed and therefore is not available anymore: ``` The service "contao.security.token_checker" has a dependency on a non-existent service "security.access.simple_role_voter". ``` See also https://contao.slack.com/archives/CK4J0KNDB/p1584880943432600?thread_ts=1584879877.425300&cid=CK4J0KNDB ~I'm working on a PR.~ See #1549.
non_process
using role hierarchy in symfony security affected version s contao description with role hierarchy configured in symfony security config packages security yaml the security access simple role voter gets removed and therefore is not available anymore the service contao security token checker has a dependency on a non existent service security access simple role voter see also i m working on a pr see
0
8,310
11,471,557,632
IssuesEvent
2020-02-09 12:03:54
parcel-bundler/parcel
https://api.github.com/repos/parcel-bundler/parcel
closed
Parcel Less Mixin not working
:bug: Bug CSS Preprocessing Stale
<!--- Thanks for filing an issue 😄 ! Before you submit, please read the following: Search open/closed issues before submitting since someone might have asked the same thing before! --> # 🐛 bug report I have been using UIKit for a while, I've had it set up using Gulp, Webpack and now Parcel. For many reasons, Webpack is difficult to get set up once you want to Purge the results using PurgeCss. Gulp is fine but isn't great with ES6 import. I'm liking Parcel, but I can't get UIKit working with it for one simple reason. It will not respect this mixin: https://github.com/uikit/uikit/blob/e389ef3843f771ac36f468a62dd5be1bfbe2e583/src/less/components/mixin.less ## 🎛 Configuration (.babelrc, package.json, cli command) <!--- If describing a bug, tell us what your babel configuration looks like --> package.json ``` { "name": "project", "version": "1.0.0", "description": "This file is here to ensure Git adds the dir to the repo. You may delete this file.", "main": "index.js", "dependencies": { "uikit": "^3.1.6" }, "devDependencies": { "less": "^3.9.0", "parcel-bundler": "^1.12.3", "parcel-plugin-purgecss": "^2.1.2" }, "scripts": { "dev": "parcel watch ./assets/uikit.js --out-dir assets/dist --no-hmr --no-source-maps", "build": "parcel build ./assets/uikit.js --out-dir assets/dist --no-source-maps" }, "keywords": [], "author": "", "license": "ISC" } ``` ## 🤔 Expected Behavior Should colour and inline the SVG ## 😯 Current Behavior Just outputs the SVG url <!--- Tell us what happens instead of the expected behavior --> <!--- If you are seeing an error, please include the full error message and stack trace --> ## 💁 Possible Solution <!--- Not obligatory, but suggest a fix/reason for the bug --> ## 🔦 Context I'm unable to use UIKit forms https://github.com/uikit/uikit/blob/e389ef3843f771ac36f468a62dd5be1bfbe2e583/src/less/components/form.less#L414 as it's just outputting a URL relative to the Node_Modules directory. Which isn't relative to the root so the SVG breaks. 
But the mixin should inline it anyway. ## 🌍 Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> | Software | Version(s) | | ---------------- | ---------- | | Parcel | 1.12.3 | Node | Latest | npm/Yarn | Latest | Operating System | Windows 10 <!-- Love parcel? Please consider supporting our collective: 👉 https://opencollective.com/parcel/donate -->
1.0
Parcel Less Mixin not working - <!--- Thanks for filing an issue 😄 ! Before you submit, please read the following: Search open/closed issues before submitting since someone might have asked the same thing before! --> # 🐛 bug report I have been using UIKit for a while, I've had it set up using Gulp, Webpack and now Parcel. For many reasons, Webpack is difficult to get set up once you want to Purge the results using PurgeCss. Gulp is fine but isn't great with ES6 import. I'm liking Parcel, but I can't get UIKit working with it for one simple reason. It will not respect this mixin: https://github.com/uikit/uikit/blob/e389ef3843f771ac36f468a62dd5be1bfbe2e583/src/less/components/mixin.less ## 🎛 Configuration (.babelrc, package.json, cli command) <!--- If describing a bug, tell us what your babel configuration looks like --> package.json ``` { "name": "project", "version": "1.0.0", "description": "This file is here to ensure Git adds the dir to the repo. You may delete this file.", "main": "index.js", "dependencies": { "uikit": "^3.1.6" }, "devDependencies": { "less": "^3.9.0", "parcel-bundler": "^1.12.3", "parcel-plugin-purgecss": "^2.1.2" }, "scripts": { "dev": "parcel watch ./assets/uikit.js --out-dir assets/dist --no-hmr --no-source-maps", "build": "parcel build ./assets/uikit.js --out-dir assets/dist --no-source-maps" }, "keywords": [], "author": "", "license": "ISC" } ``` ## 🤔 Expected Behavior Should colour and inline the SVG ## 😯 Current Behavior Just outputs the SVG url <!--- Tell us what happens instead of the expected behavior --> <!--- If you are seeing an error, please include the full error message and stack trace --> ## 💁 Possible Solution <!--- Not obligatory, but suggest a fix/reason for the bug --> ## 🔦 Context I'm unable to use UIKit forms https://github.com/uikit/uikit/blob/e389ef3843f771ac36f468a62dd5be1bfbe2e583/src/less/components/form.less#L414 as it's just outputting a URL relative to the Node_Modules directory. 
Which isn't relative to the root so the SVG breaks. But the mixin should inline it anyway. ## 🌍 Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> | Software | Version(s) | | ---------------- | ---------- | | Parcel | 1.12.3 | Node | Latest | npm/Yarn | Latest | Operating System | Windows 10 <!-- Love parcel? Please consider supporting our collective: 👉 https://opencollective.com/parcel/donate -->
process
parcel less mixin not working thanks for filing an issue 😄 before you submit please read the following search open closed issues before submitting since someone might have asked the same thing before 🐛 bug report i have been using uikit for a while i ve had it set up using gulp webpack and now parcel for many reasons webpack is difficult to get set up once you want to purge the results using purgecss gulp is fine but isn t great with import i m liking parcel but i can t get uikit working with it for one simple reason it will not respect this mixin 🎛 configuration babelrc package json cli command package json name project version description this file is here to ensure git adds the dir to the repo you may delete this file main index js dependencies uikit devdependencies less parcel bundler parcel plugin purgecss scripts dev parcel watch assets uikit js out dir assets dist no hmr no source maps build parcel build assets uikit js out dir assets dist no source maps keywords author license isc 🤔 expected behavior should colour and inline the svg 😯 current behavior just outputs the svg url 💁 possible solution 🔦 context i m unable to use uikit forms as it s just outputting a url relative to the node modules directory which isn t relative to the root so the svg breaks but the mixin should inline it anyway 🌍 your environment software version s parcel node latest npm yarn latest operating system windows love parcel please consider supporting our collective 👉
1
14,994
18,674,987,894
IssuesEvent
2021-10-31 12:04:58
slynch8/10x
https://api.github.com/repos/slynch8/10x
opened
include paths not resolved if they are only relative to additional include paths
bug Priority 3 preprocessor
Had to disable this because it was too slow for all the files in UE that couldn't be found
1.0
include paths not resolved if they are only relative to additional include paths - Had to disable this because it was too slow for all the files in UE that couldn't be found
process
include paths not resolved if they are only relative to additional include paths had to disable this because it was too slow for all the files in ue that couldn t be found
1
758,913
26,573,781,787
IssuesEvent
2023-01-21 14:33:29
Andrushka130/WASD
https://api.github.com/repos/Andrushka130/WASD
closed
Character Manager Klasse hinzufügen
DEV Feature Priority 2 Funktionale A
Die Character Manager Klasse soll als Mittelmann tendieren und das Character object für die anderen Klassen weitergeben.
1.0
Character Manager Klasse hinzufügen - Die Character Manager Klasse soll als Mittelmann tendieren und das Character object für die anderen Klassen weitergeben.
non_process
character manager klasse hinzufügen die character manager klasse soll als mittelmann tendieren und das character object für die anderen klassen weitergeben
0
353,171
25,104,871,057
IssuesEvent
2022-11-08 15:57:32
Schrodinger-Hat/youtube-to-anchorfm
https://api.github.com/repos/Schrodinger-Hat/youtube-to-anchorfm
closed
A Better name for Environment variables section in README
documentation enhancement
I think the name `Environment variables` doesn't properly describe the section https://github.com/Schrodinger-Hat/youtube-to-anchorfm#environment-variables Something like `Conversion Options` may be better
1.0
A Better name for Environment variables section in README - I think the name `Environment variables` doesn't properly describe the section https://github.com/Schrodinger-Hat/youtube-to-anchorfm#environment-variables Something like `Conversion Options` may be better
non_process
a better name for environment variables section in readme i think the name environment variables doesn t properly describe the section something like conversion options may be better
0
10,302
13,153,229,503
IssuesEvent
2020-08-10 02:28:59
kubeflow/testing
https://api.github.com/repos/kubeflow/testing
closed
Organize the CI GCP projects into folders
area/engprod kind/feature kind/process lifecycle/stale
I'm opening this issue to track updates to the organization of our CI projects into folders. Currently we have folder `ci-cd` which has two projects * github-probots (runs labelbot frontend) * kubeflow-ci (runs our ci clusters) We also have project kubeflow-ci-deployment which is not currently in the ci-cd folder. I'm going to create a subfolder ci-deployments and move it into that project. The reason for creating a subfolder is this way we can great permissions at the folder level to kubeflow-ci project service accounts in order to be able to create infrastructure in these projects. This way we don't have to worry about our garbage collection logic accidentally deleting the resources we ened.
1.0
Organize the CI GCP projects into folders - I'm opening this issue to track updates to the organization of our CI projects into folders. Currently we have folder `ci-cd` which has two projects * github-probots (runs labelbot frontend) * kubeflow-ci (runs our ci clusters) We also have project kubeflow-ci-deployment which is not currently in the ci-cd folder. I'm going to create a subfolder ci-deployments and move it into that project. The reason for creating a subfolder is this way we can great permissions at the folder level to kubeflow-ci project service accounts in order to be able to create infrastructure in these projects. This way we don't have to worry about our garbage collection logic accidentally deleting the resources we ened.
process
organize the ci gcp projects into folders i m opening this issue to track updates to the organization of our ci projects into folders currently we have folder ci cd which has two projects github probots runs labelbot frontend kubeflow ci runs our ci clusters we also have project kubeflow ci deployment which is not currently in the ci cd folder i m going to create a subfolder ci deployments and move it into that project the reason for creating a subfolder is this way we can great permissions at the folder level to kubeflow ci project service accounts in order to be able to create infrastructure in these projects this way we don t have to worry about our garbage collection logic accidentally deleting the resources we ened
1
18,784
24,690,287,677
IssuesEvent
2022-10-19 08:02:43
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
BP refactoring: split cellular and organism-level processes: homeostatic process
cellular processes organism-level process
Hello, For the BP refactoring, we will split cellular and organism-level processes, as described in the [2022 bp refactoring plan - top level](https://docs.google.com/document/d/1k8yuWTMSkYVTnt2hRbrPVH8Ud3gDwC5IrGu4PqHkKds/edit). This ticket describes changes to the homeostatic process branch: First, GO:0042592 homeostatic process will be obsoleted, to keep the two main branches, 'GO:0019725 cellular homeostasis' and 'GO:0048871 multicellular organismal homeostasis' The following terms were directly under GO:0042592 homeostatic process and were moved: - [x] GO:0048872 homeostasis of number of cells -> move under 'GO:0048871 multicellular organismal homeostasis' - [x] GO:0060249 anatomical structure homeostasis -> move under 'GO:0048871 multicellular organismal homeostasis' - [x] GO:0046716 muscle cell cellular homeostasis and GO:0070050 neuron cellular homeostasis: remove parent: GO:0060249 anatomical structure homeostasis (to keep only GO:0019725 cellular hemeostasis) - [x] Move 'GO:0060274 maintenance of stationary phase' under GO:0019725 cellular homeostasis - [x] GO:0048875 chemical homeostasis within a tissue -> remove parent GO:0048878 chemical homeostasis, keep only GO:0001894 tissue homeostasis - [x] 'GO:0046719 regulation by virus of viral protein levels in host cell', removed parent GO:0048878 chemical homeostasis, keep GO:0050792 regulation of viral process
2.0
BP refactoring: split cellular and organism-level processes: homeostatic process - Hello, For the BP refactoring, we will split cellular and organism-level processes, as described in the [2022 bp refactoring plan - top level](https://docs.google.com/document/d/1k8yuWTMSkYVTnt2hRbrPVH8Ud3gDwC5IrGu4PqHkKds/edit). This ticket describes changes to the homeostatic process branch: First, GO:0042592 homeostatic process will be obsoleted, to keep the two main branches, 'GO:0019725 cellular homeostasis' and 'GO:0048871 multicellular organismal homeostasis' The following terms were directly under GO:0042592 homeostatic process and were moved: - [x] GO:0048872 homeostasis of number of cells -> move under 'GO:0048871 multicellular organismal homeostasis' - [x] GO:0060249 anatomical structure homeostasis -> move under 'GO:0048871 multicellular organismal homeostasis' - [x] GO:0046716 muscle cell cellular homeostasis and GO:0070050 neuron cellular homeostasis: remove parent: GO:0060249 anatomical structure homeostasis (to keep only GO:0019725 cellular hemeostasis) - [x] Move 'GO:0060274 maintenance of stationary phase' under GO:0019725 cellular homeostasis - [x] GO:0048875 chemical homeostasis within a tissue -> remove parent GO:0048878 chemical homeostasis, keep only GO:0001894 tissue homeostasis - [x] 'GO:0046719 regulation by virus of viral protein levels in host cell', removed parent GO:0048878 chemical homeostasis, keep GO:0050792 regulation of viral process
process
bp refactoring split cellular and organism level processes homeostatic process hello for the bp refactoring we will split cellular and organism level processes as described in the this ticket describes changes to the homeostatic process branch first go homeostatic process will be obsoleted to keep the two main branches go cellular homeostasis and go multicellular organismal homeostasis the following terms were directly under go homeostatic process and were moved go homeostasis of number of cells move under go multicellular organismal homeostasis go anatomical structure homeostasis move under go multicellular organismal homeostasis go muscle cell cellular homeostasis and go neuron cellular homeostasis remove parent go anatomical structure homeostasis to keep only go cellular hemeostasis move go maintenance of stationary phase under go cellular homeostasis go chemical homeostasis within a tissue remove parent go chemical homeostasis keep only go tissue homeostasis go regulation by virus of viral protein levels in host cell removed parent go chemical homeostasis keep go regulation of viral process
1
4,411
7,299,102,437
IssuesEvent
2018-02-26 19:02:51
vtex/formula-vtex
https://api.github.com/repos/vtex/formula-vtex
opened
Quick Setup
Office Processo
# Quick Setup ### Configuração de sistema operacional e instalação de programas automática ## Descrição da ideia - Utilização de um package manager (Chocolatey/Homebrew) para instalação não-supervisionada de programas frequentemente utilizados pelas pessoas do escritório - Armazenar os instaladores, atualizados automaticamente, em uma máquina local para acelerar o processo de download - Ao invés de ficar horas e horas baixando tudo e configurando seu Windows/Mac no seu primeiro dia, apenas pega um script no slack e roda. Dentro de minutos, sua máquina estará pronta. Também muito útil para pessoas que frenquentemente formatam os OS, ou que usam máquinas virtuais. - Potencialmente criar scripts diferentes dependendo da área da pessoa (RH, financeiro, dev, design, etc) # TEAM: *name pending* Procurando interessados!
1.0
Quick Setup - # Quick Setup ### Configuração de sistema operacional e instalação de programas automática ## Descrição da ideia - Utilização de um package manager (Chocolatey/Homebrew) para instalação não-supervisionada de programas frequentemente utilizados pelas pessoas do escritório - Armazenar os instaladores, atualizados automaticamente, em uma máquina local para acelerar o processo de download - Ao invés de ficar horas e horas baixando tudo e configurando seu Windows/Mac no seu primeiro dia, apenas pega um script no slack e roda. Dentro de minutos, sua máquina estará pronta. Também muito útil para pessoas que frenquentemente formatam os OS, ou que usam máquinas virtuais. - Potencialmente criar scripts diferentes dependendo da área da pessoa (RH, financeiro, dev, design, etc) # TEAM: *name pending* Procurando interessados!
process
quick setup quick setup configuração de sistema operacional e instalação de programas automática descrição da ideia utilização de um package manager chocolatey homebrew para instalação não supervisionada de programas frequentemente utilizados pelas pessoas do escritório armazenar os instaladores atualizados automaticamente em uma máquina local para acelerar o processo de download ao invés de ficar horas e horas baixando tudo e configurando seu windows mac no seu primeiro dia apenas pega um script no slack e roda dentro de minutos sua máquina estará pronta também muito útil para pessoas que frenquentemente formatam os os ou que usam máquinas virtuais potencialmente criar scripts diferentes dependendo da área da pessoa rh financeiro dev design etc team name pending procurando interessados
1
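The unattended-install idea in the record above can be sketched as a small helper that turns a package list into non-interactive package-manager commands. This is an illustrative sketch, not the script from the issue; the package names and the `choco -y` / `brew` invocations are assumptions.

```python
# Illustrative sketch of the "Quick Setup" idea: turn a package list into
# unattended package-manager commands. Package names are assumptions.
def install_commands(packages, manager="choco"):
    if manager == "choco":
        # Chocolatey's -y flag answers "yes" to every prompt (unattended)
        return [f"choco install -y {p}" for p in packages]
    if manager == "brew":
        # Homebrew installs are non-interactive by default
        return [f"brew install {p}" for p in packages]
    raise ValueError(f"unsupported package manager: {manager}")

print(install_commands(["git", "vscode", "slack"]))
```

Per-area scripts (HR, finance, dev, design) would then just be different package lists fed to the same helper.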
22,707
32,034,015,112
IssuesEvent
2023-09-22 14:10:47
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
hpcflow-new2 0.2.0a107 has 2 GuardDog issues
guarddog exec-base64 silent-process-execution
https://pypi.org/project/hpcflow-new2 https://inspector.pypi.io/project/hpcflow-new2 ```{ "dependency": "hpcflow-new2", "version": "0.2.0a107", "result": { "issues": 2, "errors": {}, "results": { "exec-base64": [ { "location": "hpcflow_new2-0.2.0a107/hpcflow/sdk/submission/jobscript.py:990", "code": " init_proc = subprocess.Popen(\n args=args,\n cwd=str(self.workflow.path),\n creationflags=subprocess.CREATE_NO_WINDOW,\n )", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" } ], "silent-process-execution": [ { "location": "hpcflow_new2-0.2.0a107/hpcflow/sdk/helper/helper.py:111", "code": " proc = subprocess.Popen(\n args=args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n **kwargs,\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmp74igf81j/hpcflow-new2" } }```
1.0
hpcflow-new2 0.2.0a107 has 2 GuardDog issues - https://pypi.org/project/hpcflow-new2 https://inspector.pypi.io/project/hpcflow-new2 ```{ "dependency": "hpcflow-new2", "version": "0.2.0a107", "result": { "issues": 2, "errors": {}, "results": { "exec-base64": [ { "location": "hpcflow_new2-0.2.0a107/hpcflow/sdk/submission/jobscript.py:990", "code": " init_proc = subprocess.Popen(\n args=args,\n cwd=str(self.workflow.path),\n creationflags=subprocess.CREATE_NO_WINDOW,\n )", "message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n" } ], "silent-process-execution": [ { "location": "hpcflow_new2-0.2.0a107/hpcflow/sdk/helper/helper.py:111", "code": " proc = subprocess.Popen(\n args=args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n **kwargs,\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmp74igf81j/hpcflow-new2" } }```
process
hpcflow has guarddog issues dependency hpcflow version result issues errors results exec location hpcflow hpcflow sdk submission jobscript py code init proc subprocess popen n args args n cwd str self workflow path n creationflags subprocess create no window n message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n silent process execution location hpcflow hpcflow sdk helper helper py code proc subprocess popen n args args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n kwargs n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp hpcflow
1
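The `silent-process-execution` finding above flags a `subprocess.Popen` call whose stdin, stdout and stderr are all redirected to `DEVNULL`. A toy check in that spirit can be written with Python's `ast` module; this is a hypothetical sketch, not GuardDog's real rule implementation (it needs Python 3.9+ for `ast.unparse`).

```python
import ast

def silences_output(source):
    """Toy check in the spirit of GuardDog's silent-process-execution rule:
    flag subprocess.Popen calls whose stdin, stdout and stderr are all
    redirected to DEVNULL. Not GuardDog's actual implementation."""
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "Popen"):
            # map keyword names to their unparsed source, e.g. "subprocess.DEVNULL"
            kw = {k.arg: ast.unparse(k.value) for k in node.keywords if k.arg}
            streams = {kw.get(s) for s in ("stdin", "stdout", "stderr")}
            if streams == {"subprocess.DEVNULL"}:
                return True
    return False

flagged = (
    "import subprocess\n"
    "proc = subprocess.Popen(args=['helper'], stdin=subprocess.DEVNULL,\n"
    "                        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)\n"
)
print(silences_output(flagged))  # True
```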
2,611
5,344,457,108
IssuesEvent
2017-02-17 14:35:09
globus/globus-sdk-python
https://api.github.com/repos/globus/globus-sdk-python
closed
Test on windows
portability/compatiblity
We have windows in the pypi classifiers, but it's untested, and the config file handling may behave strangely on windows because of the use of expanduser.
True
Test on windows - We have windows in the pypi classifiers, but it's untested, and the config file handling may behave strangely on windows because of the use of expanduser.
non_process
test on windows we have windows in the pypi classifiers but it s untested and the config file handling may behave strangely on windows because of the use of expanduser
0
542
3,002,565,556
IssuesEvent
2015-07-24 17:58:53
K0zka/kerub
https://api.github.com/repos/K0zka/kerub
closed
ispn storage configuration
component:data processing enhancement priority: normal
The following should be configurable * location of the storage root directory * number of owners for the distributed caches separately for the "dynamic", "static" and history data
1.0
ispn storage configuration - The following should be configurable * location of the storage root directory * number of owners for the distributed caches separately for the "dynamic", "static" and history data
process
ispn storage configuration the following should be configurable location of the storage root directory number of owners for the distributed caches separately for the dynamic static and history data
1
40,502
20,926,363,612
IssuesEvent
2022-03-24 23:38:56
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
TorchBench V2 nightly run detects a 7.8% regression on nvidia_deeprecommender model
module: performance triaged module: regression
### 🐛 Describe the bug On 20220223, the TorchBench test test_train[nvidia_deeprecommender-cuda-eager] regresses 7.89634% because of the commit https://github.com/pytorch/pytorch/commit/3aecce7015. The perf delta has been validated by PR: https://github.com/pytorch/pytorch/pull/73306. After reverting the commit, the nvidia_deeprecommender tests show speedup. We observe the latency difference below (lower the better): ``` Benchmark Run 6688487 Run 41bdfa78 Delta ---------------------------------------------- ---------------- ----------------- ------------ test_train[nvidia_deeprecommender-cpu-eager] 0.455588 0.460329 +1.030094% test_eval[nvidia_deeprecommender-cpu-eager] 0.0818041 0.0791861 -3.306099% test_train[nvidia_deeprecommender-cuda-eager] 0.131359 0.121776 -7.868537%* test_eval[nvidia_deeprecommender-cuda-eager] 0.0369564 0.0289373 -27.711980%* ``` Commit 6688487 contains the commit 3aecce7015, and commit 41bdfa78 reverts 3aecce7015. After reverting the commit, the latency of test `nvidia_deeprecommender` decreases (show there is speedup). The CPU test delta is below the noise threshold `7%`, but the CUDA tests show performance signals. ### Versions PyTorch 1.12.0.dev20220223+cu113 cc @VitalyFedyunin @ngimel
True
TorchBench V2 nightly run detects a 7.8% regression on nvidia_deeprecommender model - ### 🐛 Describe the bug On 20220223, the TorchBench test test_train[nvidia_deeprecommender-cuda-eager] regresses 7.89634% because of the commit https://github.com/pytorch/pytorch/commit/3aecce7015. The perf delta has been validated by PR: https://github.com/pytorch/pytorch/pull/73306. After reverting the commit, the nvidia_deeprecommender tests show speedup. We observe the latency difference below (lower the better): ``` Benchmark Run 6688487 Run 41bdfa78 Delta ---------------------------------------------- ---------------- ----------------- ------------ test_train[nvidia_deeprecommender-cpu-eager] 0.455588 0.460329 +1.030094% test_eval[nvidia_deeprecommender-cpu-eager] 0.0818041 0.0791861 -3.306099% test_train[nvidia_deeprecommender-cuda-eager] 0.131359 0.121776 -7.868537%* test_eval[nvidia_deeprecommender-cuda-eager] 0.0369564 0.0289373 -27.711980%* ``` Commit 6688487 contains the commit 3aecce7015, and commit 41bdfa78 reverts 3aecce7015. After reverting the commit, the latency of test `nvidia_deeprecommender` decreases (show there is speedup). The CPU test delta is below the noise threshold `7%`, but the CUDA tests show performance signals. ### Versions PyTorch 1.12.0.dev20220223+cu113 cc @VitalyFedyunin @ngimel
non_process
torchbench nightly run detects a regression on nvidia deeprecommender model 🐛 describe the bug on the torchbench test test train regresses because of the commit the perf delta has been validated by pr after reverting the commit the nvidia deeprecommender tests show speedup we observe the latency difference below lower the better benchmark run run delta test train test eval test train test eval commit contains the commit and commit reverts after reverting the commit the latency of test nvidia deeprecommender decreases show there is speedup the cpu test delta is below the noise threshold but the cuda tests show performance signals versions pytorch cc vitalyfedyunin ngimel
0
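The record above compares per-test latencies against a 7% noise threshold. A minimal sketch of that comparison follows; the percent-delta convention and threshold handling are read off the table, not TorchBench's actual detection code.

```python
# Sketch of the latency comparison behind the regression report.
def pct_delta(base, new):
    # positive => the new run is slower than the baseline
    return (new - base) / base * 100.0

def classify(base, new, noise_threshold=7.0):
    """Return 'regression', 'speedup', or 'noise' for a latency pair."""
    d = pct_delta(base, new)
    if d > noise_threshold:
        return "regression"
    if d < -noise_threshold:
        return "speedup"
    return "noise"

# e.g. the CPU train pair from the table stays within the noise threshold
print(classify(0.455588, 0.460329))  # noise
```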
252,920
19,083,888,521
IssuesEvent
2021-11-29 01:34:08
vlizarn/challenge-python-cash-machine
https://api.github.com/repos/vlizarn/challenge-python-cash-machine
opened
[Feature]: Update readme information.
documentation enhancement update Readme
## Overview - Assign | Assign | Description | Status | | :----: | :---- | :----: | | 1 | Replace 'permited' for the 'permitted' word. | | | 2 | Add comma in code line 37. | |
1.0
[Feature]: Update readme information. - ## Overview - Assign | Assign | Description | Status | | :----: | :---- | :----: | | 1 | Replace 'permited' for the 'permitted' word. | | | 2 | Add comma in code line 37. | |
non_process
update readme information overview assign assign description status replace permited for the permitted word add comma in code line
0
217,806
24,351,601,595
IssuesEvent
2022-10-03 01:00:39
billmcchesney1/hadoop
https://api.github.com/repos/billmcchesney1/hadoop
opened
CVE-2022-42004 (Medium) detected in jackson-databind-2.9.10.1.jar
security vulnerability
## CVE-2022-42004 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.10.1.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/target/lib/jackson-databind-2.9.10.1.jar,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common/target/lib/jackson-databind-2.9.10.1.jar,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/target/lib/jackson-databind-2.9.10.1.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.10.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p> <p>Found in base branch: <b>trunk</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization. <p>Publish Date: 2022-10-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42004>CVE-2022-42004</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-10-02</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.13.4</p> </p> </details> <p></p>
True
CVE-2022-42004 (Medium) detected in jackson-databind-2.9.10.1.jar - ## CVE-2022-42004 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.10.1.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/target/lib/jackson-databind-2.9.10.1.jar,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common/target/lib/jackson-databind-2.9.10.1.jar,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/target/lib/jackson-databind-2.9.10.1.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.10.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p> <p>Found in base branch: <b>trunk</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization. <p>Publish Date: 2022-10-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42004>CVE-2022-42004</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-10-02</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.13.4</p> </p> </details> <p></p>
non_process
cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library hadoop yarn project hadoop yarn hadoop yarn server hadoop yarn server timelineservice documentstore target lib jackson databind jar hadoop yarn project hadoop yarn hadoop yarn server hadoop yarn server timelineservice hbase hadoop yarn server timelineservice hbase common target lib jackson databind jar hadoop yarn project hadoop yarn hadoop yarn server hadoop yarn server timelineservice hbase hadoop yarn server timelineservice hbase client target lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch trunk vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in beandeserializer deserializefromarray to prevent use of deeply nested arrays an application is vulnerable only with certain customized choices for deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind
0
7,198
10,335,388,852
IssuesEvent
2019-09-03 10:27:37
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Band selector not working in processing batch mode dialog
Bug Feedback Processing
<!-- Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone. If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix Checklist before submitting - [x] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists - [x] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles). - [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue If the issue concerns a **third party plugin**, then it **cannot** be fixed by the QGIS team. Please raise your issue in the dedicated bug tracker for that specific plugin (as listed in the plugin's description). --> **Describe the bug** <!-- A clear and concise description of what the bug is. --> In processing, when working with raster as inputs, you often need to select the band to use (even if the raster only has one band). In single mode this works fine, and if only one band is available that one is selected as soon as you select the input raster. On the other hand, in batch mode, the band selector widget does not work (It's always empty), does not use band 1 by default and does not allow you to type the band number manually. This means that it's impossible to use the tools in batch mode. **How to Reproduce** <!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome --> Zonal statistics algorithm is a good example of this. 1. Load a raster layer and a vector layer 2. Open the Zonal statistics tool in the processing Raster analysis group 3. You should be able to select the raster and the raster band to use ![image](https://user-images.githubusercontent.com/3607161/61050709-8d969000-a3df-11e9-92fa-55db991d39d6.png) 4. Click the run as batch. Note the band number is not possible to select or even set. ![image](https://user-images.githubusercontent.com/3607161/61050755-9e470600-a3df-11e9-8994-ca54b7dff6c5.png) **QGIS and OS versions** <!-- In the QGIS menu help/about, click in the dialog, Ctrl+A and then Ctrl+C. Finally paste here --> QGIS 3.4.9 LTR in kubuntu 18.04 **Additional context** <!-- Add any other context about the problem here. -->
1.0
Band selector not working in processing batch mode dialog - <!-- Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone. If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix Checklist before submitting - [x] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists - [x] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles). - [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue If the issue concerns a **third party plugin**, then it **cannot** be fixed by the QGIS team. Please raise your issue in the dedicated bug tracker for that specific plugin (as listed in the plugin's description). --> **Describe the bug** <!-- A clear and concise description of what the bug is. --> In processing, when working with raster as inputs, you often need to select the band to use (even if the raster only has one band). In single mode this works fine, and if only one band is available that one is selected as soon as you select the input raster. On the other hand, in batch mode, the band selector widget does not work (It's always empty), does not use band 1 by default and does not allow you to type the band number manually. This means that it's impossible to use the tools in batch mode. **How to Reproduce** <!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome --> Zonal statistics algorithm is a good example of this. 1. Load a raster layer and a vector layer 2. Open the Zonal statistics tool in the processing Raster analysis group 3. You should be able to select the raster and the raster band to use ![image](https://user-images.githubusercontent.com/3607161/61050709-8d969000-a3df-11e9-92fa-55db991d39d6.png) 4. Click the run as batch. Note the band number is not possible to select or even set. ![image](https://user-images.githubusercontent.com/3607161/61050755-9e470600-a3df-11e9-8994-ca54b7dff6c5.png) **QGIS and OS versions** <!-- In the QGIS menu help/about, click in the dialog, Ctrl+A and then Ctrl+C. Finally paste here --> QGIS 3.4.9 LTR in kubuntu 18.04 **Additional context** <!-- Add any other context about the problem here. -->
process
band selector not working in processing batch mode dialog bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue if the issue concerns a third party plugin then it cannot be fixed by the qgis team please raise your issue in the dedicated bug tracker for that specific plugin as listed in the plugin s description describe the bug in processing when working with raster as inputs you often need to select the band to use even if the raster only has one band in single mode this works fine and if only one band is available that one is selected as soon as you select the input raster on the other hand in batch mode the band selector widget does not work it s always empty does not use band by default and does not allow you to type the band number manually this means that it s impossible to use the tools in batch mode how to reproduce zonal statistics algorithm is a good example of this load a raster layer and a vector layer open the zonal statistics tool in the processing raster analysis group you should be able to select the raster and the raster band to use click the run as batch not the band number is not possible to select or even set qgis and os versions qgis ltr in kubuntu additional context
1
644,446
20,977,948,814
IssuesEvent
2022-03-28 16:54:40
GoogleContainerTools/skaffold
https://api.github.com/repos/GoogleContainerTools/skaffold
opened
Skaffold fails to log and sync
kind/bug priority/p1
@veermuchandi I added some instrumentation to Skaffold to log when a pod-watching event comes in and is rejected by the podSelector. We further modified this https://github.com/briandealwis/skaffold/commit/62939cee7e7a6318a0552387082cb4f1322ff3ec#diff-dc46afb61b0c02fbc96e826c64ba48587c52475011740d97e8033568d9381a14R141 ```diff --- pkg/skaffold/kubernetes/watcher.go +++ pkg/skaffold/kubernetes/watcher.go @@ -133,8 +135,10 @@ func (w *podWatcher) Start(ctx context.Context, kubeContext string, namespaces [ if !ok { continue } + l.Tracef("podWatcher: pod/%s: event=%s", pod.Name, evt) if !w.podSelector.Select(pod) { + l.Tracef("podWatcher: skipping event for pod/%s: not matched by selector: %+v", pod.Name, w.podSelector) continue } ``` and we see a puzzling result where the selector's `names` is empty and so isn't matching any image. ``` TRAC[0009] podWatcher: skipping event for pod/mynodejsapp-deployment-6dfc8c95b6-hkzdc: not matched by selector: &{RWMutex:{w:{state:0 sema:0} writerSem:0 readerSem:0 readerCount:0 readerWait:0} names:map[]} subtask=-1 task=DevLoop ``` The podSelector here is a `kubernetes.ImageList`: https://github.com/GoogleContainerTools/skaffold/blob/1471f8af56a7977ce2621957b96e66598016bc1b/pkg/skaffold/kubernetes/image_list.go#L30-L34 These image `names` are populated from a callback to `AddTagsToPodSelector()` https://github.com/GoogleContainerTools/skaffold/blob/1471f8af56a7977ce2621957b96e66598016bc1b/pkg/skaffold/deploy/util/util.go#L61-L73 which is called by our deployers, for example: https://github.com/GoogleContainerTools/skaffold/blob/1471f8af56a7977ce2621957b96e66598016bc1b/pkg/skaffold/deploy/kubectl/kubectl.go#L161-L164 There must be some kind of race condition happening.
1.0
Skaffold fails to log and sync - @veermuchandi I added some instrumentation to Skaffold to log when a pod-watching event comes in and is rejected by the podSelector. We further modified this https://github.com/briandealwis/skaffold/commit/62939cee7e7a6318a0552387082cb4f1322ff3ec#diff-dc46afb61b0c02fbc96e826c64ba48587c52475011740d97e8033568d9381a14R141 ```diff --- pkg/skaffold/kubernetes/watcher.go +++ pkg/skaffold/kubernetes/watcher.go @@ -133,8 +135,10 @@ func (w *podWatcher) Start(ctx context.Context, kubeContext string, namespaces [ if !ok { continue } + l.Tracef("podWatcher: pod/%s: event=%s", pod.Name, evt) if !w.podSelector.Select(pod) { + l.Tracef("podWatcher: skipping event for pod/%s: not matched by selector: %+v", pod.Name, w.podSelector) continue } ``` and we see a puzzling result where the selector's `names` is empty and so isn't matching any image. ``` TRAC[0009] podWatcher: skipping event for pod/mynodejsapp-deployment-6dfc8c95b6-hkzdc: not matched by selector: &{RWMutex:{w:{state:0 sema:0} writerSem:0 readerSem:0 readerCount:0 readerWait:0} names:map[]} subtask=-1 task=DevLoop ``` The podSelector here is a `kubernetes.ImageList`: https://github.com/GoogleContainerTools/skaffold/blob/1471f8af56a7977ce2621957b96e66598016bc1b/pkg/skaffold/kubernetes/image_list.go#L30-L34 These image `names` are populated from a callback to `AddTagsToPodSelector()` https://github.com/GoogleContainerTools/skaffold/blob/1471f8af56a7977ce2621957b96e66598016bc1b/pkg/skaffold/deploy/util/util.go#L61-L73 which is called by our deployers, for example: https://github.com/GoogleContainerTools/skaffold/blob/1471f8af56a7977ce2621957b96e66598016bc1b/pkg/skaffold/deploy/kubectl/kubectl.go#L161-L164 There must be some kind of race condition happening.
non_process
skaffold fails to log and sync veermuchandi i added some instrumentation to skaffold to log when a pod watching event comes in and is rejected by the podselector we further modified this diff pkg skaffold kubernetes watcher go pkg skaffold kubernetes watcher go func w podwatcher start ctx context context kubecontext string namespaces if ok continue l tracef podwatcher pod s event s pod name evt if w podselector select pod l tracef podwatcher skipping event for pod s not matched by selector v pod name w podselector continue and we see a puzzling result where the selector s names is empty and so isn t matching any image trac podwatcher skipping event for pod mynodejsapp deployment hkzdc not matched by selector rwmutex w state sema writersem readersem readercount readerwait names map subtask task devloop the podselector here is a kubernetes imagelist these image names are populated from a callback to addtagstopodselector which is called by our deployers for example there must be some kind of race condition happening
0
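The suspected race in the record above — pod events evaluated against a selector whose `names` set has not yet been populated — can be mirrored in a few lines. This is a hypothetical Python sketch of the behaviour, not Skaffold's Go code.

```python
import threading

class ImageList:
    """Minimal mirror of the pod selector in the issue: a set of image
    names guarded by a lock. The race described above is that events can
    be evaluated before add() has populated the set, so matching pods
    are silently dropped."""
    def __init__(self):
        self._lock = threading.Lock()
        self._names = set()

    def add(self, name):
        with self._lock:
            self._names.add(name)

    def select(self, image):
        with self._lock:
            return image in self._names

selector = ImageList()
print(selector.select("mynodejsapp"))  # False: event before add() is dropped
selector.add("mynodejsapp")
print(selector.select("mynodejsapp"))  # True
```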
400,791
11,780,732,496
IssuesEvent
2020-03-16 20:44:01
silverbackis/ApiComponentBundle
https://api.github.com/repos/silverbackis/ApiComponentBundle
opened
Components can specify whether they must be explicitly permitted
Low Priority v2
You may have a component which can only be added to specific component groups. If there is a boolean on components whereby if the value is `true` the component can only be added to component groups that specifically allow it, then the user-interface becomes cleaner by not listing these components to be added to every component group, and the restriction will mean the component must be used properly.
1.0
Components can specify whether they must be explicitly permitted - You may have a component which can only be added to specific component groups. If there is a boolean on components whereby if the value is `true` the component can only be added to component groups that specifically allow it, then the user-interface becomes cleaner by not listing these components to be added to every component group, and the restriction will mean the component must be used properly.
non_process
components can specify whether they must be explicitly permitted you may have a component which can only be added to specific component groups if there is a boolean on components whereby if the value is true the component can only be added to component groups that specifically allow it then the user interface becomes cleaner by not listing these components to be added to every component group and the restriction will mean the component must be used properly
0
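The proposed boolean in the record above can be modelled as a simple filter: a component flagged as "must be explicitly permitted" is only offered to groups that allow it. The field names here are illustrative assumptions, not the bundle's real API.

```python
# Sketch of the proposed rule: components flagged must_be_permitted are
# only listed for component groups that explicitly allow them.
def addable_components(components, group_allowed):
    return [
        c["name"]
        for c in components
        if not c["must_be_permitted"] or c["name"] in group_allowed
    ]

components = [
    {"name": "hero", "must_be_permitted": False},
    {"name": "special_form", "must_be_permitted": True},
]
print(addable_components(components, set()))             # ['hero']
print(addable_components(components, {"special_form"}))  # ['hero', 'special_form']
```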
16,085
20,254,340,796
IssuesEvent
2022-02-14 21:18:24
scikit-learn/scikit-learn
https://api.github.com/repos/scikit-learn/scikit-learn
closed
Multi-target GPR sample_y fails when normalize_y=True
Bug module:gaussian_process
### Describe the bug Related to #22174; supposed to have been fixed in [#20761](https://github.com/scikit-learn/scikit-learn/pull/20761)? See #22199 When using a GPR model for multi-target data, if we set normalize_y=True then sample_y fails. ### Steps/Code to Reproduce ``` import numpy as np import sklearn from sklearn.gaussian_process import GaussianProcessRegressor as GPR print(sklearn.__version__) X_train = np.random.rand(7,3) X_test = np.random.rand(4,3) # ---- Working code ---- # # n_targets = 1 Y_train = np.random.randn(7,1) model = GPR(normalize_y=True) model.fit(X_train, Y_train) print(model.sample_y(X_test, n_samples=1).shape) print(model.sample_y(X_test, n_samples=15).shape) # n_targets = 2 Y_train = np.random.randn(7,2) model = GPR(normalize_y=False) model.fit(X_train, Y_train) print(model.sample_y(X_test, n_samples=1).shape) print(model.sample_y(X_test, n_samples=15).shape) # ---- Broken code ---- # # n_targets = 2 Y_train = np.random.randn(7,2) model = GPR(normalize_y=True) model.fit(X_train, Y_train) print(model.sample_y(X_test, n_samples=1).shape) print(model.sample_y(X_test, n_samples=15).shape) ``` ### Expected Results GPR.sample_y should result in an array of shape (n_samples_X, n_targets, n_samples_y), so the above code should produce ``` (4, 1, 1) (4, 1, 15) (4, 2, 1) (4, 2, 15) (4, 2, 1) (4, 2, 15) ``` ### Actual Results Instead we get an error relating to the shape of the predicted covariance: ``` File "sklearn/gaussian_process/_gpr.py", line 477, in sample_y y_samples = [ File "sklearn/gaussian_process/_gpr.py", line 478, in <listcomp> rng.multivariate_normal(y_mean[:, i], y_cov, n_samples).T[:, np.newaxis] File "mtrand.pyx", line 4069, in numpy.random.mtrand.RandomState.multivariate_normal ValueError: cov must be 2 dimensional and square ``` ### Versions System: python: 3.9.5 | packaged by conda-forge | (default, Jun 19 2021, 00:27:35) [Clang 11.1.0 ] executable: /Users/tnakam10/opt/anaconda3/envs/aerofusion/bin/python machine: macOS-11.6.1-x86_64-i386-64bit Python dependencies: pip: 21.3.1 setuptools: 60.5.0 sklearn: 1.0.2 numpy: 1.19.5 scipy: 1.7.3 Cython: None pandas: 1.3.5 matplotlib: 3.5.1 joblib: 1.1.0 threadpoolctl: 3.0.0 Built with OpenMP: True
1.0
Multi-target GPR sample_y fails when normalize_y=True - ### Describe the bug Related to #22174; supposed to have been fixed in [#20761](https://github.com/scikit-learn/scikit-learn/pull/20761)? See #22199 When using a GPR model for multi-target data, if we set normalize_y=False then sample_y fails. ### Steps/Code to Reproduce ``` import numpy as np import sklearn from sklearn.gaussian_process import GaussianProcessRegressor as GPR print(sklearn.__version__) X_train = np.random.rand(7,3) X_test = np.random.rand(4,3) # ---- Working code ---- # # n_targets = 1 Y_train = np.random.randn(7,1) model = GPR(normalize_y=True) model.fit(X_train, Y_train) print(model.sample_y(X_test, n_samples=1).shape) print(model.sample_y(X_test, n_samples=15).shape) # n_targets = 2 Y_train = np.random.randn(7,2) model = GPR(normalize_y=False) model.fit(X_train, Y_train) print(model.sample_y(X_test, n_samples=1).shape) print(model.sample_y(X_test, n_samples=15).shape) # ---- Broken code ---- # # n_targets = 2 Y_train = np.random.randn(7,2) model = GPR(normalize_y=True) model.fit(X_train, Y_train) print(model.sample_y(X_test, n_samples=1).shape) print(model.sample_y(X_test, n_samples=15).shape) ``` ### Expected Results GPR.sample_y should result in an array of shape (n_samples_X, n_targets, n_samples_y), so the above code should produce ``` (4, 1, 1) (4, 1, 15) (4, 2, 1) (4, 2, 15) (4, 2, 1) (4, 2, 15) ``` ### Actual Results Instead we get an error relating to the shape of the predicted covariance: ``` File "sklearn/gaussian_process/_gpr.py", line 477, in sample_y y_samples = [ File "sklearn/gaussian_process/_gpr.py", line 478, in <listcomp> rng.multivariate_normal(y_mean[:, i], y_cov, n_samples).T[:, np.newaxis] File "mtrand.pyx", line 4069, in numpy.random.mtrand.RandomState.multivariate_normal ValueError: cov must be 2 dimensional and square ``` ### Versions System: python: 3.9.5 | packaged by conda-forge | (default, Jun 19 2021, 00:27:35) [Clang 11.1.0 ] executable: 
/Users/tnakam10/opt/anaconda3/envs/aerofusion/bin/python machine: macOS-11.6.1-x86_64-i386-64bit Python dependencies: pip: 21.3.1 setuptools: 60.5.0 sklearn: 1.0.2 numpy: 1.19.5 scipy: 1.7.3 Cython: None pandas: 1.3.5 matplotlib: 3.5.1 joblib: 1.1.0 threadpoolctl: 3.0.0 Built with OpenMP: True
process
multi target gpr sample y fails when normalize y true describe the bug related to supposed to have been fixed in see when using a gpr model for multi target data if we set normalize y false then sample y fails steps code to reproduce import numpy as np import sklearn from sklearn gaussian process import gaussianprocessregressor as gpr print sklearn version x train np random rand x test np random rand working code n targets y train np random randn model gpr normalize y true model fit x train y train print model sample y x test n samples shape print model sample y x test n samples shape n targets y train np random randn model gpr normalize y false model fit x train y train print model sample y x test n samples shape print model sample y x test n samples shape broken code n targets y train np random randn model gpr normalize y true model fit x train y train print model sample y x test n samples shape print model sample y x test n samples shape expected results gpr sample y should result in an array of shape n samples x n targets n samples y so the above code should produce actual results instead we get an error relating to the shape of the predicted covariance file sklearn gaussian process gpr py line in sample y y samples file sklearn gaussian process gpr py line in rng multivariate normal y mean y cov n samples t file mtrand pyx line in numpy random mtrand randomstate multivariate normal valueerror cov must be dimensional and square versions system python packaged by conda forge default jun executable users opt envs aerofusion bin python machine macos python dependencies pip setuptools sklearn numpy scipy cython none pandas matplotlib joblib threadpoolctl built with openmp true
1
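The traceback in the record above shows `rng.multivariate_normal` being handed a covariance that is no longer 2-D once multi-target output is involved. A minimal numpy-only sketch of the per-target sampling that `sample_y` is expected to perform — the helper name `sample_y_multitarget` and the `(n_points, n_points, n_targets)` covariance layout are assumptions for illustration, not scikit-learn's internals:

```python
import numpy as np

def sample_y_multitarget(y_mean, y_cov, n_samples, rng=None):
    """Draw posterior samples one target at a time.

    y_mean: (n_points, n_targets) predicted means
    y_cov:  (n_points, n_points, n_targets) one covariance per target
    returns: (n_points, n_targets, n_samples)
    """
    rng = np.random.default_rng(rng)
    n_points, n_targets = y_mean.shape
    out = np.empty((n_points, n_targets, n_samples))
    for i in range(n_targets):
        # Slice out the 2-D covariance for target i before sampling,
        # so multivariate_normal always sees a square matrix.
        out[:, i, :] = rng.multivariate_normal(
            y_mean[:, i], y_cov[:, :, i], n_samples
        ).T
    return out
```

Slicing `y_cov[:, :, i]` keeps each `multivariate_normal` call "2 dimensional and square", which is exactly the invariant the `ValueError` above complains about.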
178,282
14,666,448,174
IssuesEvent
2020-12-29 16:19:33
themadmrj/chat_ui_kit
https://api.github.com/repos/themadmrj/chat_ui_kit
closed
[feature request] Would you add an example folder for this wonderful lib?
documentation
So that newcomers can try it more easily
1.0
[feature request] Would u add a example folder for this wonderful lib? - So that new comers can try it more easily
non_process
would u add a example folder for this wonderful lib so that new comers can try it more easily
0
11,097
13,939,278,909
IssuesEvent
2020-10-22 16:16:36
Arch666Angel/mods
https://api.github.com/repos/Arch666Angel/mods
closed
liquid rubber shouldn't be green
Angels Bio Processing Impact: Bug
**Describe the bug** The fluid in the pipe shows a green fluid, instead of purple, as the icon
1.0
liquid rubber shouldn't be green - **Describe the bug** The fluid in the pipe shows a green fluid, instead of purple, as the icon
process
liquid rubber shouldn t be green describe the bug the fluid in the pipe shows a green fluid instead of purple as the icon
1
11,035
13,850,549,572
IssuesEvent
2020-10-15 01:32:06
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
BigQuery cannot show table where `integer` column is set as "Entity Key" or "Foreign Key"
Administration/Data Model Database/BigQuery Priority:P1 Querying/Processor Type:Bug
**Describe the bug** When a BigQuery table has a column, which is `INTEGER` (probably related types as well), and the Field Type is set to "Entity Key" or "Foreign Key", then it's not possible to browse the table and will either result in "No results" or sometimes an error. **To Reproduce** 1. Create BigQuery table, where one column is `INTEGER` 2. Admin > Data Model > (BigQuery) > (table) > (integer column) - set Field Type to "Entity Key" 3. Browse Data > (BigQuery) > (table) - it will either show "No results" or sometimes errors with `class clojure.lang.LazySeq cannot be cast to class clojure.lang.Associative (clojure.lang.LazySeq and clojure.lang.Associative are in unnamed module of loader 'app')` (full stacktrace below) 4. Setting Field Type to "No special type" will now show content of the table <details><summary>Full stacktrace</summary> ``` [2138a10e-70ef-4e9b-8776-47b741f5eaa1] 2020-10-14T20:59:10+02:00 ERROR metabase.query-processor.middleware.catch-exceptions Error processing query: null {:database_id 43, :started_at #t "2020-10-14T18:59:07.539881Z[GMT]", :via [{:status :failed, :class clojure.lang.ExceptionInfo, :error "Error reducing result rows", :stacktrace ["--> query_processor.context.default$default_reducef$fn__38011.invoke(default.clj:61)" "query_processor.context.default$default_reducef.invokeStatic(default.clj:58)" "query_processor.context.default$default_reducef.invoke(default.clj:49)" "query_processor.context$reducef.invokeStatic(context.clj:69)" "query_processor.context$reducef.invoke(context.clj:62)" "query_processor.context.default$default_runf$respond_STAR___38015.invoke(default.clj:70)" "driver.bigquery$post_process_native$fn__1424.invoke(bigquery.clj:201)" "driver.bigquery$do_with_finished_response.invokeStatic(bigquery.clj:156)" "driver.bigquery$do_with_finished_response.invoke(bigquery.clj:147)" "driver.bigquery$post_process_native.invokeStatic(bigquery.clj:184)" "driver.bigquery$post_process_native.invoke(bigquery.clj:178)" 
"driver.bigquery$process_native_STAR_$thunk__1518.invoke(bigquery.clj:231)" "driver.bigquery$process_native_STAR_.invokeStatic(bigquery.clj:233)" "driver.bigquery$process_native_STAR_.invoke(bigquery.clj:226)" "driver.bigquery$eval1522$fn__1524.invoke(bigquery.clj:250)" "query_processor.context$executef.invokeStatic(context.clj:59)" "query_processor.context$executef.invoke(context.clj:48)" "query_processor.context.default$default_runf.invokeStatic(default.clj:69)" "query_processor.context.default$default_runf.invoke(default.clj:67)" "query_processor.context$runf.invokeStatic(context.clj:45)" "query_processor.context$runf.invoke(context.clj:39)" "query_processor.reducible$pivot.invokeStatic(reducible.clj:34)" "query_processor.reducible$pivot.invoke(reducible.clj:31)" "query_processor.middleware.mbql_to_native$mbql__GT_native$fn__45716.invoke(mbql_to_native.clj:26)" "query_processor.middleware.check_features$check_features$fn__44992.invoke(check_features.clj:42)" "query_processor.middleware.optimize_datetime_filters$optimize_datetime_filters$fn__45881.invoke(optimize_datetime_filters.clj:133)" "query_processor.middleware.auto_parse_filter_values$auto_parse_filter_values$fn__43798.invoke(auto_parse_filter_values.clj:44)" "query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__47410.invoke(wrap_value_literals.clj:142)" "query_processor.middleware.annotate$add_column_info$fn__43560.invoke(annotate.clj:574)" "query_processor.middleware.permissions$check_query_permissions$fn__44867.invoke(permissions.clj:64)" "query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__46399.invoke(pre_alias_aggregations.clj:40)" "query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__45065.invoke(cumulative_aggregations.clj:61)" "query_processor.middleware.resolve_joins$resolve_joins$fn__46931.invoke(resolve_joins.clj:183)" 
"query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__39285.invoke(add_implicit_joins.clj:245)" "query_processor.middleware.large_int_id$convert_id_to_string$fn__45677.invoke(large_int_id.clj:44)" "query_processor.middleware.limit$limit$fn__45702.invoke(limit.clj:38)" "query_processor.middleware.format_rows$format_rows$fn__45657.invoke(format_rows.clj:81)" "query_processor.middleware.desugar$desugar$fn__45131.invoke(desugar.clj:22)" "query_processor.middleware.binning$update_binning_strategy$fn__44157.invoke(binning.clj:229)" "query_processor.middleware.resolve_fields$resolve_fields$fn__44673.invoke(resolve_fields.clj:24)" "query_processor.middleware.add_dimension_projections$add_remapping$fn__38834.invoke(add_dimension_projections.clj:318)" "query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__39041.invoke(add_implicit_clauses.clj:141)" "query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__39434.invoke(add_source_metadata.clj:105)" "query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__46596.invoke(reconcile_breakout_and_order_by_bucketing.clj:98)" "query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__43745.invoke(auto_bucket_datetimes.clj:125)" "query_processor.middleware.resolve_source_table$resolve_source_tables$fn__44720.invoke(resolve_source_table.clj:46)" "query_processor.middleware.parameters$substitute_parameters$fn__46381.invoke(parameters.clj:114)" "query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__44772.invoke(resolve_referenced.clj:80)" "query_processor.middleware.expand_macros$expand_macros$fn__45387.invoke(expand_macros.clj:158)" "query_processor.middleware.add_timezone_info$add_timezone_info$fn__39465.invoke(add_timezone_info.clj:15)" 
"query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__47293.invoke(splice_params_in_response.clj:32)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46607$fn__46611.invoke(resolve_database_and_driver.clj:33)" "driver$do_with_driver.invokeStatic(driver.clj:61)" "driver$do_with_driver.invoke(driver.clj:57)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46607.invoke(resolve_database_and_driver.clj:27)" "query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__45605.invoke(fetch_source_query.clj:267)" "query_processor.middleware.store$initialize_store$fn__47302$fn__47303.invoke(store.clj:11)" "query_processor.store$do_with_store.invokeStatic(store.clj:46)" "query_processor.store$do_with_store.invoke(store.clj:40)" "query_processor.middleware.store$initialize_store$fn__47302.invoke(store.clj:10)" "query_processor.middleware.cache$maybe_return_cached_results$fn__44649.invoke(cache.clj:209)" "query_processor.middleware.validate$validate_query$fn__47311.invoke(validate.clj:10)" "query_processor.middleware.normalize_query$normalize$fn__45729.invoke(normalize_query.clj:22)" "query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__39303.invoke(add_rows_truncated.clj:36)" "query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__47278.invoke(results_metadata.clj:147)" "query_processor.middleware.constraints$add_default_userland_constraints$fn__45008.invoke(constraints.clj:42)" "query_processor.middleware.process_userland_query$process_userland_query$fn__46470.invoke(process_userland_query.clj:136)" "query_processor.middleware.catch_exceptions$catch_exceptions$fn__44951.invoke(catch_exceptions.clj:174)" "query_processor.reducible$async_qp$qp_STAR___38097$thunk__38098.invoke(reducible.clj:101)" "query_processor.reducible$async_qp$qp_STAR___38097.invoke(reducible.clj:107)" 
"query_processor.reducible$sync_qp$qp_STAR___38106$fn__38109.invoke(reducible.clj:133)" "query_processor.reducible$sync_qp$qp_STAR___38106.invoke(reducible.clj:132)" "query_processor$process_userland_query.invokeStatic(query_processor.clj:217)" "query_processor$process_userland_query.doInvoke(query_processor.clj:213)" "query_processor$fn__47454$process_query_and_save_execution_BANG___47463$fn__47466.invoke(query_processor.clj:229)" "query_processor$fn__47454$process_query_and_save_execution_BANG___47463.invoke(query_processor.clj:221)" "query_processor$fn__47498$process_query_and_save_with_max_results_constraints_BANG___47507$fn__47510.invoke(query_processor.clj:241)" "query_processor$fn__47498$process_query_and_save_with_max_results_constraints_BANG___47507.invoke(query_processor.clj:234)" "api.dataset$fn__50789$fn__50792.invoke(dataset.clj:55)" "query_processor.streaming$streaming_response_STAR_$fn__35515$fn__35516.invoke(streaming.clj:73)" "query_processor.streaming$streaming_response_STAR_$fn__35515.invoke(streaming.clj:72)" "async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:66)" "async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:64)" "async.streaming_response$do_f_async$fn__23301.invoke(streaming_response.clj:85)"], :error_type :qp, :ex-data {:type :qp}}], :error_type :qp, :json_query {:database 43, :query {:source-table 8102}, :type "query", :parameters [], :middleware {:js-int-to-string? true, :add-default-userland-constraints? 
true}}, :native {:query "SELECT `dacort.hacker_news_comments`.`id` AS `id`, `dacort.hacker_news_comments`.`by` AS `by`, `dacort.hacker_news_comments`.`author` AS `author`, `dacort.hacker_news_comments`.`time` AS `time`, `dacort.hacker_news_comments`.`time_ts` AS `time_ts`, `dacort.hacker_news_comments`.`text` AS `text`, `dacort.hacker_news_comments`.`parent` AS `parent`, `dacort.hacker_news_comments`.`deleted` AS `deleted`, `dacort.hacker_news_comments`.`dead` AS `dead`, `dacort.hacker_news_comments`.`ranking` AS `ranking` FROM `dacort.hacker_news_comments` LIMIT 2000", :params nil, :table-name "hacker_news_comments", :mbql? true}, :status :failed, :class java.lang.ClassCastException, :stacktrace ["clojure.lang.RT.assoc(RT.java:827)" "clojure.core$assoc__5416.invokeStatic(core.clj:191)" "clojure.core$update_in$up__6853.invoke(core.clj:6185)" "clojure.core$update_in.invokeStatic(core.clj:6186)" "clojure.core$update_in.doInvoke(core.clj:6172)" "clojure.lang.RestFn.invoke(RestFn.java:445)" "--> query_processor.middleware.large_int_id$result_int__GT_string$fn__45670$fn__45671.invoke(large_int_id.clj:12)" "query_processor.middleware.large_int_id$result_int__GT_string$fn__45670.invoke(large_int_id.clj:12)" "query_processor.context.default$default_reducef$fn__38011.invoke(default.clj:59)" "query_processor.context.default$default_reducef.invokeStatic(default.clj:58)" "query_processor.context.default$default_reducef.invoke(default.clj:49)" "query_processor.context$reducef.invokeStatic(context.clj:69)" "query_processor.context$reducef.invoke(context.clj:62)" "query_processor.context.default$default_runf$respond_STAR___38015.invoke(default.clj:70)" "driver.bigquery$post_process_native$fn__1424.invoke(bigquery.clj:201)" "driver.bigquery$do_with_finished_response.invokeStatic(bigquery.clj:156)" "driver.bigquery$do_with_finished_response.invoke(bigquery.clj:147)" "driver.bigquery$post_process_native.invokeStatic(bigquery.clj:184)" 
"driver.bigquery$post_process_native.invoke(bigquery.clj:178)" "driver.bigquery$process_native_STAR_$thunk__1518.invoke(bigquery.clj:231)" "driver.bigquery$process_native_STAR_.invokeStatic(bigquery.clj:233)" "driver.bigquery$process_native_STAR_.invoke(bigquery.clj:226)" "driver.bigquery$eval1522$fn__1524.invoke(bigquery.clj:250)" "query_processor.context$executef.invokeStatic(context.clj:59)" "query_processor.context$executef.invoke(context.clj:48)" "query_processor.context.default$default_runf.invokeStatic(default.clj:69)" "query_processor.context.default$default_runf.invoke(default.clj:67)" "query_processor.context$runf.invokeStatic(context.clj:45)" "query_processor.context$runf.invoke(context.clj:39)" "query_processor.reducible$pivot.invokeStatic(reducible.clj:34)" "query_processor.reducible$pivot.invoke(reducible.clj:31)" "query_processor.middleware.mbql_to_native$mbql__GT_native$fn__45716.invoke(mbql_to_native.clj:26)" "query_processor.middleware.check_features$check_features$fn__44992.invoke(check_features.clj:42)" "query_processor.middleware.optimize_datetime_filters$optimize_datetime_filters$fn__45881.invoke(optimize_datetime_filters.clj:133)" "query_processor.middleware.auto_parse_filter_values$auto_parse_filter_values$fn__43798.invoke(auto_parse_filter_values.clj:44)" "query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__47410.invoke(wrap_value_literals.clj:142)" "query_processor.middleware.annotate$add_column_info$fn__43560.invoke(annotate.clj:574)" "query_processor.middleware.permissions$check_query_permissions$fn__44867.invoke(permissions.clj:64)" "query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__46399.invoke(pre_alias_aggregations.clj:40)" "query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__45065.invoke(cumulative_aggregations.clj:61)" "query_processor.middleware.resolve_joins$resolve_joins$fn__46931.invoke(resolve_joins.clj:183)" 
"query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__39285.invoke(add_implicit_joins.clj:245)" "query_processor.middleware.large_int_id$convert_id_to_string$fn__45677.invoke(large_int_id.clj:44)" "query_processor.middleware.limit$limit$fn__45702.invoke(limit.clj:38)" "query_processor.middleware.format_rows$format_rows$fn__45657.invoke(format_rows.clj:81)" "query_processor.middleware.desugar$desugar$fn__45131.invoke(desugar.clj:22)" "query_processor.middleware.binning$update_binning_strategy$fn__44157.invoke(binning.clj:229)" "query_processor.middleware.resolve_fields$resolve_fields$fn__44673.invoke(resolve_fields.clj:24)" "query_processor.middleware.add_dimension_projections$add_remapping$fn__38834.invoke(add_dimension_projections.clj:318)" "query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__39041.invoke(add_implicit_clauses.clj:141)" "query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__39434.invoke(add_source_metadata.clj:105)" "query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__46596.invoke(reconcile_breakout_and_order_by_bucketing.clj:98)" "query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__43745.invoke(auto_bucket_datetimes.clj:125)" "query_processor.middleware.resolve_source_table$resolve_source_tables$fn__44720.invoke(resolve_source_table.clj:46)" "query_processor.middleware.parameters$substitute_parameters$fn__46381.invoke(parameters.clj:114)" "query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__44772.invoke(resolve_referenced.clj:80)" "query_processor.middleware.expand_macros$expand_macros$fn__45387.invoke(expand_macros.clj:158)" "query_processor.middleware.add_timezone_info$add_timezone_info$fn__39465.invoke(add_timezone_info.clj:15)" 
"query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__47293.invoke(splice_params_in_response.clj:32)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46607$fn__46611.invoke(resolve_database_and_driver.clj:33)" "driver$do_with_driver.invokeStatic(driver.clj:61)" "driver$do_with_driver.invoke(driver.clj:57)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46607.invoke(resolve_database_and_driver.clj:27)" "query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__45605.invoke(fetch_source_query.clj:267)" "query_processor.middleware.store$initialize_store$fn__47302$fn__47303.invoke(store.clj:11)" "query_processor.store$do_with_store.invokeStatic(store.clj:46)" "query_processor.store$do_with_store.invoke(store.clj:40)" "query_processor.middleware.store$initialize_store$fn__47302.invoke(store.clj:10)" "query_processor.middleware.cache$maybe_return_cached_results$fn__44649.invoke(cache.clj:209)" "query_processor.middleware.validate$validate_query$fn__47311.invoke(validate.clj:10)" "query_processor.middleware.normalize_query$normalize$fn__45729.invoke(normalize_query.clj:22)" "query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__39303.invoke(add_rows_truncated.clj:36)" "query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__47278.invoke(results_metadata.clj:147)" "query_processor.middleware.constraints$add_default_userland_constraints$fn__45008.invoke(constraints.clj:42)" "query_processor.middleware.process_userland_query$process_userland_query$fn__46470.invoke(process_userland_query.clj:136)" "query_processor.middleware.catch_exceptions$catch_exceptions$fn__44951.invoke(catch_exceptions.clj:174)" "query_processor.reducible$async_qp$qp_STAR___38097$thunk__38098.invoke(reducible.clj:101)" "query_processor.reducible$async_qp$qp_STAR___38097.invoke(reducible.clj:107)" 
"query_processor.reducible$sync_qp$qp_STAR___38106$fn__38109.invoke(reducible.clj:133)" "query_processor.reducible$sync_qp$qp_STAR___38106.invoke(reducible.clj:132)" "query_processor$process_userland_query.invokeStatic(query_processor.clj:217)" "query_processor$process_userland_query.doInvoke(query_processor.clj:213)" "query_processor$fn__47454$process_query_and_save_execution_BANG___47463$fn__47466.invoke(query_processor.clj:229)" "query_processor$fn__47454$process_query_and_save_execution_BANG___47463.invoke(query_processor.clj:221)" "query_processor$fn__47498$process_query_and_save_with_max_results_constraints_BANG___47507$fn__47510.invoke(query_processor.clj:241)" "query_processor$fn__47498$process_query_and_save_with_max_results_constraints_BANG___47507.invoke(query_processor.clj:234)" "api.dataset$fn__50789$fn__50792.invoke(dataset.clj:55)" "query_processor.streaming$streaming_response_STAR_$fn__35515$fn__35516.invoke(streaming.clj:73)" "query_processor.streaming$streaming_response_STAR_$fn__35515.invoke(streaming.clj:72)" "async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:66)" "async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:64)" "async.streaming_response$do_f_async$fn__23301.invoke(streaming_response.clj:85)"], :context :ad-hoc, :error "class clojure.lang.LazySeq cannot be cast to class clojure.lang.Associative (clojure.lang.LazySeq and clojure.lang.Associative are in unnamed module of loader 'app')", :row_count 0, :running_time 0, :preprocessed {:database 43, :query {:source-table 8102, :fields [[:field-id 286918] [:field-id 286913] [:field-id 286915] [:field-id 286919] [:datetime-field [:field-id 286920] :default] [:field-id 286911] [:field-id 286917] [:field-id 286914] [:field-id 286916] [:field-id 286912]], :limit 2000}, :type :query, :middleware {:js-int-to-string? true, :add-default-userland-constraints? true}, :info {:executed-by 44, :context :ad-hoc, :nested? 
false, :query-hash [123, 33, 19, 84, 111, 42, 17, 19, -89, -54, 124, -125, 98, -71, -7, 107, -62, 103, 40, 52, -24, -42, 75, -46, 23, 78, -98, 61, -14, 74, 21, 62]}, :constraints {:max-results 10000, :max-results-bare-rows 2000}}, :data {:rows [], :cols []}} [2138a10e-70ef-4e9b-8776-47b741f5eaa1] 2020-10-14T20:59:10+02:00 DEBUG metabase.middleware.log POST /api/dataset 202 [ASYNC: completed] 2.7 s (24 DB calls) App DB connections: 0/4 Jetty threads: 2/50 (5 idle, 0 queued) (52 total active threads) Queries in flight: 1 (0 queued) ``` </details> **Information about your Metabase Installation:** Metabase `master` (commit `adcbfbf`), 1.36.6.4 and 0.36.6 - works on 0.36.4 / 1.36.4.1 **Additional context** Somehow the error message seems to be swallowed most of the time, which probably relates to #12625 and #13164 https://discourse.metabase.com/t/database-sync-error-clojure-lang-lazyseq-cannot-be-cast-to-clojure-lang-associative-with-version-0-36-6/11813
1.0
BigQuery cannot show table where `integer` column is set as "Entity Key" or "Foreign Key" - **Describe the bug** When a BigQuery table has a column, which is `INTEGER` (probably related types as well), and the Field Type is set to "Entity Key" or "Foreign Key", then it's not possible to browse the table and will either result in "No results" or sometimes an error. **To Reproduce** 1. Create BigQuery table, where one column is `INTEGER` 2. Admin > Data Model > (BigQuery) > (table) > (integer column) - set Field Type to "Entity Key" 3. Browse Data > (BigQuery) > (table) - it will either show "No results" or sometimes errors with `class clojure.lang.LazySeq cannot be cast to class clojure.lang.Associative (clojure.lang.LazySeq and clojure.lang.Associative are in unnamed module of loader 'app')` (full stacktrace below) 4. Setting Field Type to "No special type" will now show content of the table <details><summary>Full stacktrace</summary> ``` [2138a10e-70ef-4e9b-8776-47b741f5eaa1] 2020-10-14T20:59:10+02:00 ERROR metabase.query-processor.middleware.catch-exceptions Error processing query: null {:database_id 43, :started_at #t "2020-10-14T18:59:07.539881Z[GMT]", :via [{:status :failed, :class clojure.lang.ExceptionInfo, :error "Error reducing result rows", :stacktrace ["--> query_processor.context.default$default_reducef$fn__38011.invoke(default.clj:61)" "query_processor.context.default$default_reducef.invokeStatic(default.clj:58)" "query_processor.context.default$default_reducef.invoke(default.clj:49)" "query_processor.context$reducef.invokeStatic(context.clj:69)" "query_processor.context$reducef.invoke(context.clj:62)" "query_processor.context.default$default_runf$respond_STAR___38015.invoke(default.clj:70)" "driver.bigquery$post_process_native$fn__1424.invoke(bigquery.clj:201)" "driver.bigquery$do_with_finished_response.invokeStatic(bigquery.clj:156)" "driver.bigquery$do_with_finished_response.invoke(bigquery.clj:147)" 
"driver.bigquery$post_process_native.invokeStatic(bigquery.clj:184)" "driver.bigquery$post_process_native.invoke(bigquery.clj:178)" "driver.bigquery$process_native_STAR_$thunk__1518.invoke(bigquery.clj:231)" "driver.bigquery$process_native_STAR_.invokeStatic(bigquery.clj:233)" "driver.bigquery$process_native_STAR_.invoke(bigquery.clj:226)" "driver.bigquery$eval1522$fn__1524.invoke(bigquery.clj:250)" "query_processor.context$executef.invokeStatic(context.clj:59)" "query_processor.context$executef.invoke(context.clj:48)" "query_processor.context.default$default_runf.invokeStatic(default.clj:69)" "query_processor.context.default$default_runf.invoke(default.clj:67)" "query_processor.context$runf.invokeStatic(context.clj:45)" "query_processor.context$runf.invoke(context.clj:39)" "query_processor.reducible$pivot.invokeStatic(reducible.clj:34)" "query_processor.reducible$pivot.invoke(reducible.clj:31)" "query_processor.middleware.mbql_to_native$mbql__GT_native$fn__45716.invoke(mbql_to_native.clj:26)" "query_processor.middleware.check_features$check_features$fn__44992.invoke(check_features.clj:42)" "query_processor.middleware.optimize_datetime_filters$optimize_datetime_filters$fn__45881.invoke(optimize_datetime_filters.clj:133)" "query_processor.middleware.auto_parse_filter_values$auto_parse_filter_values$fn__43798.invoke(auto_parse_filter_values.clj:44)" "query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__47410.invoke(wrap_value_literals.clj:142)" "query_processor.middleware.annotate$add_column_info$fn__43560.invoke(annotate.clj:574)" "query_processor.middleware.permissions$check_query_permissions$fn__44867.invoke(permissions.clj:64)" "query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__46399.invoke(pre_alias_aggregations.clj:40)" "query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__45065.invoke(cumulative_aggregations.clj:61)" 
"query_processor.middleware.resolve_joins$resolve_joins$fn__46931.invoke(resolve_joins.clj:183)" "query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__39285.invoke(add_implicit_joins.clj:245)" "query_processor.middleware.large_int_id$convert_id_to_string$fn__45677.invoke(large_int_id.clj:44)" "query_processor.middleware.limit$limit$fn__45702.invoke(limit.clj:38)" "query_processor.middleware.format_rows$format_rows$fn__45657.invoke(format_rows.clj:81)" "query_processor.middleware.desugar$desugar$fn__45131.invoke(desugar.clj:22)" "query_processor.middleware.binning$update_binning_strategy$fn__44157.invoke(binning.clj:229)" "query_processor.middleware.resolve_fields$resolve_fields$fn__44673.invoke(resolve_fields.clj:24)" "query_processor.middleware.add_dimension_projections$add_remapping$fn__38834.invoke(add_dimension_projections.clj:318)" "query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__39041.invoke(add_implicit_clauses.clj:141)" "query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__39434.invoke(add_source_metadata.clj:105)" "query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__46596.invoke(reconcile_breakout_and_order_by_bucketing.clj:98)" "query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__43745.invoke(auto_bucket_datetimes.clj:125)" "query_processor.middleware.resolve_source_table$resolve_source_tables$fn__44720.invoke(resolve_source_table.clj:46)" "query_processor.middleware.parameters$substitute_parameters$fn__46381.invoke(parameters.clj:114)" "query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__44772.invoke(resolve_referenced.clj:80)" "query_processor.middleware.expand_macros$expand_macros$fn__45387.invoke(expand_macros.clj:158)" "query_processor.middleware.add_timezone_info$add_timezone_info$fn__39465.invoke(add_timezone_info.clj:15)" 
"query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__47293.invoke(splice_params_in_response.clj:32)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46607$fn__46611.invoke(resolve_database_and_driver.clj:33)" "driver$do_with_driver.invokeStatic(driver.clj:61)" "driver$do_with_driver.invoke(driver.clj:57)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46607.invoke(resolve_database_and_driver.clj:27)" "query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__45605.invoke(fetch_source_query.clj:267)" "query_processor.middleware.store$initialize_store$fn__47302$fn__47303.invoke(store.clj:11)" "query_processor.store$do_with_store.invokeStatic(store.clj:46)" "query_processor.store$do_with_store.invoke(store.clj:40)" "query_processor.middleware.store$initialize_store$fn__47302.invoke(store.clj:10)" "query_processor.middleware.cache$maybe_return_cached_results$fn__44649.invoke(cache.clj:209)" "query_processor.middleware.validate$validate_query$fn__47311.invoke(validate.clj:10)" "query_processor.middleware.normalize_query$normalize$fn__45729.invoke(normalize_query.clj:22)" "query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__39303.invoke(add_rows_truncated.clj:36)" "query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__47278.invoke(results_metadata.clj:147)" "query_processor.middleware.constraints$add_default_userland_constraints$fn__45008.invoke(constraints.clj:42)" "query_processor.middleware.process_userland_query$process_userland_query$fn__46470.invoke(process_userland_query.clj:136)" "query_processor.middleware.catch_exceptions$catch_exceptions$fn__44951.invoke(catch_exceptions.clj:174)" "query_processor.reducible$async_qp$qp_STAR___38097$thunk__38098.invoke(reducible.clj:101)" "query_processor.reducible$async_qp$qp_STAR___38097.invoke(reducible.clj:107)" 
"query_processor.reducible$sync_qp$qp_STAR___38106$fn__38109.invoke(reducible.clj:133)" "query_processor.reducible$sync_qp$qp_STAR___38106.invoke(reducible.clj:132)" "query_processor$process_userland_query.invokeStatic(query_processor.clj:217)" "query_processor$process_userland_query.doInvoke(query_processor.clj:213)" "query_processor$fn__47454$process_query_and_save_execution_BANG___47463$fn__47466.invoke(query_processor.clj:229)" "query_processor$fn__47454$process_query_and_save_execution_BANG___47463.invoke(query_processor.clj:221)" "query_processor$fn__47498$process_query_and_save_with_max_results_constraints_BANG___47507$fn__47510.invoke(query_processor.clj:241)" "query_processor$fn__47498$process_query_and_save_with_max_results_constraints_BANG___47507.invoke(query_processor.clj:234)" "api.dataset$fn__50789$fn__50792.invoke(dataset.clj:55)" "query_processor.streaming$streaming_response_STAR_$fn__35515$fn__35516.invoke(streaming.clj:73)" "query_processor.streaming$streaming_response_STAR_$fn__35515.invoke(streaming.clj:72)" "async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:66)" "async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:64)" "async.streaming_response$do_f_async$fn__23301.invoke(streaming_response.clj:85)"], :error_type :qp, :ex-data {:type :qp}}], :error_type :qp, :json_query {:database 43, :query {:source-table 8102}, :type "query", :parameters [], :middleware {:js-int-to-string? true, :add-default-userland-constraints? 
true}}, :native {:query "SELECT `dacort.hacker_news_comments`.`id` AS `id`, `dacort.hacker_news_comments`.`by` AS `by`, `dacort.hacker_news_comments`.`author` AS `author`, `dacort.hacker_news_comments`.`time` AS `time`, `dacort.hacker_news_comments`.`time_ts` AS `time_ts`, `dacort.hacker_news_comments`.`text` AS `text`, `dacort.hacker_news_comments`.`parent` AS `parent`, `dacort.hacker_news_comments`.`deleted` AS `deleted`, `dacort.hacker_news_comments`.`dead` AS `dead`, `dacort.hacker_news_comments`.`ranking` AS `ranking` FROM `dacort.hacker_news_comments` LIMIT 2000", :params nil, :table-name "hacker_news_comments", :mbql? true}, :status :failed, :class java.lang.ClassCastException, :stacktrace ["clojure.lang.RT.assoc(RT.java:827)" "clojure.core$assoc__5416.invokeStatic(core.clj:191)" "clojure.core$update_in$up__6853.invoke(core.clj:6185)" "clojure.core$update_in.invokeStatic(core.clj:6186)" "clojure.core$update_in.doInvoke(core.clj:6172)" "clojure.lang.RestFn.invoke(RestFn.java:445)" "--> query_processor.middleware.large_int_id$result_int__GT_string$fn__45670$fn__45671.invoke(large_int_id.clj:12)" "query_processor.middleware.large_int_id$result_int__GT_string$fn__45670.invoke(large_int_id.clj:12)" "query_processor.context.default$default_reducef$fn__38011.invoke(default.clj:59)" "query_processor.context.default$default_reducef.invokeStatic(default.clj:58)" "query_processor.context.default$default_reducef.invoke(default.clj:49)" "query_processor.context$reducef.invokeStatic(context.clj:69)" "query_processor.context$reducef.invoke(context.clj:62)" "query_processor.context.default$default_runf$respond_STAR___38015.invoke(default.clj:70)" "driver.bigquery$post_process_native$fn__1424.invoke(bigquery.clj:201)" "driver.bigquery$do_with_finished_response.invokeStatic(bigquery.clj:156)" "driver.bigquery$do_with_finished_response.invoke(bigquery.clj:147)" "driver.bigquery$post_process_native.invokeStatic(bigquery.clj:184)" 
"driver.bigquery$post_process_native.invoke(bigquery.clj:178)" "driver.bigquery$process_native_STAR_$thunk__1518.invoke(bigquery.clj:231)" "driver.bigquery$process_native_STAR_.invokeStatic(bigquery.clj:233)" "driver.bigquery$process_native_STAR_.invoke(bigquery.clj:226)" "driver.bigquery$eval1522$fn__1524.invoke(bigquery.clj:250)" "query_processor.context$executef.invokeStatic(context.clj:59)" "query_processor.context$executef.invoke(context.clj:48)" "query_processor.context.default$default_runf.invokeStatic(default.clj:69)" "query_processor.context.default$default_runf.invoke(default.clj:67)" "query_processor.context$runf.invokeStatic(context.clj:45)" "query_processor.context$runf.invoke(context.clj:39)" "query_processor.reducible$pivot.invokeStatic(reducible.clj:34)" "query_processor.reducible$pivot.invoke(reducible.clj:31)" "query_processor.middleware.mbql_to_native$mbql__GT_native$fn__45716.invoke(mbql_to_native.clj:26)" "query_processor.middleware.check_features$check_features$fn__44992.invoke(check_features.clj:42)" "query_processor.middleware.optimize_datetime_filters$optimize_datetime_filters$fn__45881.invoke(optimize_datetime_filters.clj:133)" "query_processor.middleware.auto_parse_filter_values$auto_parse_filter_values$fn__43798.invoke(auto_parse_filter_values.clj:44)" "query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__47410.invoke(wrap_value_literals.clj:142)" "query_processor.middleware.annotate$add_column_info$fn__43560.invoke(annotate.clj:574)" "query_processor.middleware.permissions$check_query_permissions$fn__44867.invoke(permissions.clj:64)" "query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__46399.invoke(pre_alias_aggregations.clj:40)" "query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__45065.invoke(cumulative_aggregations.clj:61)" "query_processor.middleware.resolve_joins$resolve_joins$fn__46931.invoke(resolve_joins.clj:183)" 
"query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__39285.invoke(add_implicit_joins.clj:245)" "query_processor.middleware.large_int_id$convert_id_to_string$fn__45677.invoke(large_int_id.clj:44)" "query_processor.middleware.limit$limit$fn__45702.invoke(limit.clj:38)" "query_processor.middleware.format_rows$format_rows$fn__45657.invoke(format_rows.clj:81)" "query_processor.middleware.desugar$desugar$fn__45131.invoke(desugar.clj:22)" "query_processor.middleware.binning$update_binning_strategy$fn__44157.invoke(binning.clj:229)" "query_processor.middleware.resolve_fields$resolve_fields$fn__44673.invoke(resolve_fields.clj:24)" "query_processor.middleware.add_dimension_projections$add_remapping$fn__38834.invoke(add_dimension_projections.clj:318)" "query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__39041.invoke(add_implicit_clauses.clj:141)" "query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__39434.invoke(add_source_metadata.clj:105)" "query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__46596.invoke(reconcile_breakout_and_order_by_bucketing.clj:98)" "query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__43745.invoke(auto_bucket_datetimes.clj:125)" "query_processor.middleware.resolve_source_table$resolve_source_tables$fn__44720.invoke(resolve_source_table.clj:46)" "query_processor.middleware.parameters$substitute_parameters$fn__46381.invoke(parameters.clj:114)" "query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__44772.invoke(resolve_referenced.clj:80)" "query_processor.middleware.expand_macros$expand_macros$fn__45387.invoke(expand_macros.clj:158)" "query_processor.middleware.add_timezone_info$add_timezone_info$fn__39465.invoke(add_timezone_info.clj:15)" 
"query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__47293.invoke(splice_params_in_response.clj:32)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46607$fn__46611.invoke(resolve_database_and_driver.clj:33)" "driver$do_with_driver.invokeStatic(driver.clj:61)" "driver$do_with_driver.invoke(driver.clj:57)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46607.invoke(resolve_database_and_driver.clj:27)" "query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__45605.invoke(fetch_source_query.clj:267)" "query_processor.middleware.store$initialize_store$fn__47302$fn__47303.invoke(store.clj:11)" "query_processor.store$do_with_store.invokeStatic(store.clj:46)" "query_processor.store$do_with_store.invoke(store.clj:40)" "query_processor.middleware.store$initialize_store$fn__47302.invoke(store.clj:10)" "query_processor.middleware.cache$maybe_return_cached_results$fn__44649.invoke(cache.clj:209)" "query_processor.middleware.validate$validate_query$fn__47311.invoke(validate.clj:10)" "query_processor.middleware.normalize_query$normalize$fn__45729.invoke(normalize_query.clj:22)" "query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__39303.invoke(add_rows_truncated.clj:36)" "query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__47278.invoke(results_metadata.clj:147)" "query_processor.middleware.constraints$add_default_userland_constraints$fn__45008.invoke(constraints.clj:42)" "query_processor.middleware.process_userland_query$process_userland_query$fn__46470.invoke(process_userland_query.clj:136)" "query_processor.middleware.catch_exceptions$catch_exceptions$fn__44951.invoke(catch_exceptions.clj:174)" "query_processor.reducible$async_qp$qp_STAR___38097$thunk__38098.invoke(reducible.clj:101)" "query_processor.reducible$async_qp$qp_STAR___38097.invoke(reducible.clj:107)" 
"query_processor.reducible$sync_qp$qp_STAR___38106$fn__38109.invoke(reducible.clj:133)" "query_processor.reducible$sync_qp$qp_STAR___38106.invoke(reducible.clj:132)" "query_processor$process_userland_query.invokeStatic(query_processor.clj:217)" "query_processor$process_userland_query.doInvoke(query_processor.clj:213)" "query_processor$fn__47454$process_query_and_save_execution_BANG___47463$fn__47466.invoke(query_processor.clj:229)" "query_processor$fn__47454$process_query_and_save_execution_BANG___47463.invoke(query_processor.clj:221)" "query_processor$fn__47498$process_query_and_save_with_max_results_constraints_BANG___47507$fn__47510.invoke(query_processor.clj:241)" "query_processor$fn__47498$process_query_and_save_with_max_results_constraints_BANG___47507.invoke(query_processor.clj:234)" "api.dataset$fn__50789$fn__50792.invoke(dataset.clj:55)" "query_processor.streaming$streaming_response_STAR_$fn__35515$fn__35516.invoke(streaming.clj:73)" "query_processor.streaming$streaming_response_STAR_$fn__35515.invoke(streaming.clj:72)" "async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:66)" "async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:64)" "async.streaming_response$do_f_async$fn__23301.invoke(streaming_response.clj:85)"], :context :ad-hoc, :error "class clojure.lang.LazySeq cannot be cast to class clojure.lang.Associative (clojure.lang.LazySeq and clojure.lang.Associative are in unnamed module of loader 'app')", :row_count 0, :running_time 0, :preprocessed {:database 43, :query {:source-table 8102, :fields [[:field-id 286918] [:field-id 286913] [:field-id 286915] [:field-id 286919] [:datetime-field [:field-id 286920] :default] [:field-id 286911] [:field-id 286917] [:field-id 286914] [:field-id 286916] [:field-id 286912]], :limit 2000}, :type :query, :middleware {:js-int-to-string? true, :add-default-userland-constraints? true}, :info {:executed-by 44, :context :ad-hoc, :nested? 
false, :query-hash [123, 33, 19, 84, 111, 42, 17, 19, -89, -54, 124, -125, 98, -71, -7, 107, -62, 103, 40, 52, -24, -42, 75, -46, 23, 78, -98, 61, -14, 74, 21, 62]}, :constraints {:max-results 10000, :max-results-bare-rows 2000}}, :data {:rows [], :cols []}} [2138a10e-70ef-4e9b-8776-47b741f5eaa1] 2020-10-14T20:59:10+02:00 DEBUG metabase.middleware.log POST /api/dataset 202 [ASYNC: completed] 2.7 s (24 DB calls) App DB connections: 0/4 Jetty threads: 2/50 (5 idle, 0 queued) (52 total active threads) Queries in flight: 1 (0 queued) ``` </details> **Information about your Metabase Installation:** Metabase `master` (commit `adcbfbf`), 1.36.6.4 and 0.36.6 - works on 0.36.4 / 1.36.4.1 **Additional context** Somehow the error message seems to be swallowed most of the time, which probably relates to #12625 and #13164 https://discourse.metabase.com/t/database-sync-error-clojure-lang-lazyseq-cannot-be-cast-to-clojure-lang-associative-with-version-0-36-6/11813
process
bigquery cannot show table where integer column is set as entity key or foreign key describe the bug when a bigquery table has a column which is integer probably related types as well and the field type is set to entity key or foreign key then it s not possible to browse the table and will either result in no results or sometimes an error to reproduce create bigquery table where one column is integer admin data model bigquery table integer column set field type to entity key browse data bigquery table it will either show no results or sometimes errors with class clojure lang lazyseq cannot be cast to class clojure lang associative clojure lang lazyseq and clojure lang associative are in unnamed module of loader app full stacktrace below setting field type to no special type will now show content of the table full stacktrace error metabase query processor middleware catch exceptions error processing query null database id started at t via status failed class clojure lang exceptioninfo error error reducing result rows stacktrace query processor context default default reducef fn invoke default clj query processor context default default reducef invokestatic default clj query processor context default default reducef invoke default clj query processor context reducef invokestatic context clj query processor context reducef invoke context clj query processor context default default runf respond star invoke default clj driver bigquery post process native fn invoke bigquery clj driver bigquery do with finished response invokestatic bigquery clj driver bigquery do with finished response invoke bigquery clj driver bigquery post process native invokestatic bigquery clj driver bigquery post process native invoke bigquery clj driver bigquery process native star thunk invoke bigquery clj driver bigquery process native star invokestatic bigquery clj driver bigquery process native star invoke bigquery clj driver bigquery fn invoke bigquery clj query processor context executef 
invokestatic context clj query processor context executef invoke context clj query processor context default default runf invokestatic default clj query processor context default default runf invoke default clj query processor context runf invokestatic context clj query processor context runf invoke context clj query processor reducible pivot invokestatic reducible clj query processor reducible pivot invoke reducible clj query processor middleware mbql to native mbql gt native fn invoke mbql to native clj query processor middleware check features check features fn invoke check features clj query processor middleware optimize datetime filters optimize datetime filters fn invoke optimize datetime filters clj query processor middleware auto parse filter values auto parse filter values fn invoke auto parse filter values clj query processor middleware wrap value literals wrap value literals fn invoke wrap value literals clj query processor middleware annotate add column info fn invoke annotate clj query processor middleware permissions check query permissions fn invoke permissions clj query processor middleware pre alias aggregations pre alias aggregations fn invoke pre alias aggregations clj query processor middleware cumulative aggregations handle cumulative aggregations fn invoke cumulative aggregations clj query processor middleware resolve joins resolve joins fn invoke resolve joins clj query processor middleware add implicit joins add implicit joins fn invoke add implicit joins clj query processor middleware large int id convert id to string fn invoke large int id clj query processor middleware limit limit fn invoke limit clj query processor middleware format rows format rows fn invoke format rows clj query processor middleware desugar desugar fn invoke desugar clj query processor middleware binning update binning strategy fn invoke binning clj query processor middleware resolve fields resolve fields fn invoke resolve fields clj query processor middleware add 
dimension projections add remapping fn invoke add dimension projections clj query processor middleware add implicit clauses add implicit clauses fn invoke add implicit clauses clj query processor middleware add source metadata add source metadata for source queries fn invoke add source metadata clj query processor middleware reconcile breakout and order by bucketing reconcile breakout and order by bucketing fn invoke reconcile breakout and order by bucketing clj query processor middleware auto bucket datetimes auto bucket datetimes fn invoke auto bucket datetimes clj query processor middleware resolve source table resolve source tables fn invoke resolve source table clj query processor middleware parameters substitute parameters fn invoke parameters clj query processor middleware resolve referenced resolve referenced card resources fn invoke resolve referenced clj query processor middleware expand macros expand macros fn invoke expand macros clj query processor middleware add timezone info add timezone info fn invoke add timezone info clj query processor middleware splice params in response splice params in response fn invoke splice params in response clj query processor middleware resolve database and driver resolve database and driver fn fn invoke resolve database and driver clj driver do with driver invokestatic driver clj driver do with driver invoke driver clj query processor middleware resolve database and driver resolve database and driver fn invoke resolve database and driver clj query processor middleware fetch source query resolve card id source tables fn invoke fetch source query clj query processor middleware store initialize store fn fn invoke store clj query processor store do with store invokestatic store clj query processor store do with store invoke store clj query processor middleware store initialize store fn invoke store clj query processor middleware cache maybe return cached results fn invoke cache clj query processor middleware validate 
validate query fn invoke validate clj query processor middleware normalize query normalize fn invoke normalize query clj query processor middleware add rows truncated add rows truncated fn invoke add rows truncated clj query processor middleware results metadata record and return metadata bang fn invoke results metadata clj query processor middleware constraints add default userland constraints fn invoke constraints clj query processor middleware process userland query process userland query fn invoke process userland query clj query processor middleware catch exceptions catch exceptions fn invoke catch exceptions clj query processor reducible async qp qp star thunk invoke reducible clj query processor reducible async qp qp star invoke reducible clj query processor reducible sync qp qp star fn invoke reducible clj query processor reducible sync qp qp star invoke reducible clj query processor process userland query invokestatic query processor clj query processor process userland query doinvoke query processor clj query processor fn process query and save execution bang fn invoke query processor clj query processor fn process query and save execution bang invoke query processor clj query processor fn process query and save with max results constraints bang fn invoke query processor clj query processor fn process query and save with max results constraints bang invoke query processor clj api dataset fn fn invoke dataset clj query processor streaming streaming response star fn fn invoke streaming clj query processor streaming streaming response star fn invoke streaming clj async streaming response do f star invokestatic streaming response clj async streaming response do f star invoke streaming response clj async streaming response do f async fn invoke streaming response clj error type qp ex data type qp error type qp json query database query source table type query parameters middleware js int to string true add default userland constraints true native query select 
dacort hacker news comments id as id dacort hacker news comments by as by dacort hacker news comments author as author dacort hacker news comments time as time dacort hacker news comments time ts as time ts dacort hacker news comments text as text dacort hacker news comments parent as parent dacort hacker news comments deleted as deleted dacort hacker news comments dead as dead dacort hacker news comments ranking as ranking from dacort hacker news comments limit params nil table name hacker news comments mbql true status failed class java lang classcastexception stacktrace clojure lang rt assoc rt java clojure core assoc invokestatic core clj clojure core update in up invoke core clj clojure core update in invokestatic core clj clojure core update in doinvoke core clj clojure lang restfn invoke restfn java query processor middleware large int id result int gt string fn fn invoke large int id clj query processor middleware large int id result int gt string fn invoke large int id clj query processor context default default reducef fn invoke default clj query processor context default default reducef invokestatic default clj query processor context default default reducef invoke default clj query processor context reducef invokestatic context clj query processor context reducef invoke context clj query processor context default default runf respond star invoke default clj driver bigquery post process native fn invoke bigquery clj driver bigquery do with finished response invokestatic bigquery clj driver bigquery do with finished response invoke bigquery clj driver bigquery post process native invokestatic bigquery clj driver bigquery post process native invoke bigquery clj driver bigquery process native star thunk invoke bigquery clj driver bigquery process native star invokestatic bigquery clj driver bigquery process native star invoke bigquery clj driver bigquery fn invoke bigquery clj query processor context executef invokestatic context clj query processor context 
executef invoke context clj query processor context default default runf invokestatic default clj query processor context default default runf invoke default clj query processor context runf invokestatic context clj query processor context runf invoke context clj query processor reducible pivot invokestatic reducible clj query processor reducible pivot invoke reducible clj query processor middleware mbql to native mbql gt native fn invoke mbql to native clj query processor middleware check features check features fn invoke check features clj query processor middleware optimize datetime filters optimize datetime filters fn invoke optimize datetime filters clj query processor middleware auto parse filter values auto parse filter values fn invoke auto parse filter values clj query processor middleware wrap value literals wrap value literals fn invoke wrap value literals clj query processor middleware annotate add column info fn invoke annotate clj query processor middleware permissions check query permissions fn invoke permissions clj query processor middleware pre alias aggregations pre alias aggregations fn invoke pre alias aggregations clj query processor middleware cumulative aggregations handle cumulative aggregations fn invoke cumulative aggregations clj query processor middleware resolve joins resolve joins fn invoke resolve joins clj query processor middleware add implicit joins add implicit joins fn invoke add implicit joins clj query processor middleware large int id convert id to string fn invoke large int id clj query processor middleware limit limit fn invoke limit clj query processor middleware format rows format rows fn invoke format rows clj query processor middleware desugar desugar fn invoke desugar clj query processor middleware binning update binning strategy fn invoke binning clj query processor middleware resolve fields resolve fields fn invoke resolve fields clj query processor middleware add dimension projections add remapping fn invoke add 
dimension projections clj query processor middleware add implicit clauses add implicit clauses fn invoke add implicit clauses clj query processor middleware add source metadata add source metadata for source queries fn invoke add source metadata clj query processor middleware reconcile breakout and order by bucketing reconcile breakout and order by bucketing fn invoke reconcile breakout and order by bucketing clj query processor middleware auto bucket datetimes auto bucket datetimes fn invoke auto bucket datetimes clj query processor middleware resolve source table resolve source tables fn invoke resolve source table clj query processor middleware parameters substitute parameters fn invoke parameters clj query processor middleware resolve referenced resolve referenced card resources fn invoke resolve referenced clj query processor middleware expand macros expand macros fn invoke expand macros clj query processor middleware add timezone info add timezone info fn invoke add timezone info clj query processor middleware splice params in response splice params in response fn invoke splice params in response clj query processor middleware resolve database and driver resolve database and driver fn fn invoke resolve database and driver clj driver do with driver invokestatic driver clj driver do with driver invoke driver clj query processor middleware resolve database and driver resolve database and driver fn invoke resolve database and driver clj query processor middleware fetch source query resolve card id source tables fn invoke fetch source query clj query processor middleware store initialize store fn fn invoke store clj query processor store do with store invokestatic store clj query processor store do with store invoke store clj query processor middleware store initialize store fn invoke store clj query processor middleware cache maybe return cached results fn invoke cache clj query processor middleware validate validate query fn invoke validate clj query processor 
middleware normalize query normalize fn invoke normalize query clj query processor middleware add rows truncated add rows truncated fn invoke add rows truncated clj query processor middleware results metadata record and return metadata bang fn invoke results metadata clj query processor middleware constraints add default userland constraints fn invoke constraints clj query processor middleware process userland query process userland query fn invoke process userland query clj query processor middleware catch exceptions catch exceptions fn invoke catch exceptions clj query processor reducible async qp qp star thunk invoke reducible clj query processor reducible async qp qp star invoke reducible clj query processor reducible sync qp qp star fn invoke reducible clj query processor reducible sync qp qp star invoke reducible clj query processor process userland query invokestatic query processor clj query processor process userland query doinvoke query processor clj query processor fn process query and save execution bang fn invoke query processor clj query processor fn process query and save execution bang invoke query processor clj query processor fn process query and save with max results constraints bang fn invoke query processor clj query processor fn process query and save with max results constraints bang invoke query processor clj api dataset fn fn invoke dataset clj query processor streaming streaming response star fn fn invoke streaming clj query processor streaming streaming response star fn invoke streaming clj async streaming response do f star invokestatic streaming response clj async streaming response do f star invoke streaming response clj async streaming response do f async fn invoke streaming response clj context ad hoc error class clojure lang lazyseq cannot be cast to class clojure lang associative clojure lang lazyseq and clojure lang associative are in unnamed module of loader app row count running time preprocessed database query source table 
fields default limit type query middleware js int to string true add default userland constraints true info executed by context ad hoc nested false query hash constraints max results max results bare rows data rows cols debug metabase middleware log post api dataset s db calls app db connections jetty threads idle queued total active threads queries in flight queued information about your metabase installation metabase master commit adcbfbf and works on additional context somehow the error message seems to be swallowed most of the time which probably relates to and
1
143,374
22,036,658,724
IssuesEvent
2022-05-28 17:44:26
exostin/ExpiryDate
https://api.github.com/repos/exostin/ExpiryDate
closed
Thumbnails for every character in the game (cropped from the originals)
Experience design
We need a cropped png image at 1:1 scale as a character thumbnail (like in LoL) for every character in the game (both the player and the bosses)
1.0
Thumbnails for every character in the game (cropped from the originals) - We need a cropped png image at 1:1 scale as a character thumbnail (like in LoL) for every character in the game (both the player and the bosses)
non_process
thumbnails for every character in the game cropped from the originals we need a cropped png image at scale as a character thumbnail like in lol for every character in the game both the player and the bosses
0
93,406
15,886,057,680
IssuesEvent
2021-04-09 21:45:55
garymsegal-ws-org/dev-example-places
https://api.github.com/repos/garymsegal-ws-org/dev-example-places
opened
CVE-2021-25329 (High) detected in tomcat-embed-core-9.0.35.jar, tomcat-embed-core-9.0.36.jar
security vulnerability
## CVE-2021-25329 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tomcat-embed-core-9.0.35.jar</b>, <b>tomcat-embed-core-9.0.36.jar</b></p></summary> <p> <details><summary><b>tomcat-embed-core-9.0.35.jar</b></p></summary> <p>Core Tomcat implementation</p> <p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p> <p>Path to dependency file: dev-example-places/api/r2dbc/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.35/tomcat-embed-core-9.0.35.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.3.0.RELEASE.jar (Root Library) - spring-boot-starter-tomcat-2.3.0.RELEASE.jar - :x: **tomcat-embed-core-9.0.35.jar** (Vulnerable Library) </details> <details><summary><b>tomcat-embed-core-9.0.36.jar</b></p></summary> <p>Core Tomcat implementation</p> <p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p> <p>Path to dependency file: dev-example-places/api/jdbc/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.36/tomcat-embed-core-9.0.36.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.3.1.RELEASE.jar (Root Library) - spring-boot-starter-tomcat-2.3.1.RELEASE.jar - :x: **tomcat-embed-core-9.0.36.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/garymsegal-ws-org/dev-example-places/commit/14a29ec1a84abf2ff445ea8ee791bfbd0aa81b6f">14a29ec1a84abf2ff445ea8ee791bfbd0aa81b6f</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The fix for CVE-2020-9484 was incomplete. 
When using Apache Tomcat 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41, 8.5.0 to 8.5.61 or 7.0.0. to 7.0.107 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9494. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue. <p>Publish Date: 2021-03-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329>CVE-2021-25329</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E</a></p> <p>Release Date: 2021-03-01</p> <p>Fix Resolution: org.apache.tomcat:tomcat:7.0.108, org.apache.tomcat:tomcat:8.5.63, org.apache.tomcat:tomcat:9.0.43,org.apache.tomcat:tomcat:10.0.2</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"9.0.35","packageFilePaths":["/api/r2dbc/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.3.0.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.3.0.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:9.0.35","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat:tomcat:7.0.108, org.apache.tomcat:tomcat:8.5.63, org.apache.tomcat:tomcat:9.0.43,org.apache.tomcat:tomcat:10.0.2"},{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"9.0.36","packageFilePaths":["/api/jdbc/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.3.1.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.3.1.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:9.0.36","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat:tomcat:7.0.108, org.apache.tomcat:tomcat:8.5.63, 
org.apache.tomcat:tomcat:9.0.43,org.apache.tomcat:tomcat:10.0.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-25329","vulnerabilityDetails":"The fix for CVE-2020-9484 was incomplete. When using Apache Tomcat 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41, 8.5.0 to 8.5.61 or 7.0.0. to 7.0.107 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9494. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329","cvss3Severity":"high","cvss3Score":"7.0","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-25329 (High) detected in tomcat-embed-core-9.0.35.jar, tomcat-embed-core-9.0.36.jar - ## CVE-2021-25329 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tomcat-embed-core-9.0.35.jar</b>, <b>tomcat-embed-core-9.0.36.jar</b></p></summary> <p> <details><summary><b>tomcat-embed-core-9.0.35.jar</b></p></summary> <p>Core Tomcat implementation</p> <p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p> <p>Path to dependency file: dev-example-places/api/r2dbc/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.35/tomcat-embed-core-9.0.35.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.3.0.RELEASE.jar (Root Library) - spring-boot-starter-tomcat-2.3.0.RELEASE.jar - :x: **tomcat-embed-core-9.0.35.jar** (Vulnerable Library) </details> <details><summary><b>tomcat-embed-core-9.0.36.jar</b></p></summary> <p>Core Tomcat implementation</p> <p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p> <p>Path to dependency file: dev-example-places/api/jdbc/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.36/tomcat-embed-core-9.0.36.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.3.1.RELEASE.jar (Root Library) - spring-boot-starter-tomcat-2.3.1.RELEASE.jar - :x: **tomcat-embed-core-9.0.36.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/garymsegal-ws-org/dev-example-places/commit/14a29ec1a84abf2ff445ea8ee791bfbd0aa81b6f">14a29ec1a84abf2ff445ea8ee791bfbd0aa81b6f</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability 
Details</summary> <p> The fix for CVE-2020-9484 was incomplete. When using Apache Tomcat 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41, 8.5.0 to 8.5.61 or 7.0.0. to 7.0.107 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9494. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue. <p>Publish Date: 2021-03-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329>CVE-2021-25329</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E</a></p> <p>Release Date: 2021-03-01</p> <p>Fix Resolution: org.apache.tomcat:tomcat:7.0.108, org.apache.tomcat:tomcat:8.5.63, org.apache.tomcat:tomcat:9.0.43,org.apache.tomcat:tomcat:10.0.2</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"9.0.35","packageFilePaths":["/api/r2dbc/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.3.0.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.3.0.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:9.0.35","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat:tomcat:7.0.108, org.apache.tomcat:tomcat:8.5.63, org.apache.tomcat:tomcat:9.0.43,org.apache.tomcat:tomcat:10.0.2"},{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"9.0.36","packageFilePaths":["/api/jdbc/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.3.1.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.3.1.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:9.0.36","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat:tomcat:7.0.108, org.apache.tomcat:tomcat:8.5.63, 
org.apache.tomcat:tomcat:9.0.43,org.apache.tomcat:tomcat:10.0.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-25329","vulnerabilityDetails":"The fix for CVE-2020-9484 was incomplete. When using Apache Tomcat 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41, 8.5.0 to 8.5.61 or 7.0.0. to 7.0.107 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9494. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329","cvss3Severity":"high","cvss3Score":"7.0","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in tomcat embed core jar tomcat embed core jar cve high severity vulnerability vulnerable libraries tomcat embed core jar tomcat embed core jar tomcat embed core jar core tomcat implementation library home page a href path to dependency file dev example places api pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file dev example places api jdbc pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details the fix for cve was incomplete when using apache tomcat to to to or to with a configuration edge case that was highly unlikely to be used the tomcat instance was still vulnerable to cve note that both the previously published prerequisites for cve and the previously published mitigations for cve also apply to this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat tomcat org apache tomcat tomcat org apache tomcat tomcat org apache tomcat tomcat isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org 
springframework boot spring boot starter web release org springframework boot spring boot starter tomcat release org apache tomcat embed tomcat embed core isminimumfixversionavailable true minimumfixversion org apache tomcat tomcat org apache tomcat tomcat org apache tomcat tomcat org apache tomcat tomcat packagetype java groupid org apache tomcat embed packagename tomcat embed core packageversion packagefilepaths istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework boot spring boot starter tomcat release org apache tomcat embed tomcat embed core isminimumfixversionavailable true minimumfixversion org apache tomcat tomcat org apache tomcat tomcat org apache tomcat tomcat org apache tomcat tomcat basebranches vulnerabilityidentifier cve vulnerabilitydetails the fix for cve was incomplete when using apache tomcat to to to or to with a configuration edge case that was highly unlikely to be used the tomcat instance was still vulnerable to cve note that both the previously published prerequisites for cve and the previously published mitigations for cve also apply to this issue vulnerabilityurl
0
5,407
8,235,397,097
IssuesEvent
2018-09-09 03:37:21
Microsoft/LightGBM
https://api.github.com/repos/Microsoft/LightGBM
closed
[R-Package] Horizontal whitespace in R source files
in-process
There is a bunch of miscellaneous trailing whitespace in the R source files. This causes distracting line diffs for those who have editors set to squash it. See discussion in #1635 . Completing this issue means going through all the R files and removing that whitespace.
1.0
[R-Package] Horizontal whitespace in R source files - There is a bunch of miscellaneous trailing whitespace in the R source files. This causes distracting line diffs for those who have editors set to squash it. See discussion in #1635 . Completing this issue means going through all the R files and removing that whitespace.
process
horizontal whitespace in r source files there is a bunch of miscellaneous trailing whitespace in the r source files this causes distracting line diffs for those who have editors set to squash it see discussion in completing this issue means going through all the r files and removing that whitespace
1
79,930
10,145,929,948
IssuesEvent
2019-08-05 06:38:18
ActoSoft/buntiAppMobile
https://api.github.com/repos/ActoSoft/buntiAppMobile
closed
Main Activity UI
documentation enhancement
UI mockup -> https://drive.google.com/file/d/1N0XxzVAKyVxliGpbqeBNglbUZBbrvjiv/view?usp=sharing As in the other UI issues, document the views used in the UI and its id
1.0
Main Activity UI - UI mockup -> https://drive.google.com/file/d/1N0XxzVAKyVxliGpbqeBNglbUZBbrvjiv/view?usp=sharing As in the other UI issues, document the views used in the UI and its id
non_process
main activity ui ui mockup as in the other ui issues document the views used in the ui and its id
0
185
2,519,739,871
IssuesEvent
2015-01-18 09:00:49
mbunkus/mtx-trac-import-test
https://api.github.com/repos/mbunkus/mtx-trac-import-test
opened
does not understand mkv files written by latest VDubMod
C: mkvmerge P: normal R: fixed T: defect
**Reported by moritz on 6 Aug 2003 10:33 UTC** Mike Matsnev <mike@po.cs.msu.su> wrote: I noticed a problem with mkvmerge, it can't read mkv files created by latest VDubMod 1.5.4.1 (build 2066). I reproduced a problem with both converted old avi files created via direct stream copy and using a new clip created from scratch using this avs script: ==== blank.avs BlankClip(length=10,width=320,height=240) ==== I compressed it with divx 5.0.5 and saved as an mkv file from VDM. When trying to read the file with mkvmerge the following error occurs: ==== C:\>mkvmerge -i test1.mkv Error: Demultiplexer failed to initialize: matroska_reader: Failed to read the headers. ==== This is really annoying because VDM does not have a knob to adjust aspect ratio of video streams, and the shell extension also fails to set that info with an error message. The trouble is probably caused by VDM writing wrong headers, however MPC 6.4.5.9 plays such files without any problem, and mkvinfo does not show anything abnormal: ==== + EBML head + Segment |+ Segment information | + Unknown element: EBMLCrc32 | + Muxing application: libebml-0.5.0 & libmatroska-0.5.0 | + Writing application: VirtualDubMod 1.5.4.1 (build 2066/release) | + Segment UID: 0x4d 0x1b 0xa7 0x8d 0xe1 0xfd 0xf0 0x7e 0xf8 0x23 0xfc 0x11 0xd9 0xad 0xec 0x56 | + Duration: 0.417s | + Date: Mon Aug 04 19:29:36 2003 UTC |+ Seek head (subentries will be skipped) |+ EbmlVoid |+ Segment tracks | + Unknown element: EBMLCrc32 | + A track | + Track number: 1 | + Track UID: 2376538957 | + Track type: video | + MinCache: 1 | + Codec ID: V_MS/VFW/FOURCC | + CodecPrivate, length 40 (FourCC: DX50, 0x30355844) | + Lacing flag: 0 | + Default duration: 41.667ms (24.000 fps for a video track) | + Video track | + Pixel width: 320 | + Pixel height: 240 | + Display width: 320 | + Display height: 240 | + Unknown element: VideoDisplayUnit | + A track | + Track number: 2 | + Track UID: 2129722849 | + Track type: audio | + Codec ID: A_MS/ACM | + 
CodecPrivate, length 20 (format tag: 0x0001) | + Audio track | + Sampling frequency: 44100.000000 | + Bit depth: 16 |+ Cluster ==== If you need any additional info, please tell me. I also attached a small clip (38KB) that exibits such a problem. /Mike Migrated-From: https://trac.bunkus.org/ticket/5
1.0
does not understand mkv files written by latest VDubMod - **Reported by moritz on 6 Aug 2003 10:33 UTC** Mike Matsnev <mike@po.cs.msu.su> wrote: I noticed a problem with mkvmerge, it can't read mkv files created by latest VDubMod 1.5.4.1 (build 2066). I reproduced a problem with both converted old avi files created via direct stream copy and using a new clip created from scratch using this avs script: ==== blank.avs BlankClip(length=10,width=320,height=240) ==== I compressed it with divx 5.0.5 and saved as an mkv file from VDM. When trying to read the file with mkvmerge the following error occurs: ==== C:\>mkvmerge -i test1.mkv Error: Demultiplexer failed to initialize: matroska_reader: Failed to read the headers. ==== This is really annoying because VDM does not have a knob to adjust aspect ratio of video streams, and the shell extension also fails to set that info with an error message. The trouble is probably caused by VDM writing wrong headers, however MPC 6.4.5.9 plays such files without any problem, and mkvinfo does not show anything abnormal: ==== + EBML head + Segment |+ Segment information | + Unknown element: EBMLCrc32 | + Muxing application: libebml-0.5.0 & libmatroska-0.5.0 | + Writing application: VirtualDubMod 1.5.4.1 (build 2066/release) | + Segment UID: 0x4d 0x1b 0xa7 0x8d 0xe1 0xfd 0xf0 0x7e 0xf8 0x23 0xfc 0x11 0xd9 0xad 0xec 0x56 | + Duration: 0.417s | + Date: Mon Aug 04 19:29:36 2003 UTC |+ Seek head (subentries will be skipped) |+ EbmlVoid |+ Segment tracks | + Unknown element: EBMLCrc32 | + A track | + Track number: 1 | + Track UID: 2376538957 | + Track type: video | + MinCache: 1 | + Codec ID: V_MS/VFW/FOURCC | + CodecPrivate, length 40 (FourCC: DX50, 0x30355844) | + Lacing flag: 0 | + Default duration: 41.667ms (24.000 fps for a video track) | + Video track | + Pixel width: 320 | + Pixel height: 240 | + Display width: 320 | + Display height: 240 | + Unknown element: VideoDisplayUnit | + A track | + Track number: 2 | + Track UID: 2129722849 | 
+ Track type: audio | + Codec ID: A_MS/ACM | + CodecPrivate, length 20 (format tag: 0x0001) | + Audio track | + Sampling frequency: 44100.000000 | + Bit depth: 16 |+ Cluster ==== If you need any additional info, please tell me. I also attached a small clip (38KB) that exibits such a problem. /Mike Migrated-From: https://trac.bunkus.org/ticket/5
non_process
does not understand mkv files written by latest vdubmod reported by moritz on aug utc mike matsnev wrote i noticed a problem with mkvmerge it can t read mkv files created by latest vdubmod build i reproduced a problem with both converted old avi files created via direct stream copy and using a new clip created from scratch using this avs script blank avs blankclip length width height i compressed it with divx and saved as an mkv file from vdm when trying to read the file with mkvmerge the following error occurs c mkvmerge i mkv error demultiplexer failed to initialize matroska reader failed to read the headers this is really annoying because vdm does not have a knob to adjust aspect ratio of video streams and the shell extension also fails to set that info with an error message the trouble is probably caused by vdm writing wrong headers however mpc plays such files without any problem and mkvinfo does not show anything abnormal ebml head segment segment information unknown element muxing application libebml libmatroska writing application virtualdubmod build release segment uid duration date mon aug utc seek head subentries will be skipped ebmlvoid segment tracks unknown element a track track number track uid track type video mincache codec id v ms vfw fourcc codecprivate length fourcc lacing flag default duration fps for a video track video track pixel width pixel height display width display height unknown element videodisplayunit a track track number track uid track type audio codec id a ms acm codecprivate length format tag audio track sampling frequency bit depth cluster if you need any additional info please tell me i also attached a small clip that exibits such a problem mike migrated from
0
275,761
20,956,998,668
IssuesEvent
2022-03-27 08:20:27
paulyang0125/PasswordManager
https://api.github.com/repos/paulyang0125/PasswordManager
closed
Add README.md, gitingore, LICENSSE for the project initialization
documentation
Is your feature request related to a problem? Please describe. GITHUB project initialization Describe the solution you'd like Create the relevant project files Additional context N/A
1.0
Add README.md, gitingore, LICENSSE for the project initialization - Is your feature request related to a problem? Please describe. GITHUB project initialization Describe the solution you'd like Create the relevant project files Additional context N/A
non_process
add readme md gitingore licensse for the project initialization is your feature request related to a problem please describe github project initialization describe the solution you d like create the relevant project files additional context n a
0
8,677
11,810,153,302
IssuesEvent
2020-03-19 16:02:31
Polymer/lit-html
https://api.github.com/repos/Polymer/lit-html
closed
lit-html 1.2.0 Release Plan
Release Process
# 1.2.0 Release Plan Release Plan that outlines the focused tasks for lit-html 1.2.0. [Milestone](https://github.com/Polymer/lit-html/milestone/14) ## Notable Changes - Enhancements - [x] [Add templateContent() directive](https://github.com/Polymer/lit-html/pull/1058) - [x] [Add unsafeSVG for rendering SVG snippets](https://github.com/Polymer/lit-html/pull/1000) - [x] [Add live() directive](https://github.com/Polymer/lit-html/pull/1057) - [x] [Add sanitization system for preventing XSS attacks](https://github.com/Polymer/lit-html/pull/1031) - Bug Fixes - [x] [classMap directive does not work with SVG](https://github.com/Polymer/lit-html/pull/1070) - [x] [Support namespaced attributes](https://github.com/Polymer/lit-html/issues/423) - Infrastructure - [x] [Port docs site to eleventy](https://github.com/Polymer/lit-html/pull/1076) - [x] [Long Build Time for Docs](https://github.com/Polymer/lit-html/issues/1049) ## Documentation - [x] [Document live() directive](https://github.com/Polymer/lit-html/issues/1063) - [x] [Document templateContent() directive](https://github.com/Polymer/lit-html/issues/1067) - [x] [Document unsafeSVG() directive](https://github.com/Polymer/lit-html/issues/1068) ## Release Process - [x] [Run Google Internal Tests](https://github.com/Polymer/lit-html/issues/1085) - [x] [1.2.0 Release Notes](https://github.com/Polymer/lit-html/issues/1086) - [x] [Verify 1.2.0 CHANGELOG](https://github.com/Polymer/lit-html/issues/1087) - [ ] [Prelease](https://github.com/Polymer/lit-html/issues/1115)
1.0
lit-html 1.2.0 Release Plan - # 1.2.0 Release Plan Release Plan that outlines the focused tasks for lit-html 1.2.0. [Milestone](https://github.com/Polymer/lit-html/milestone/14) ## Notable Changes - Enhancements - [x] [Add templateContent() directive](https://github.com/Polymer/lit-html/pull/1058) - [x] [Add unsafeSVG for rendering SVG snippets](https://github.com/Polymer/lit-html/pull/1000) - [x] [Add live() directive](https://github.com/Polymer/lit-html/pull/1057) - [x] [Add sanitization system for preventing XSS attacks](https://github.com/Polymer/lit-html/pull/1031) - Bug Fixes - [x] [classMap directive does not work with SVG](https://github.com/Polymer/lit-html/pull/1070) - [x] [Support namespaced attributes](https://github.com/Polymer/lit-html/issues/423) - Infrastructure - [x] [Port docs site to eleventy](https://github.com/Polymer/lit-html/pull/1076) - [x] [Long Build Time for Docs](https://github.com/Polymer/lit-html/issues/1049) ## Documentation - [x] [Document live() directive](https://github.com/Polymer/lit-html/issues/1063) - [x] [Document templateContent() directive](https://github.com/Polymer/lit-html/issues/1067) - [x] [Document unsafeSVG() directive](https://github.com/Polymer/lit-html/issues/1068) ## Release Process - [x] [Run Google Internal Tests](https://github.com/Polymer/lit-html/issues/1085) - [x] [1.2.0 Release Notes](https://github.com/Polymer/lit-html/issues/1086) - [x] [Verify 1.2.0 CHANGELOG](https://github.com/Polymer/lit-html/issues/1087) - [ ] [Prelease](https://github.com/Polymer/lit-html/issues/1115)
process
lit html release plan release plan release plan that outlines the focused tasks for lit html notable changes enhancements bug fixes infrastructure documentation release process
1
10,596
13,423,845,648
IssuesEvent
2020-09-06 01:08:13
kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines
closed
persistent-agent service account should not bind to the cluster-admin role
area/deployment/kubeflow kind/process status/triaged
### What steps did you take: Per https://github.com/kubeflow/manifests/blob/master/pipeline/persistent-agent/base/clusterrole-binding.yaml The role binding binds service account to cluster-admin instead of the created cluster role.
1.0
persistent-agent service account should not bind to the cluster-admin role - ### What steps did you take: Per https://github.com/kubeflow/manifests/blob/master/pipeline/persistent-agent/base/clusterrole-binding.yaml The role binding binds service account to cluster-admin instead of the created cluster role.
process
persistent agent service account should not bind to the cluster admin role what steps did you take per the role binding binds service account to cluster admin instead of the created cluster role
1
18,999
24,994,910,833
IssuesEvent
2022-11-02 22:39:39
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
parentage: `chromosome segregation`
cell cycle and DNA processes parent relationship query
It seems odd that `chromosome separation (GO:0051304)` has these SubClass Of relationships: * is_a `cell cycle process` * 'part of' some 'chromosome segregation' While `chromosome segregation (GO:0007059)` only has this SubClass Of relationship: * is_a `cellular process` I would think that `chromosome segregation` should also be is_a `cell cycle process` @ValWood - any thoughts here?
1.0
parentage: `chromosome segregation` - It seems odd that `chromosome separation (GO:0051304)` has these SubClass Of relationships: * is_a `cell cycle process` * 'part of' some 'chromosome segregation' While `chromosome segregation (GO:0007059)` only has this SubClass Of relationship: * is_a `cellular process` I would think that `chromosome segregation` should also be is_a `cell cycle process` @ValWood - any thoughts here?
process
parentage chromosome segregation it seems odd that chromosome separation go has these subclass of relationships is a cell cycle process part of some chromosome segregation while chromosome segregation go only has this subclass of relationship is a cellular process i would think that chromosome segregation should also be is a cell cycle process valwood any thoughts here
1
11,450
14,270,709,322
IssuesEvent
2020-11-21 08:35:44
q191201771/lal
https://api.github.com/repos/q191201771/lal
closed
How to configure webhooks?
#Feature *In process *Indefinite delay
Hi, How do we configure webhooks like on_publish on_play etc? Thank You
1.0
How to configure webhooks? - Hi, How do we configure webhooks like on_publish on_play etc? Thank You
process
how to configure webhooks hi how do we configure webhooks like on publish on play etc thank you
1
354,735
10,571,548,949
IssuesEvent
2019-10-07 07:27:18
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.foodnetwork.com - video or audio doesn't play
browser-fenix engine-gecko priority-normal
<!-- @browser: Firefox Mobile 70.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:70.0) Gecko/70.0 Firefox/70.0 --> <!-- @reported_with: --> <!-- @extra_labels: browser-fenix --> **URL**: https://www.foodnetwork.com/shows/good-eats/episodes/the-mayo-clinic **Browser / Version**: Firefox Mobile 70.0 **Operating System**: Android **Tested Another Browser**: Unknown **Problem type**: Video or audio doesn't play **Description**: The ad plays, and that's it. Thanks for making free money, Scripps. **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❀_
1.0
www.foodnetwork.com - video or audio doesn't play - <!-- @browser: Firefox Mobile 70.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:70.0) Gecko/70.0 Firefox/70.0 --> <!-- @reported_with: --> <!-- @extra_labels: browser-fenix --> **URL**: https://www.foodnetwork.com/shows/good-eats/episodes/the-mayo-clinic **Browser / Version**: Firefox Mobile 70.0 **Operating System**: Android **Tested Another Browser**: Unknown **Problem type**: Video or audio doesn't play **Description**: The ad plays, and that's it. Thanks for making free money, Scripps. **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❀_
non_process
video or audio doesn t play url browser version firefox mobile operating system android tested another browser unknown problem type video or audio doesn t play description the ad plays and that s it thanks for making free money scripps steps to reproduce browser configuration none from with ❀
0
270,048
28,960,390,593
IssuesEvent
2023-05-10 01:38:02
praneethpanasala/linux
https://api.github.com/repos/praneethpanasala/linux
reopened
CVE-2021-41864 (High) detected in linuxlinux-4.19.6
Mend: dependency security vulnerability
## CVE-2021-41864 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.6</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://api.github.com/repos/praneethpanasala/linux/commits/d80c4f847c91020292cb280132b15e2ea147f1a3">d80c4f847c91020292cb280132b15e2ea147f1a3</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/stackmap.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/stackmap.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> prealloc_elems_and_freelist in kernel/bpf/stackmap.c in the Linux kernel before 5.14.12 allows unprivileged users to trigger an eBPF multiplication integer overflow with a resultant out-of-bounds write. 
<p>Publish Date: 2021-10-02 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-41864>CVE-2021-41864</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-41864">https://www.linuxkernelcves.com/cves/CVE-2021-41864</a></p> <p>Release Date: 2021-10-02</p> <p>Fix Resolution: v5.4.153, v5.10.73, v5.14.12</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-41864 (High) detected in linuxlinux-4.19.6 - ## CVE-2021-41864 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.6</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://api.github.com/repos/praneethpanasala/linux/commits/d80c4f847c91020292cb280132b15e2ea147f1a3">d80c4f847c91020292cb280132b15e2ea147f1a3</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/stackmap.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/bpf/stackmap.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> prealloc_elems_and_freelist in kernel/bpf/stackmap.c in the Linux kernel before 5.14.12 allows unprivileged users to trigger an eBPF multiplication integer overflow with a resultant out-of-bounds write. 
<p>Publish Date: 2021-10-02 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-41864>CVE-2021-41864</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-41864">https://www.linuxkernelcves.com/cves/CVE-2021-41864</a></p> <p>Release Date: 2021-10-02</p> <p>Fix Resolution: v5.4.153, v5.10.73, v5.14.12</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files kernel bpf stackmap c kernel bpf stackmap c vulnerability details prealloc elems and freelist in kernel bpf stackmap c in the linux kernel before allows unprivileged users to trigger an ebpf multiplication integer overflow with a resultant out of bounds write publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
633,030
20,242,564,562
IssuesEvent
2022-02-14 10:38:36
kubeapps/kubeapps
https://api.github.com/repos/kubeapps/kubeapps
closed
Remaining work for running Kubeapps with Carvel
component/dashboard priority/high size/L kind/refactor component/apis-server
### Description: After https://github.com/kubeapps/kubeapps/issues/3221 we got initial and limited support for managing Carvel bundles in Kubeapps. However, there is still a (growing) bunch of pending things before we are able to announce "Carvel support in Kubeapps". _Whereas trivial tasks will be tracked here (marked checkbox means done), others requiring further effort or discussions are split out into separate issues_ #### UI improvements - [x] Check everything works fine working in different namespaces 1. Case 1: available package in A; install it in A; available package in A; install it in B 2. Case 2: get installed packages in A; get installed packages in every namespace - [x] #3852 - [x] #3850 - [x] https://github.com/kubeapps/kubeapps/pull/4138 - [x] https://github.com/kubeapps/kubeapps/pull/4144 #### UX improvements - [x] #3853 - [x] https://github.com/kubeapps/kubeapps/pull/4050 - [x] https://github.com/kubeapps/kubeapps/pull/4100 #### Backend improvements - [x] https://github.com/kubeapps/kubeapps/pull/4068 - [x] https://github.com/kubeapps/kubeapps/pull/4060 - [x] https://github.com/kubeapps/kubeapps/pull/4067 - [x] https://github.com/kubeapps/kubeapps/pull/4085 - [x] https://github.com/kubeapps/kubeapps/pull/4059 - [x] https://github.com/kubeapps/kubeapps/pull/4057 - [x] https://github.com/kubeapps/kubeapps/pull/4074 - [x] https://github.com/kubeapps/kubeapps/pull/4139 - [x] https://github.com/kubeapps/kubeapps/pull/4165 - [x] https://github.com/kubeapps/kubeapps/pull/4192 - [x] https://github.com/kubeapps/kubeapps/pull/4222 #### Response time issues - [x] https://github.com/kubeapps/kubeapps/pull/4069 See also #4099 #### Docs/CI improvements - [x] https://github.com/kubeapps/kubeapps/pull/4127 See also #4093 #### Open questions 1. Think about error handling: aggressive early error return vs ignoring errors (ie: a single failure in a package should break/stop the whole core API call?) 2. Should we default to globalCluster if none is specified? Or just throw an error? 3. Should we implement multicluster support here?
1.0
Remaining work for running Kubeapps with Carvel - ### Description: After https://github.com/kubeapps/kubeapps/issues/3221 we got initial and limited support for managing Carvel bundles in Kubeapps. However, there is still a (growing) bunch of pending things before we are able to announce "Carvel support in Kubeapps". _Whereas trivial tasks will be tracked here (marked checkbox means done), others requiring further effort or discussions are split out into separate issues_ #### UI improvements - [x] Check everything works fine working in different namespaces 1. Case 1: available package in A; install it in A; available package in A; install it in B 2. Case 2: get installed packages in A; get installed packages in every namespace - [x] #3852 - [x] #3850 - [x] https://github.com/kubeapps/kubeapps/pull/4138 - [x] https://github.com/kubeapps/kubeapps/pull/4144 #### UX improvements - [x] #3853 - [x] https://github.com/kubeapps/kubeapps/pull/4050 - [x] https://github.com/kubeapps/kubeapps/pull/4100 #### Backend improvements - [x] https://github.com/kubeapps/kubeapps/pull/4068 - [x] https://github.com/kubeapps/kubeapps/pull/4060 - [x] https://github.com/kubeapps/kubeapps/pull/4067 - [x] https://github.com/kubeapps/kubeapps/pull/4085 - [x] https://github.com/kubeapps/kubeapps/pull/4059 - [x] https://github.com/kubeapps/kubeapps/pull/4057 - [x] https://github.com/kubeapps/kubeapps/pull/4074 - [x] https://github.com/kubeapps/kubeapps/pull/4139 - [x] https://github.com/kubeapps/kubeapps/pull/4165 - [x] https://github.com/kubeapps/kubeapps/pull/4192 - [x] https://github.com/kubeapps/kubeapps/pull/4222 #### Response time issues - [x] https://github.com/kubeapps/kubeapps/pull/4069 See also #4099 #### Docs/CI improvements - [x] https://github.com/kubeapps/kubeapps/pull/4127 See also #4093 #### Open questions 1. Think about error handling: aggressive early error return vs ignoring errors (ie: a single failure in a package should break/stop the whole core API call?) 2. Should we default to globalCluster if none is specified? Or just throw an error? 3. Should we implement multicluster support here?
non_process
remaining work for running kubeapps with carvel description after we got initial and limited support for managing carvel bundles in kubeapps however there is still a growing bunch of pending things before we are able to announce carvel support in kubeapps whereas trivial tasks will be tracked here marked checkbox means done others requiring further effort or discussions are split out to separated issues ui improvements check everything works fine working in different namespaces case available package in a install it in a available package in a install it in b case get installed packages in a get installed packages in every namespace ux improvements backend improvements response time issues see also docs ci improvements see also open questions think about error handling aggressive early error return vs ignoring errors ie a single failure in a package should break stop the whole core api call should we default to globalcluster if none is specified or just throw an error should we implement multicluster support here
0
14,698
17,871,064,072
IssuesEvent
2021-09-06 15:34:29
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
High memory usage with Snap geometries to layer and "exception:unknown" message
Processing Bug
Author Name: **Antoine Lafranchis** (@alafr) Original Redmine Issue: [19436](https://issues.qgis.org/issues/19436) Affected QGIS version: 3.7(master) Redmine category:digitising --- In some circumstances, the computer freezes with 99.9% memory usage (then task manager stops responding...) when using the algorithm 'Snap geometries to layer'. It apparently happens only when snapping a line layer to a point reference layer, with a small tolerance (0.00001 or lower). It happened initially on a huge shapefile with geometry errors, but then I could reproduce it with temporary layers: a line layer with a single line and a point layer with 3 points: the 1st point was precisely on a vertex, the 2nd point close to a second vertex (within the tolerance radius) and the 3rd point quite far from the line. Because I had to "brutally" shut down and restart the computer, I don't have any test file to attach and no crash log. The only details that were left were the processing history: ```processing.run("qgis:snapgeometries", {'INPUT':'Point?crs=EPSG:4326&uid={50146ea0-3a7a-4515-8241-e842f2ef1717}','REFERENCE_LAYER':'LineString?crs=EPSG:4326&uid={c6586332-d416-495e-868d-e34430dab439}','TOLERANCE':1e-5,'BEHAVIOR':0,'OUTPUT':'memory:'}) ``` --- - [test_project.qgz](https://issues.qgis.org/attachments/download/14412/test_project.qgz) (Antoine Lafranchis) - [test_memory_usage.png](https://issues.qgis.org/attachments/download/14413/test_memory_usage.png) (Antoine Lafranchis) - [QGIS.zip](https://issues.qgis.org/attachments/download/14425/QGIS.zip) (Antoine Lafranchis) - [test_snap_config.png](https://issues.qgis.org/attachments/download/14426/test_snap_config.png) (Antoine Lafranchis) - [qgis_version.png](https://issues.qgis.org/attachments/download/14427/qgis_version.png) (Antoine Lafranchis) - [qgis_snap_exception.png](https://issues.qgis.org/attachments/download/14428/qgis_snap_exception.png) (Antoine Lafranchis)
1.0
High memory usage with Snap geometries to layer and "exception:unknown" message - Author Name: **Antoine Lafranchis** (@alafr) Original Redmine Issue: [19436](https://issues.qgis.org/issues/19436) Affected QGIS version: 3.7(master) Redmine category:digitising --- In some circumstances, the computer freezes with 99.9% memory usage (then task manager stops responding...) when using the algorithm 'Snap geometries to layer'. It apparently happens only when snapping a line layer to a point reference layer, with a small tolerance (0.00001 or lower). It happened initially on a huge shapefile with geometry errors, but then I could reproduce it with temporary layers: a line layer with a single line and a point layer with 3 points: the 1st point was precisely on a vertex, the 2nd point close to a second vertex (within the tolerance radius) and the 3rd point quite far from the line. Because I had to "brutally" shut down and restart the computer, I don't have any test file to attach and no crash log. The only details that were left were the processing history: ```processing.run("qgis:snapgeometries", {'INPUT':'Point?crs=EPSG:4326&uid={50146ea0-3a7a-4515-8241-e842f2ef1717}','REFERENCE_LAYER':'LineString?crs=EPSG:4326&uid={c6586332-d416-495e-868d-e34430dab439}','TOLERANCE':1e-5,'BEHAVIOR':0,'OUTPUT':'memory:'}) ``` --- - [test_project.qgz](https://issues.qgis.org/attachments/download/14412/test_project.qgz) (Antoine Lafranchis) - [test_memory_usage.png](https://issues.qgis.org/attachments/download/14413/test_memory_usage.png) (Antoine Lafranchis) - [QGIS.zip](https://issues.qgis.org/attachments/download/14425/QGIS.zip) (Antoine Lafranchis) - [test_snap_config.png](https://issues.qgis.org/attachments/download/14426/test_snap_config.png) (Antoine Lafranchis) - [qgis_version.png](https://issues.qgis.org/attachments/download/14427/qgis_version.png) (Antoine Lafranchis) - [qgis_snap_exception.png](https://issues.qgis.org/attachments/download/14428/qgis_snap_exception.png) (Antoine Lafranchis)
process
high memory usage with snap geometries to layer and exception unknown message author name antoine lafranchis alafr original redmine issue affected qgis version master redmine category digitising in some circumstances the computer freezes with memory usage then task manager stops responding when using the algorithm snap geometries to layer it apparently happens only when snapping a line layer to a point reference layer with a small tolerance or lower it happened initially on a huge shapefile with geometry errors but then i could reproduce it with temporary layers a line layer with a single line and a point layer with points the point was precisely on a vertex the point close to a second vertex within the tolerance radius and the point quite far from the line because i had to brutally shut down and restart the computer i don t have any test file to attach and no crash log the only details that were left were the processing history processing run qgis snapgeometries input point crs epsg uid reference layer linestring crs epsg uid tolerance behavior output memory antoine lafranchis antoine lafranchis antoine lafranchis antoine lafranchis antoine lafranchis antoine lafranchis
1
26,214
11,275,361,407
IssuesEvent
2020-01-14 20:36:00
HewlettPackard/POSH-HPOneView
https://api.github.com/repos/HewlettPackard/POSH-HPOneView
closed
New-HPOVServerProfile -ServerProfileTemplate
4.10 Question Security (General)
### Expected Behavior Create new server profile from server profile template. ### Actual Behavior ```powershell New-HPOVServerProfile : Cannot determine the scope for a created resource. The user session is authorized to create the resource in multiple scopes. Retry the operation, specifying the desired resource scope. At line:1 char:1 + New-HPOVServerProfile -Name $ovSPName -ServerProfileTemplate $ovSPT - ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidOperation: (Send-HPOVRequest:String) [New-HPOVServerProfile], InvalidOperationException + FullyQualifiedErrorId : InvalidOperation,New-HPOVServerProfile ``` ### Steps to reproduce ```powershell $ovSPName = 'sp001' $ovServer = Get-HPOVServer -Name 's001' $ovSPT = Get-HPOVServerProfileTemplate -Name 'spt01' New-HPOVServerProfile -Name $ovSPName -ServerProfileTemplate $ovSPT ``` ### Version Information LibraryVersion 4.10.1889.2173 ApplianceVersion 4.10.04.370820.00 PSVersion 5.1.14393.1613 PSEdition Desktop PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...} BuildVersion 10.0.14393.1613 CLRVersion 4.0.30319.42000 WSManStackVersion 3.0 PSRemotingProtocolVersion 2.3 SerializationVersion 1.1.0.1
True
New-HPOVServerProfile -ServerProfileTemplate - ### Expected Behavior Create new server profile from server profile template. ### Actual Behavior ```powershell New-HPOVServerProfile : Cannot determine the scope for a created resource. The user session is authorized to create the resource in multiple scopes. Retry the operation, specifying the desired resource scope. At line:1 char:1 + New-HPOVServerProfile -Name $ovSPName -ServerProfileTemplate $ovSPT - ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidOperation: (Send-HPOVRequest:String) [New-HPOVServerProfile], InvalidOperationException + FullyQualifiedErrorId : InvalidOperation,New-HPOVServerProfile ``` ### Steps to reproduce ```powershell $ovSPName = 'sp001' $ovServer = Get-HPOVServer -Name 's001' $ovSPT = Get-HPOVServerProfileTemplate -Name 'spt01' New-HPOVServerProfile -Name $ovSPName -ServerProfileTemplate $ovSPT ``` ### Version Information LibraryVersion 4.10.1889.2173 ApplianceVersion 4.10.04.370820.00 PSVersion 5.1.14393.1613 PSEdition Desktop PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...} BuildVersion 10.0.14393.1613 CLRVersion 4.0.30319.42000 WSManStackVersion 3.0 PSRemotingProtocolVersion 2.3 SerializationVersion 1.1.0.1
non_process
new hpovserverprofile serverprofiletemplate expected behavior create new server profile from server profile template actual behavior powershell new hpovserverprofile cannot determine the scope for a created resource the user session is authorized to create the resource in multiple scopes retry the operation specifying the desired resource scope at line char new hpovserverprofile name ovspname serverprofiletemplate ovspt categoryinfo invalidoperation send hpovrequest string invalidoperationexception fullyqualifiederrorid invalidoperation new hpovserverprofile steps to reproduce powershell ovspname ovserver get hpovserver name ovspt get hpovserverprofiletemplate name new hpovserverprofile name ovspname serverprofiletemplate ovspt version information libraryversion applianceversion psversion psedition desktop pscompatibleversions buildversion clrversion wsmanstackversion psremotingprotocolversion serializationversion
0
9,582
12,535,311,306
IssuesEvent
2020-06-04 21:07:11
googleapis/python-bigquery-storage
https://api.github.com/repos/googleapis/python-bigquery-storage
closed
Promoting google-cloud-bigquery-storage to a GA release
api: bigquerystorage type: process
Currently at 0.8.x, intent is to promote major version to 1.x. ## Satisfied - [x] backend is GA as of [March 02, 2020](https://cloud.google.com/bigquery/docs/release-notes#March_02_2020) - [x] more than 28 days elapsed since last changes to surface contract ## Open for discussion - This API supports multiple data serialization methods with convenience methods to simplify user consumption (e.g. dataframe support). That mechanism continues to evolve and uses dependencies that are pre 1.x, but the API contract itself is GA and stable. Should this block a 1.x release? - This library is still publishing with py2.7 support. Should we drop py2.7 as part of the GA promotion and avoid having to publish another major release, or handle the py2.7 as its own event (and another major release)?
1.0
Promoting google-cloud-bigquery-storage to a GA release - Currently at 0.8.x, intent is to promote major version to 1.x. ## Satisfied - [x] backend is GA as of [March 02, 2020](https://cloud.google.com/bigquery/docs/release-notes#March_02_2020) - [x] more than 28 days elapsed since last changes to surface contract ## Open for discussion - This API supports multiple data serialization methods with convenience methods to simplify user consumption (e.g. dataframe support). That mechanism continues to evolve and uses dependencies that are pre 1.x, but the API contract itself is GA and stable. Should this block a 1.x release? - This library is still publishing with py2.7 support. Should we drop py2.7 as part of the GA promotion and avoid having to publish another major release, or handle the py2.7 as its own event (and another major release)?
process
promoting google cloud bigquery storage to a ga release currently at x intent is to promote major version to x satisfied backend is ga as of more than days elapsed since last changes to surface contract open for discussion this api supports multiple data serialization methods with convenience methods to simplify user consumption e g dataframe support that mechanism continues to evolve and uses dependencies that are pre x but the api contract itself is ga and stable should this block a x release this library is still publishing with support should we drop as part of the ga promotion and avoid having to publish another major release or handle the as its own event and another major release
1
17,871
12,676,405,616
IssuesEvent
2020-06-19 05:05:14
Azure/azure-cli
https://api.github.com/repos/Azure/azure-cli
closed
Flattening related issues
Infrastructure OKR3.4 Candidate feature-request
Tracking issue for items that need to be resolved prior to release that are related to the change in azure-cli packaging: - [x] Update internal test logic used by CI to detect tests - [x] Restore the `linter_exclusion.yml` files that were previously in certain modules. - [x] Fix the internal linter (azdev already patched) to locate the relocated modules. - [ ] The script to verify the doc source map likely does not work and always passes even though it can't detect the modules. - [x] Ensure CODEOWNERS file is compatible with the new package structure
1.0
Flattening related issues - Tracking issue for items that need to be resolved prior to release that are related to the change in azure-cli packaging: - [x] Update internal test logic used by CI to detect tests - [x] Restore the `linter_exclusion.yml` files that were previously in certain modules. - [x] Fix the internal linter (azdev already patched) to locate the relocated modules. - [ ] The script to verify the doc source map likely does not work and always passes even though it can't detect the modules. - [x] Ensure CODEOWNERS file is compatible with the new package structure
non_process
flattening related issues tracking issue for items that need to be resolved prior to release that are related to the change in azure cli packaging update internal test logic used by ci to detect tests restore the linter exclusion yml files that were previously in certain modules fix the internal linter azdev already patched to locate the relocated modules the script to verify the doc source map likely does not work and always passes even though it can t detect the modules ensure codeowners file is compatible with the new package structure
0
17,747
23,659,559,236
IssuesEvent
2022-08-26 14:22:03
MPMG-DCC-UFMG/C01
https://api.github.com/repos/MPMG-DCC-UFMG/C01
closed
Bug in the steps mechanism interface
[1] Bug [2] Low Priority [0] Development [3] Dynamic Processing
## Expected Behavior The step manipulation buttons (choose step, move, delete, etc.) should work without interfering with the other steps. ## Current Behavior A bug was found in the current version in which, when deleting (and sometimes modifying) a step outside a group [open in new tab, close tab], the steps inside the group get modified. The most common behavior is that the step simply disappears, even from the list of selectable steps (see the screenshots below). ## Steps to reproduce the error - Import [this](https://drive.google.com/file/d/1bQU9z1gjOHWdB1-J1G2FnTOMGSM0K8g0/view?usp=sharing) configuration - Go to the dynamic collection section and modify the steps inside the For Each structure (try deleting the screenshot step) ## Screenshots (if necessary) **Before removing the screenshot step** ![before](https://user-images.githubusercontent.com/28073797/139126861-2196e4d7-193b-4c3a-b14e-e378f99e4e49.png) **After removing** ![after](https://user-images.githubusercontent.com/28073797/139126871-59d04e10-b913-4c75-9775-05c5552fa8c5.png)
1.0
Bug in the steps mechanism interface - ## Expected Behavior The step manipulation buttons (choose step, move, delete, etc.) should work without interfering with the other steps. ## Current Behavior A bug was found in the current version in which, when deleting (and sometimes modifying) a step outside a group [open in new tab, close tab], the steps inside the group get modified. The most common behavior is that the step simply disappears, even from the list of selectable steps (see the screenshots below). ## Steps to reproduce the error - Import [this](https://drive.google.com/file/d/1bQU9z1gjOHWdB1-J1G2FnTOMGSM0K8g0/view?usp=sharing) configuration - Go to the dynamic collection section and modify the steps inside the For Each structure (try deleting the screenshot step) ## Screenshots (if necessary) **Before removing the screenshot step** ![before](https://user-images.githubusercontent.com/28073797/139126861-2196e4d7-193b-4c3a-b14e-e378f99e4e49.png) **After removing** ![after](https://user-images.githubusercontent.com/28073797/139126871-59d04e10-b913-4c75-9775-05c5552fa8c5.png)
process
bug in the steps mechanism interface expected behavior the step manipulation buttons choose step move delete etc should work without interfering with the other steps current behavior a bug was found in the current version in which when deleting and sometimes modifying a step outside a group the steps inside the group get modified the most common behavior is that the step simply disappears even from the list of selectable steps see the screenshots below steps to reproduce the error import configuration go to the dynamic collection section and modify the steps inside the for each structure try deleting the screenshot step screenshots if necessary before removing the screenshot step after removing
1
300
2,733,644,257
IssuesEvent
2015-04-17 15:06:41
hammerlab/pileup.js
https://api.github.com/repos/hammerlab/pileup.js
opened
Use chai as promised
process
See https://github.com/domenic/chai-as-promised/ This could greatly simplify the async tests by changing the current async code to something like: expect(bamFile.readAll()).to.eventually.have.length(10);
1.0
Use chai as promised - See https://github.com/domenic/chai-as-promised/ This could greatly simplify the async tests by changing the current async code to something like: expect(bamFile.readAll()).to.eventually.have.length(10);
process
use chai as promised see this could greatly simplify the async tests by changing the current async code to something like expect bamfile readall to eventually have length
1